Fields: paper_id (string, 43 chars) · summaries (sequence) · abstractText (string, 98–40k chars) · authors (list) · references (list) · sections (list) · year (int64, 1980–2020) · title (string, 4–183 chars)
SP:26734ce3b17851304312b1211ff74e054046d1d3
[ "This paper presents a machine reading comprehension dataset called ReClor. It is different from existing datasets in that ReClor targets logical reasoning. The authors identified biased data points and separated the testing dataset into biased and non-biased sets. Experimental results show that state-of-the-art models such as XLNet and RoBERTa struggle on the non-biased HARD set with poor performance near that of random guess.", "This paper presents a new reading comprehension dataset for logical reasoning. It is a multi-choice problem where questions are mainly from GMAT and LSAT, containing 4139 data points. The analyses of the data demonstrate that questions require diverse types of reasoning such as finding necessary/sufficient assumptions, whether statements strengthen/weaken the argument or explain/resolve the situation. The paper includes comprehensive experiments with baselines to identify bias in the dataset, where the answer-options-only model achieves near half (random is 25%). Based on this result, the test set is split into the easy and hard set, which will help better evaluation of the future models. The paper also reports the numbers on the split data using competitive baselines where the models achieve low performance on the hard set." ]
Recent powerful pre-trained language models have achieved remarkable performance on most of the popular datasets for reading comprehension. It is time to introduce more challenging datasets to push the development of this field towards more comprehensive reasoning over text. In this paper, we introduce a new Reading Comprehension dataset requiring logical reasoning (ReClor) extracted from standardized graduate admission examinations. As earlier studies suggest, human-annotated datasets usually contain biases, which are often exploited by models to achieve high accuracy without truly understanding the text. In order to comprehensively evaluate the logical reasoning ability of models on ReClor, we propose to identify biased data points and separate them into an EASY set, with the rest forming a HARD set. Empirical results show that state-of-the-art models have an outstanding ability to capture the biases contained in the dataset, achieving high accuracy on the EASY set. However, they struggle on the HARD set, with performance close to that of random guessing, indicating that more research is needed to substantially enhance the logical reasoning ability of current models.1
[ { "affiliations": [], "name": "Weihao Yu" }, { "affiliations": [], "name": "Zihang Jiang" }, { "affiliations": [], "name": "Yanfei Dong" }, { "affiliations": [], "name": "Jiashi Feng" } ]
[ { "authors": [ "Khan Academy" ], "title": "https://www.khanacademy.org/test-prep/lsat/lsat-lessons/ logical-reasoning/a/logical-reasoning--article--question-typecatalog, 2019", "venue": "Accessed Sept", "year": 2019 }, { "authors": [ "Johan Bos", "Katja Markert" ], "title": "Recognising textual entailment with logical inference", "venue": "In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing,", "year": 2005 }, { "authors": [ "Samuel R Bowman", "Gabor Angeli", "Christopher Potts", "Christopher D Manning" ], "title": "A large annotated corpus for learning natural language inference", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "Michael Bugert", "Yevgeniy Puzikov", "Andreas Rücklé", "Judith Eckle-Kohler", "Teresa Martin", "Eugenio Martı́nez-Cámara", "Daniil Sorokin", "Maxime Peyrard", "Iryna Gurevych" ], "title": "Exploring data generation methods for the story cloze test", "venue": "Lsdsem", "year": 2017 }, { "authors": [ "Zheng Cai", "Lifu Tu", "Kevin Gimpel" ], "title": "Pay attention to the ending: Strong neural baselines for the roc story cloze task", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "year": 2017 }, { "authors": [ "Peter Clark", "Isaac Cowhey", "Oren Etzioni", "Tushar Khot", "Ashish Sabharwal", "Carissa Schoenick", "Oyvind Tafjord" ], "title": "Think you have solved question answering? try arc, the ai2 reasoning challenge", "venue": "arXiv preprint arXiv:1803.05457,", "year": 2018 }, { "authors": [ "Cleo Condoravdi", "Dick Crouch", "Valeria De Paiva", "Reinhard Stolle", "Daniel G Bobrow" ], "title": "Entailment, intensionality and text understanding", "venue": "In Proceedings of the HLT-NAACL 2003 workshop on Text meaning,", "year": 2003 }, { "authors": [ "Ido Dagan", "Oren Glickman", "Bernardo Magnini" ], "title": "The pascal recognising textual entailment challenge", "venue": "In Machine Learning Challenges Workshop,", "year": 2005 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "Dheeru Dua", "Yizhong Wang", "Pradeep Dasigi", "Gabriel Stanovsky", "Sameer Singh", "Matt Gardner" ], "title": "Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs", "venue": "In Proceedings of NAACL-HLT,", "year": 2019 }, { "authors": [ "Yaroslav Fyodorov", "Yoad Winter", "Nissim Francez" ], "title": "A natural logic inference system", "venue": "In Proceedings of the 2nd Workshop on Inference in Computational Semantics (ICoS-2). 
Citeseer,", "year": 2000 }, { "authors": [ "Suchin Gururangan", "Swabha Swayamdipta", "Omer Levy", "Roy Schwartz", "Samuel R Bowman", "Noah A Smith" ], "title": "Annotation artifacts in natural language inference data", "venue": "arXiv preprint arXiv:1803.02324,", "year": 2018 }, { "authors": [ "Ivan Habernal", "Henning Wachsmuth", "Iryna Gurevych", "Benno Stein" ], "title": "The argument reasoning comprehension task: Identification and reconstruction of implicit warrants", "venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2018 }, { "authors": [ "Jeremy Howard", "Sebastian Ruder" ], "title": "Universal language model fine-tuning for text classification", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Lifu Huang", "Ronan Le Bras", "Chandra Bhagavatula", "Yejin Choi" ], "title": "Cosmos qa: Machine reading comprehension with contextual commonsense reasoning", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Di Jin", "Shuyang Gao", "Jiun-Yu Kao", "Tagyoung Chung", "Dilek Hakkani-tur" ], "title": "Mmm: Multi-stage multi-task learning for multi-choice reading comprehension", "venue": null, "year": 1910 }, { "authors": [ "Armand Joulin", "Edouard Grave", "Piotr Bojanowski", "Tomas Mikolov" ], "title": "Bag of tricks for efficient text classification", "venue": "In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume", "year": 2017 }, { "authors": [ "Daniel Khashabi", "Snigdha Chaturvedi", "Michael Roth", "Shyam Upadhyay", "Dan Roth" ], "title": "Looking beyond the surface: A challenge set for reading comprehension over multiple sentences", "venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2018 }, { "authors": [ "Tomáš Kočiskỳ", "Jonathan Schwarz", "Phil Blunsom", "Chris Dyer", "Karl Moritz Hermann", "Gábor Melis", "Edward Grefenstette" ], "title": "The narrativeqa reading comprehension challenge", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Guokun Lai", "Qizhe Xie", "Hanxiao Liu", "Yiming Yang", "Eduard Hovy" ], "title": "Race: Large-scale reading comprehension dataset from examinations", "venue": "arXiv preprint arXiv:1704.04683,", "year": 2017 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Bill MacCartney", "Christopher D Manning" ], "title": "An extended model of natural logic", "venue": "In Proceedings of the eighth international conference on computational semantics,", "year": 2009 }, { "authors": [ "Todor Mihaylov", "Peter Clark", "Tushar Khot", "Ashish Sabharwal" ], "title": "Can a suit of armor conduct electricity? 
a new dataset for open book question answering", "venue": "arXiv preprint arXiv:1809.02789,", "year": 2018 }, { "authors": [ "Sewon Min", "Minjoon Seo", "Hannaneh Hajishirzi" ], "title": "Question answering through transfer learning from large fine-grained supervision data", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers),", "year": 2017 }, { "authors": [ "Timothy Niven", "Hung-Yu Kao" ], "title": "Probing neural network comprehension of natural language arguments", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Adam Poliak", "Jason Naradowsky", "Aparajita Haldar", "Rachel Rudinger", "Benjamin Van Durme" ], "title": "Hypothesis only baselines in natural language inference", "venue": "In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics,", "year": 2018 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training", "venue": "URL https://s3-us-west-2. amazonaws. com/openaiassets/researchcovers/languageunsupervised/language understanding paper", "year": 2018 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "Squad: 100,000+ questions for machine comprehension of text", "venue": "arXiv preprint arXiv:1606.05250,", "year": 2016 }, { "authors": [ "Pranav Rajpurkar", "Robin Jia", "Percy Liang" ], "title": "Know what you don’t know: Unanswerable questions for squad", "venue": "arXiv preprint arXiv:1806.03822,", "year": 2018 }, { "authors": [ "Matthew Richardson", "Christopher JC Burges", "Erin Renshaw" ], "title": "Mctest: A challenge dataset for the open-domain machine comprehension of text", "venue": "In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing,", "year": 2013 }, { "authors": [ "Alvaro Rodrigo", "Anselmo Penas", "Yusuke Miyao", "Eduard H Hovy", "Noriko Kando" ], "title": "Overview of clef qa entrance exams task", "venue": "In CLEF (Working Notes),", "year": 2015 }, { "authors": [ "Roy Schwartz", "Maarten Sap", "Ioannis Konstas", "Leila Zilles", "Yejin Choi", "Noah A Smith" ], "title": "Story cloze task: Uw nlp system", "venue": "In Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics,", "year": 2017 }, { "authors": [ "Hideyuki Shibuki", "Kotaro Sakamoto", "Yoshinobu Kano", "Teruko Mitamura", "Madoka Ishioroshi", "Kelly Y Itakura", "Di Wang", "Tatsunori Mori", "Noriko Kando" ], "title": "Overview of the ntcir-11 qa-lab", "venue": "Ntcir,", "year": 2014 }, { "authors": [ "Saku Sugawara", "Akiko Aizawa" ], "title": "An analysis of prerequisite skills for reading comprehension", "venue": "In Proceedings of the Workshop on Uphill Battles in Language Processing: Scaling Early Achievements to Robust Methods,", "year": 2016 }, { "authors": [ "Saku Sugawara", "Kentaro Inui", "Satoshi Sekine", 
"Akiko Aizawa" ], "title": "What makes reading comprehension questions easier", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Kai Sun", "Dian Yu", "Jianshu Chen", "Dong Yu", "Yejin Choi", "Claire Cardie" ], "title": "Dream: A challenge data set and models for dialogue-based reading comprehension", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel Bowman" ], "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "venue": "In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP,", "year": 2018 }, { "authors": [ "Johannes Welbl", "Pontus Stenetorp", "Sebastian Riedel" ], "title": "Constructing datasets for multi-hop reading comprehension across documents", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Adina Williams", "Nikita Nangia", "Samuel Bowman" ], "title": "A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Yonghui Wu", "Mike Schuster", "Zhifeng Chen", "Quoc V Le", "Mohammad Norouzi", "Wolfgang Macherey", "Maxim Krikun", "Yuan Cao", "Qin Gao", "Klaus Macherey" ], "title": "Google’s neural machine translation system: Bridging the gap between human and machine translation", "venue": "arXiv preprint arXiv:1609.08144,", "year": 2016 }, { "authors": [ "Deshraj Yadav", "Rishabh Jain", "Harsh Agrawal", "Prithvijit Chattopadhyay", "Taranjeet Singh", "Akash Jain", "Shiv Baran Singh", "Stefan Lee", "Dhruv Batra" ], "title": "Evalai: Towards better evaluation systems for ai agents", "venue": null, "year": 1902 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Ruslan Salakhutdinov", "Quoc V Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": null, "year": 1906 }, { "authors": [ "Rowan Zellers", "Ari Holtzman", "Yonatan Bisk", "Ali Farhadi", "Yejin Choi" ], "title": "Hellaswag: Can a machine really finish your sentence", "venue": null, "year": 1905 }, { "authors": [ "Sheng Zhang", "Xiaodong Liu", "Jingjing Liu", "Jianfeng Gao", "Kevin Duh", "Benjamin Van Durme" ], "title": "Record: Bridging the gap between human and machine commonsense reading comprehension", "venue": "arXiv preprint arXiv:1810.12885,", "year": 2018 }, { "authors": [ "Yukun Zhu", "Ryan Kiros", "Rich Zemel", "Ruslan Salakhutdinov", "Raquel Urtasun", "Antonio Torralba", "Sanja Fidler" ], "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "fastText. 
FastText (Joulin" ], "title": "2017) models sentences as a bag of n-grams, and tries to predict the probability of each answer being correct independently. We choose the answer with the highest score as the prediction for the multiple-choice setting", "venue": null, "year": 2017 }, { "authors": [ "BERT. BERT (Devlin" ], "title": "2019) is also a transformer (Vaswani et al., 2017) based model which is trained by using BooksCorpus (Zhu et al., 2015) and English Wikipedia in two unsupervised tasks, i.e., Masked LM (MLM) and Next Sentence Prediction (NSP). During fine-tuning, the final hidden vector corresponding to the first input token ([CLS]) is used as the aggregate representation followed by two extra fully connected layers to compute the score", "venue": null, "year": 2015 }, { "authors": [ "RoBERTa. RoBERTa (Liu" ], "title": "2019) is an improved pre-training procedure of BERT with training the model longer, with bigger batches over more data and removing NSP objective etc.. Extra two fully connected layers are added to transform the final hidden vector of the first input token (<s> to the score", "venue": null, "year": 2019 }, { "authors": [ "GPT Radford" ], "title": "2018) start Context delimiter Question || Option", "venue": null, "year": 2018 }, { "authors": [ "Radford" ], "title": "2019) start Context delimiter Question || Option", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Machine reading comprehension (MRC) is a fundamental task in Natural Language Processing, which requires models to understand a body of text and answer a particular question related to the context. With success of unsupervised representation learning in NLP, language pre-training based models such as GPT-2 (Radford et al., 2019), BERT (Devlin et al., 2019), XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019) have achieved nearly saturated performance on most of the popular MRC datasets (Rajpurkar et al., 2016; Lai et al., 2017; Rajpurkar et al., 2018; Wang et al., 2018). It is time to challenge state-of-the-art models with more difficult reading comprehension tasks and move a step forward to more comprehensive analysis and reasoning over text (Dua et al., 2019).\nIn natural language understanding, logical reasoning is an important ability to examine, analyze and critically evaluate arguments as they occur in ordinary language according to the definition from Law School Admission Council (2019a). It is a significant component of human intelligence and is essential in negotiation, debate and writing etc. However, existing reading comprehension datasets have none or merely a small amount of data requiring logical reasoning, e.g., 0% in MCTest dataset (Richardson et al., 2013) and 1.2% in SQuAD (Rajpurkar et al., 2016) according to Sugawara & Aizawa (2016). One related task is natural language inference, which requires models to label the logical relationships of sentence pairs. However, this task only considers three types of simple logical relationships and only needs reasoning at sentence-level. To push the development of models in logical reasoning from simple logical relationship classification to multiple complicated logical reasoning and from sentence-level to passage-level, it is necessary to introduce a reading comprehension dataset targeting logical reasoning.\nA typical example of logical reasoning questions is shown in Table 1. Similar to the format of multiple-choice reading comprehension datasets (Richardson et al., 2013; Lai et al., 2017), it contains a context, a question and four options with only one right answer. To answer the question\n∗Equal contribution. 1Project page: http://whyu.me/reclor/\nin this example, readers need to identify the logical connections between the lines to pinpoint the conflict, then understand each of the options and select an option that solves the conflict. Human minds need extensive training and practice to get used to complex reasoning, and it will take immense efforts for crowdsourcing workers to design such logical reasoning questions. Inspired by the datasets extracted from standardized examinations (Lai et al., 2017; Clark et al., 2018), we build a dataset by selecting such logical reasoning questions from standardized exams such as GMAT 2 and LSAT 3. We finally collect 6,138 pieces of logical reasoning questions, which constitute a Reading Comprehension dataset requiring logical reasoning (ReClor).\nHuman-annotated datasets usually contain biases (Schwartz et al., 2017; Cai et al., 2017; Bugert et al., 2017; Poliak et al., 2018; Gururangan et al., 2018; Zellers et al., 2019), which are often exploited by neural network models as shortcut solutions to achieve high testing accuracy. For data points whose options can be selected correctly without knowing the contexts and questions, we classify them as biased ones. 
In order to fully assess the logical reasoning ability of the models, we propose to identify the biased data points and group them into an EASY set, putting the rest into a HARD set. Based on our experiments on these separate sets, we find that even the state-of-the-art models can only perform well on the EASY set and struggle on the HARD set, as shown in Figure 1. This phenomenon shows that current models can capture the biases in the dataset well but lack the ability to understand the text and reason based on connections between the lines. On the other hand, human beings perform similarly on both the EASY and HARD sets. It is thus observed that there is still a long way to go to equip models with true logical reasoning ability.\nThe contributions of our paper are two-fold. First, we introduce ReClor, a new reading comprehension dataset requiring logical reasoning. We use option-only-input baselines trained with different random seeds to identify the data points with biases in the testing set, and group them into an EASY set, with the rest as a HARD set, to facilitate comprehensive evaluation. Second, we evaluate several state-of-the-art models on ReClor and find that these pre-trained language models can perform well on the EASY set but struggle on the HARD set. This indicates that although current models are good at exploiting biases in the dataset, they are far from capable of performing real logical reasoning yet." }, { "heading": "2 RELATED WORK", "text": "Reading Comprehension Datasets. A variety of reading comprehension datasets have been introduced to promote the development of this field. MCTest (Richardson et al., 2013) is a dataset with 2,000 multiple-choice reading comprehension questions about fictional stories, in a format similar to ReClor. Rajpurkar et al. (2016) proposed the SQuAD dataset, which contains 107,785 question-answer pairs on 536 Wikipedia articles. The authors manually labeled 192 examples of the dataset and found that the examples mainly require reasoning over lexical or syntactic variation. In an analysis of the above-mentioned datasets, Sugawara & Aizawa (2016) found that no questions in the MCTest dataset (Richardson et al., 2013) require logical reasoning and only 1.2% do in the SQuAD dataset (Rajpurkar et al., 2016). Lai et al. (2017) introduced the RACE dataset by collecting English exams for middle and high school Chinese students aged between 12 and 18. They hired crowd workers on Amazon Mechanical Turk to label the reasoning type of 500 samples in the dataset and showed that around 70% of the samples fall into the categories of word matching, paraphrasing or single-sentence reasoning. To encourage progress on deeper comprehension of language, more reading comprehension datasets requiring more complicated reasoning types have been introduced, such as iterative reasoning about the narrative of a story (Kočiskỳ et al., 2018), multi-hop reasoning across multiple sentences (Khashabi et al., 2018) and multiple documents (Welbl et al., 2018), commonsense knowledge reasoning (Mihaylov et al., 2018; Zhang et al., 2018; Huang et al., 2019) and numerical discrete reasoning over paragraphs (Dua et al., 2019). However, to the best of our knowledge, although there are some datasets targeting logical reasoning in other NLP tasks, mentioned in the next section, there is no dataset for evaluating logical reasoning in the reading comprehension task.\n2https://en.wikipedia.org/wiki/Graduate_Management_Admission_Test 3https://en.wikipedia.org/wiki/Law_School_Admission_Test
This work introduces a new dataset to fill this gap.\nLogical Reasoning in NLP. Several tasks and datasets have been introduced to investigate logical reasoning in NLP. The task of natural language inference, also known as recognizing textual entailment (Fyodorov et al., 2000; Condoravdi et al., 2003; Bos & Markert, 2005; Dagan et al., 2005; MacCartney & Manning, 2009), requires models to take a pair of sentences as input and classify their relationship type, i.e., ENTAILMENT, NEUTRAL, or CONTRADICTION. The SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018) datasets were proposed for this task. However, this task only focuses on sentence-level logical relationship reasoning and the relationships are limited to only a few types. Another task related to logical reasoning in NLP is the argument reasoning comprehension task introduced by Habernal et al. (2018), together with an accompanying dataset. Given an argument with a claim and a premise, this task aims to select the correct implicit warrant from two options. Although the task involves passage-level logical reasoning, it is limited to only one logical reasoning type, i.e., identifying warrants. ReClor and the proposed task integrate various logical reasoning types into reading comprehension, with the aim of promoting the development of logical reasoning in models not only from the sentence level to the passage level, but also from simple logical reasoning types to complicated and diverse ones.\nDatasets from Examinations. There have been several NLP datasets extracted from human standardized examinations, such as the RACE dataset (Lai et al., 2017) mentioned above. Besides, NTCIR QA Lab (Shibuki et al., 2014) offers comparative evaluation for solving real-world university entrance exam questions; the dataset of the CLEF QA Entrance Exams Task (Rodrigo et al., 2015) is extracted from standardized English examinations for university admission in Japan; the ARC dataset (Clark et al., 2018) consists of 7,787 science questions targeting student grade level, ranging from 3rd grade to 9th; and the dialogue-based multiple-choice reading comprehension dataset DREAM (Sun et al., 2019) contains 10,197 questions for 6,444 multi-turn multi-party dialogues from English language exams that are designed by human experts to assess the comprehension level of Chinese learners of English. Compared with these datasets, ReClor distinguishes itself by targeting logical reasoning." }, { "heading": "3 RECLOR DATA COLLECTION AND ANALYSIS", "text": "" }, { "heading": "3.1 DATA COLLECTION", "text": "The format of data in ReClor is similar to that of other multiple-choice reading comprehension datasets (Richardson et al., 2013; Lai et al., 2017), where a data point contains a context, a question and four answer options, among which only one option is right/most suitable. We collect reading comprehension problems that require complicated logical reasoning. However, producing such data requires the ability to perform complex logical reasoning, which makes it hard for crowdsourcing workers to generate such logical questions. Fortunately, we find that the reading comprehension problems in some standardized tests, such as the GMAT and LSAT, are highly in line with our expectations.\nWe construct a dataset containing 6,138 logical reasoning questions sourced from open websites and books. In the original problems, there are five answer options, of which only one is right. 
To comply with fair use law4, we shuffle the order of the answer options and randomly delete one of the wrong options for each data point, which results in four options with one right option and three wrong options. Furthermore, similar to the ImageNet dataset5, ReClor is available for non-commercial research purposes only. We are also hosting a public evaluation server on EvalAI (Yadav et al., 2019) to benchmark progress on ReClor." }, { "heading": "3.2 DATA ANALYSIS", "text": "As mentioned above, we collect 6,138 data points, of which 91.22% are from actual GMAT and LSAT exams while the others are from high-quality practice exams. They are divided into training, validation and testing sets with 4,638, 500 and 1,000 data points, respectively. The overall statistics of ReClor and a comparison with other similar multiple-choice MRC datasets are summarized in Table 2. As shown, ReClor is of comparable size and has a relatively large vocabulary. Compared with RACE, the contexts of ReClor are much shorter. In RACE, there are many sentences in the context that are redundant for answering a question. In ReClor, however, every sentence in the context passages is important, which makes this dataset focus on evaluating the logical reasoning ability of models rather than the ability to extract relevant information from a long context. The answer options of ReClor are the longest among these datasets. We analyze and manually annotate the types of questions in the testing set and group them into 17 categories, whose percentages and descriptions are shown in Table 3. The percentages of different question types reflect those in the logical reasoning modules of the GMAT and LSAT. Some examples of different types of logical reasoning are listed in Figure 2, and more examples are listed in Appendix C. Taking two examples, we further show how humans would solve such questions in Table 4, illustrating the challenge of ReClor." }, { "heading": "3.3 DATA BIASES IN THE DATASET", "text": "The dataset is collected from exams devised by experts in logical reasoning, which means it is annotated by humans and may contain biases. Recent studies have shown that models can exploit the biases in a natural language understanding dataset to perform well on the task without truly understanding the text (Schwartz et al., 2017; Cai et al., 2017; Bugert et al., 2017; Poliak et al., 2018; Gururangan et al., 2018; Zellers et al., 2019). It is therefore necessary to analyze such data biases to help evaluate models. In the ReClor dataset, the common context and question are shared across the four options of each data point, so we focus on analyzing the differences in lexical choice and sentence length between the right and wrong options, without contexts and questions. We first investigate biases in lexical choice. We lowercase the options and then use the WordPiece tokenization (Wu et al., 2016) of BERTBASE (Devlin et al., 2019) to get the tokens. Similar to Poliak et al. (2018), for the tokens in the options, we analyze the conditional probability of label l ∈ {right, wrong} given token t, p(l | t) = count(t, l)/count(t). The larger this correlation score is for a particular token, the more likely it is to contribute to the prediction of the corresponding option. Table 5 reports the highest-scoring tokens in the training set among those occurring at least twenty times, since many of the absolute highest-scoring tokens are of low frequency.\n4https://www.copyright.gov/fair-use/more-info.html 5http://image-net.org/download-faq
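As an illustration, the token-level bias score above reduces to a few lines of Python. This is a minimal sketch assuming a caller-supplied tokenizer (e.g., the BERT WordPiece tokenizer); the function and variable names are illustrative, not taken from the released code:

```python
from collections import Counter

def token_label_scores(options, labels, tokenize, min_count=20):
    """Compute p(l | t) = count(t, l) / count(t) for every token t and
    label l in {"right", "wrong"}, keeping only tokens that occur at
    least `min_count` times, as in the analysis behind Table 5."""
    count_t, count_tl = Counter(), Counter()
    for text, label in zip(options, labels):
        for tok in tokenize(text.lower()):
            count_t[tok] += 1
            count_tl[(tok, label)] += 1
    scores = {(tok, label): count_tl[(tok, label)] / count_t[tok]
              for (tok, label) in count_tl if count_t[tok] >= min_count}
    # Sort so the most label-correlated (i.e., most biased) tokens come first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```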
We further analyze the lengths of right and wrong options (Gururangan et al., 2018) in the training set. We notice a slight difference in the distribution of sentence length between right and wrong options. The average length of wrong options is around 21.82, whereas right options are generally longer, with an average length of 23.06." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 BASELINE MODELS", "text": "Many neural network based models such as FastText (Joulin et al., 2017), Bi-LSTM, GPT (Radford et al., 2018), GPT-2 (Radford et al., 2019), BERT (Devlin et al., 2019), XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019) have achieved impressive results in various NLP tasks. We challenge these neural models with ReClor to investigate how well they can perform. Details of the baseline models and implementation are given in Appendices A and B." }, { "heading": "4.2 EXPERIMENTS TO FIND BIASED DATA", "text": "As mentioned earlier, biases prevalently exist in human-annotated datasets (Poliak et al., 2018; Gururangan et al., 2018; Zellers et al., 2019; Niven & Kao, 2019) and are often exploited by models to perform well without truly understanding the text. Therefore, it is necessary to identify the biased data points in ReClor in order to evaluate models in a more comprehensive manner (Sugawara et al., 2018). To this end, we feed the five strong baseline models (GPT, GPT-2, BERTBASE, XLNetBASE and RoBERTaBASE) with ONLY THE ANSWER OPTIONS for each problem. In other words, we purposely remove the context and question from the inputs. In this way, we are able to identify those problems that can be answered correctly by merely exploiting the biases in the answer options, without knowing the relevant context and question. However, this task is a multiple-choice question with 4 possible options, so even a chance baseline has a 25% probability of getting it right. To eliminate the effect of random guessing, we train each model with four different random seeds and pick the data points that are predicted correctly in all four cases to form the EASY set. Data points that the models predict correctly at random are then nearly eliminated, since any data point has a probability of only (25%)^4 ≈ 0.39% of being guessed correctly four times in a row. We then take the union of the sets of data points that are consistently predicted correctly by each model, because, intuitively, different models may learn different biases of the dataset. The above process is formulated as the following expression,\nC_EASY = (C_GPT^seed1 ∩ C_GPT^seed2 ∩ C_GPT^seed3 ∩ C_GPT^seed4)\n∪ (C_GPT-2^seed1 ∩ C_GPT-2^seed2 ∩ C_GPT-2^seed3 ∩ C_GPT-2^seed4)\n∪ (C_BERT^seed1 ∩ C_BERT^seed2 ∩ C_BERT^seed3 ∩ C_BERT^seed4)\n∪ (C_XLNet^seed1 ∩ C_XLNet^seed2 ∩ C_XLNet^seed3 ∩ C_XLNet^seed4)\n∪ (C_RoBERTa^seed1 ∩ C_RoBERTa^seed2 ∩ C_RoBERTa^seed3 ∩ C_RoBERTa^seed4),\nC_HARD = C_TEST − C_EASY,   (1)\nwhere C_BERT^seed1 denotes the set of data points predicted correctly by BERTBASE with seed 1, and similarly for the rest. Table 6 shows the average performance of each model trained with four different random seeds and the number of data points predicted correctly by all of them. Finally, we obtain 440 such data points from the testing set C_TEST; we denote this subset as the EASY set C_EASY and the rest as the HARD set C_HARD."
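Operationally, Equation (1) is just set algebra over per-model, per-seed sets of correctly answered question ids. A minimal sketch in Python (the shape of the input data structure is our assumption):

```python
def split_easy_hard(correct, test_ids):
    """Implements Equation (1). `correct` maps each options-only model name
    (GPT, GPT-2, BERT, XLNet, RoBERTa) to a list of four sets of question
    ids answered correctly, one set per random seed."""
    easy = set()
    for per_seed_sets in correct.values():
        # A question enters EASY via this model only if the model answers it
        # correctly under all four seeds; the chance of achieving this by
        # pure guessing is 0.25 ** 4, i.e. about 0.39%.
        easy |= set.intersection(*per_seed_sets)  # union over models
    hard = set(test_ids) - easy
    return easy, hard
```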
}, { "heading": "4.3 TRANSFER LEARNING THROUGH FINE-TUNING", "text": "Among multiple-choice reading comprehension or QA datasets from exams, although the size of ReClor is comparable to those of ARC (Clark et al., 2018) and DREAM (Sun et al., 2019), it is much smaller than RACE Lai et al. (2017). Recent studies (Min et al., 2017; Howard & Ruder, 2018; Huang et al., 2019; Jin et al., 2019) have shown the effectiveness of pre-training on similar tasks or datasets then fine-tuning on the target dataset for transfer learning. Jin et al. (2019) find that by first training on RACE (Lai et al., 2017) and then further fine-tuning on the target dataset, the performances of BERTBASE on multiple-choice dataset MC500 (Richardson et al., 2013) and DREAM (Sun et al., 2019) can significantly boost from 69.5% to 81.2%, and from 63.2% to 70.2%, respectively. However, they also find that the model cannot obtain significant improvement even performs worse if it is first fine-tuned on span-based dataset like SQuAD (Rajpurkar et al., 2016). ReClor is a multiple-choice dataset, so we choose RACE for fine-tuning study." }, { "heading": "4.4 RESULTS AND ANALYSIS", "text": "The performance of all tested models on the ReClor is presented in Table 7. This dataset is built on questions designed for students who apply for admission to graduate schools, thus we randomly choose 100 samples from the testing set and divide them into ten tests, which are distributed to ten different graduate students in a university. We take the average of their scores and present it as the baseline of graduate students. The data of ReClor are carefully chosen and modified from only high-quality questions from standardized graduate entrance exams. We set the ceiling performance to 100% since ambiguous questions are not included in the dataset.\nThe performance of fastText is better than random guess, showing that word correlation could be used to help improve performance to some extent. It is difficult for Bi-LSTM to converge on this\ndataset. Transformer-based pre-training models have relatively good performance, close to the performance of graduate students. However, we find that these models only perform well on EASY set with around 75% accuracy, showing these models have an outstanding ability to capture the biases of the dataset, but they perform poorly on HARD set with only around 30% accuracy. In contrast, humans can still keep good performance on HARD set. We notice the difference in testing accuracy performed by graduate students on EASY and HARD set, but this could be due to the small number of students participated in the experiments. Therefore, we say humans perform relatively consistent on both biased and non-biased dataset.\nIt is noticed that if the models are first trained on RACE and then fine-tuned on ReClor, they could obtain significant improvement, especially on HARD set. The overall performance of RoBERTaLARGE is even better than that of graduate students. This similar phenomenon can also be observed on DREAM dataset (Sun et al., 2019) by Jin et al. (2019), which shows the potential of transfer learning for reasoning tasks. However, even after fine-tuning on RACE, the best performance of these strong baselines on HARD set is around 50%, still lower than that of graduate students and far away from ceiling performance.\nExperiments in different input settings are also done. 
Compared with the input setting of answer options only (A), the setting of questions and answer options (Q, A) does not bring significant improvement. This may be because some question stems, e.g., Which one of the following is an assumption required by the argument? or Which one of the following, if true, most strengthens the argument?, are shared across questions of the same reasoning type and thus offer little extra information. Further adding the context brings a significant boost, showing the high informativeness of the context.\nWe further analyze model performance with respect to the different question types of logical reasoning. Some results are shown in Figure 4 and the full results are shown in Figures 5, 6 and 7 in Appendix E. The three models BERTLARGE, XLNetLARGE and RoBERTaLARGE perform well on most types. On the HARD set, the three models perform poorly on certain types, such as STRENGTHEN, WEAKEN and ROLE, which require extensive logical reasoning. However, they perform relatively better on certain other types, such as CONCLUSION/MAIN POINT and MATCH STRUCTURES, which are more straightforward. For transfer learning, we analyze XLNetLARGE in detail. Though the overall performance is significantly boosted by first fine-tuning on RACE, the histograms at the bottom of Figure 4 show that on the EASY set, the accuracy of the model fine-tuned on RACE is similar to that of the model without it for most question types, while on the HARD set, significant improvement is observed on some question types, such as CONCLUSION/MAIN POINT and MOST STRONGLY SUPPORTED. This may be because these types require less logical reasoning to some extent compared with other types, and similar question types may also be found in the RACE dataset. Thus, pre-training on RACE helps enhance logical reasoning ability, especially for relatively simple reasoning types, but more methods are still needed to further enhance it, especially for relatively complex reasoning types." }, { "heading": "5 CONCLUSION", "text": "In this paper, we introduce ReClor, a reading comprehension dataset requiring logical reasoning, with the aim of pushing research progress on logical reasoning in NLP forward from the sentence level to the passage level and from simple logical reasoning to multiple complicated types. We propose to identify biased data points and split the testing set into EASY and HARD groups for biased and non-biased data, respectively. We further empirically study the different behaviors of state-of-the-art models on these two testing sets, and find that recent powerful transformer-based pre-trained language models have an excellent ability to exploit the biases in the dataset but have difficulty in understanding and reasoning on the non-biased data, with performance close to or only slightly better than random guessing. These results show that there is still a long way to go to equip deep learning models with real logical reasoning abilities. We hope this work will inspire more future research to adopt a similar splitting technique and evaluation scheme when reporting model performance. We also show that by first fine-tuning on the large-scale RACE dataset and then fine-tuning on ReClor, the models obtain significant improvement, showing the potential of transfer learning to solve reasoning tasks." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank the anonymous reviewers for their insightful comments and suggestions; thank Rishabh Jain from Georgia Tech for helping build up the leaderboard of ReClor on EvalAI. 
Jiashi Feng was partially supported by NUS IDS R-263-000-C67-646, ECRA R-263-000-C87-133, MOE Tier-II R-263-000-D17-112 and AI.SG R-263-000-D97-490. Weihao Yu and Zihang Jiang would like to thank the TFRC program for the support of computational resources." }, { "heading": "A BASELINE MODELS", "text": "fastText. FastText (Joulin et al., 2017) models sentences as a bag of n-grams, and tries to predict the probability of each answer being correct independently. We choose the answer with the highest score as the prediction in the multiple-choice setting.\nLSTM sentence encoder. A two-layer bi-LSTM is randomly initialized as a sentence encoder with GloVe word embeddings (Pennington et al., 2014). With a span of text as input, the last hidden state of the second layer is max-pooled and then fed into a fully-connected layer to compute the output score.\nGPT and GPT-2. GPT (Radford et al., 2018) and GPT-2 (Radford et al., 2019) are both transformer (Vaswani et al., 2017) based models which are pre-trained in an unsupervised manner with a standard language modeling objective. GPT is pre-trained on BooksCorpus; GPT-2 is pre-trained on a larger dataset called WebText. Here we use the smallest model proposed in (Radford et al., 2019) as our GPT-2 baseline. To fine-tune on ReClor, the final hidden vector corresponding to the last input token ([classify]) is used as the aggregate representation, followed by an extra fully connected layer to compute the score.\nBERT. BERT (Devlin et al., 2019) is also a transformer (Vaswani et al., 2017) based model, which is pre-trained on BooksCorpus (Zhu et al., 2015) and English Wikipedia with two unsupervised tasks, i.e., Masked LM (MLM) and Next Sentence Prediction (NSP). During fine-tuning, the final hidden vector corresponding to the first input token ([CLS]) is used as the aggregate representation, followed by two extra fully connected layers to compute the score.\nXLNet. XLNet (Yang et al., 2019) is trained with Permutation Language Modeling and without NSP. In addition, besides the BooksCorpus and English Wikipedia used in BERT, it uses Giga5 (Parker et al., 2011), ClueWeb 2012-B (extended from (Callan et al., 2009)), and Common Crawl (com, 2019) for pre-training. We use the final hidden vector corresponding to the last input token <cls> as the aggregate representation and introduce two fully connected layers to predict the score.\nRoBERTa. RoBERTa (Liu et al., 2019) is an improved pre-training procedure for BERT, which trains the model longer with bigger batches over more data and removes the NSP objective, among other changes. Two extra fully connected layers are added to transform the final hidden vector of the first input token (<s>) into the score.\nThe input format of the different models is shown in Table 8.\nB IMPLEMENTATION DETAIL\nAdam is used for all models. For fastText, we use its Python library6, converting ReClor to the required format, and keep the default hyperparameter settings. For Bi-LSTM, we use a two-layer bidirectional LSTM with GloVe 300d word embeddings (Pennington et al., 2014) followed by max-pooling and a fully-connected layer. We train the model for 100 epochs using a batch size of 64 and a learning rate of 0.1. A learning rate decay of 0.5 is also applied every 10 epochs. For the pre-trained models, we modify the code of Transformers by Hugging Face7 to implement them on ReClor. We use a batch size of 24 and fine-tune for 10 epochs. The maximum input sequence length for all models is 256. 
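To make the shared multiple-choice scoring scheme concrete: for each data point, the four (context, question || option) sequences are encoded independently, the aggregate hidden vector is mapped to a scalar score, and a softmax over the four scores gives the prediction. Below is a minimal sketch assuming the Hugging Face Transformers API; the head follows the two-fully-connected-layer description for BERT above, but the intermediate width, Tanh nonlinearity and class name are our assumptions:

```python
import torch.nn as nn
from transformers import BertModel

class MultipleChoiceScorer(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", hidden=768):
        super().__init__()
        self.encoder = BertModel.from_pretrained(encoder_name)
        # Two fully connected layers mapping the aggregate vector to a score.
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))

    def forward(self, input_ids, attention_mask):
        # input_ids, attention_mask: (batch, 4, seq_len), one row per option,
        # each encoding a context/question/option sequence (cf. Table 8).
        b, n, L = input_ids.shape
        out = self.encoder(input_ids.view(b * n, L),
                           attention_mask=attention_mask.view(b * n, L))
        cls = out.last_hidden_state[:, 0]      # final hidden vector of [CLS]
        return self.head(cls).view(b, n)       # (batch, 4) option scores
```

Training then reduces to cross-entropy over the four scores against the index of the right option, e.g. nn.CrossEntropyLoss()(scores, labels).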
The detailed hyperparameters are shown in Table 9.\n6https://github.com/facebookresearch/fastText 7https://github.com/huggingface/transformers" }, { "heading": "C EXAMPLES", "text": "" }, { "heading": "D CONSISTENCY OF DIFFERENT MODELS", "text": "" }, { "heading": "E RESULTS WITH RESPECT TO DIFFERENT QUESTION TYPES", "text": "[Figure 5: per-question-type accuracy (%) of all baseline models — fastText, Bi-LSTM, GPT, GPT-2, BERTBASE, BERTLARGE, XLNetBASE, XLNetLARGE, RoBERTaBASE, RoBERTaLARGE; plot omitted]\n[Figure 6: Accuracy of all baseline models on EASY set of testing set; plot omitted]\n[Figure 7: Accuracy of all baseline models on HARD set of testing set; plot omitted]" } ]
2020
RECLOR: A READING COMPREHENSION DATASET REQUIRING LOGICAL REASONING
SP:11e711f93423bcab7a9bad9c9bfd969519b09eb2
[ "This paper is on building binary network. The steps for building binary network takes several components: traditional strategy to binary/optimize a model (like data augmentation, binary initialization using 2-stage optimization, etc), real-to-binary attention matching that tries to match the output of real values and binarized model, and data-driven channel rescaling to better approximate real convolutions. All these components together makes a strong binary network.", "This paper studies the problem of training binary neural networks. The authors first provide a strong baseline by assembling a group of training techniques that appeared in recent work that achieves state-of-the-art performance. Then the authors proposed two methods to further boost the performance gain. The first method is to use a teacher-student mechanism that uses a fully real-valued network to teach a binary network. The process is divided into three stages involving two intermediate models to reduce the gap within each teacher-student pair. The second method is to learn a re-scale factor for binary activations using real-valued activations from the previous block. Experiments show that the proposed methods improves the performance on ImageNet and CIFAR-100." ]
This paper shows how to train binary networks to within a few percentage points (∼3–5%) of their full precision counterparts. We first show how to build a strong baseline, which already achieves state-of-the-art accuracy, by combining recently proposed advances and carefully adjusting the optimization procedure. Secondly, we show that by attempting to minimize the discrepancy between the output of the binary and the corresponding real-valued convolution, additional significant accuracy gains can be obtained. We materialize this idea in two complementary ways: (1) with a loss function, during training, by matching the spatial attention maps computed at the outputs of the binary and real-valued convolutions, and (2) in a data-driven manner, by using the real-valued activations, available during inference prior to the binarization process, to re-scale the activations right after the binary convolution. Finally, we show that, when putting all of our improvements together, the proposed model beats the current state of the art by more than 5% top-1 accuracy on ImageNet and reduces the gap to its real-valued counterpart to less than 3% and 5% top-1 accuracy on CIFAR-100 and ImageNet respectively when using a ResNet-18 architecture. Code available at https://github.com/brais-martinez/real2binary.
[ { "affiliations": [], "name": "Brais Martinez" }, { "affiliations": [], "name": "Jing Yang" }, { "affiliations": [], "name": "Adrian Bulat" }, { "affiliations": [], "name": "Georgios Tzimiropoulos" } ]
[ { "authors": [ "Milad Alizadeh", "Javier Fernández-Marqués", "Nicholas D. Lane", "Yarin Gal" ], "title": "An empirical study of binary neural networks", "venue": "optimisation. In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Joseph Bethge", "Haojin Yang", "Marvin Bornstein", "Christoph Meinel" ], "title": "Back to simplicity: How to train accurate BNNs from scratch", "venue": null, "year": 1906 }, { "authors": [ "Adrian Bulat", "Georgios Tzimiropoulos" ], "title": "XNOR-Net++: Improved binary neural networks", "venue": "In British Machine Vision Conference,", "year": 2019 }, { "authors": [ "Adrian Bulat", "Georgios Tzimiropoulos", "Jean Kossaifi", "Maja Pantic" ], "title": "Improved training of binary networks for human pose estimation and image recognition", "venue": null, "year": 1904 }, { "authors": [ "Zhaowei Cai", "Xiaodong He", "Jian Sun", "Nuno Vasconcelos" ], "title": "Deep learning with low precision by half-wave gaussian quantization", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Matthieu Courbariaux", "Itay Hubara", "Daniel Soudry", "Ran El-Yaniv", "Yoshua Bengio" ], "title": "Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or-1", "venue": null, "year": 2016 }, { "authors": [ "Ruizhou Ding", "Ting-Wu Chin", "Zeye Liu", "Diana Marculescu" ], "title": "Regularizing activation distribution for training binarized deep networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Julian Faraone", "Nicholas J. Fraser", "Michaela Blott", "Philip H.W. Leong" ], "title": "SYQ: learning symmetric quantization for efficient deep neural networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Ruihao Gong", "Xianglong Liu", "Shenghu Jiang", "Tianxiang Li", "Peng Hu", "Jiazhen Lin", "Fengwei Yu", "Junjie Yan" ], "title": "Differentiable soft quantization: Bridging full-precision and low-bit neural networks", "venue": null, "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In IEEE International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Xiaofan Lin", "Cong Zhao", "Wei Pan" ], "title": "Towards accurate binary convolutional neural network", "venue": "In Advances on Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Chunlei Liu", "Wenrui Ding", "Xin Xia", "Baochang Zhang", "Jiaxin Gu", "Jianzhuang Liu", "Rongrong Ji", "David Doermann" ], "title": "Circulant binary convolutional networks: Enhancing the performance of 1-bit dcnns with circulant back 
propagation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Zechun Liu", "Baoyuan Wu", "Wenhan Luo", "Xin Yang", "Wei Liu", "Kwang-Ting Cheng" ], "title": "Bi-Real Net: Enhancing the performance of 1-bit CNNs with improved representational capability and advanced training algorithm", "venue": "In European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Mohammad Rastegari", "Vicente Ordonez", "Joseph Redmon", "Ali Farhadi" ], "title": "Xnor-Net: Imagenet classification using binary convolutional neural networks", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Daniel Soudry", "Itay Hubara", "Ron Meir" ], "title": "Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights", "venue": "In Advances on Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Ziwei Wang", "Jiwen Lu", "Chenxin Tao", "Jie Zhou", "Qi Tian" ], "title": "Learning channel-wise interactions for binary convolutional neural networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Zhe Xu", "Ray C.C. Cheung" ], "title": "Accurate and compact convolutional neural networks with trained binarization", "venue": "In British Machine Vision Conference,", "year": 2019 }, { "authors": [ "Haojin Yang", "Martin Fritzsche", "Christian Bartz", "Christoph Meinel" ], "title": "BMXNet: An open-source binary neural network implementation based on MXNet", "venue": "In ACM International Conference on Multimedia,", "year": 2017 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Dongqing Zhang", "Jiaolong Yang", "Dongqiangzi Ye", "Gang Hua" ], "title": "LQ-Nets: Learned quantization for highly accurate and compact deep neural networks", "venue": "In European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "Mixup: Beyond empirical risk minimization", "venue": "arXiv preprint arXiv:1710.09412,", "year": 2017 }, { "authors": [ "Jianhao Zhang", "Yingwei Pan", "Ting Yao", "He Zhao", "Tao Mei" ], "title": "dabnn: A super fast inference framework for binary neural networks on ARM devices", "venue": "In ACM International Conference on Multimedia,", "year": 2019 }, { "authors": [ "Shuchang Zhou", "Yuxin Wu", "Zekun Ni", "Xinyu Zhou", "He Wen", "Yuheng Zou" ], "title": "DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients", "venue": null, "year": 2016 }, { "authors": [ "Chenzhuo Zhu", "Song Han", "Huizi Mao", "William J Dally" ], "title": "Trained ternary quantization", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Shilin Zhu", "Xin Dong", "Hao Su" ], "title": "Binary ensemble neural network: More bits per network or more networks per bit", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Bohan Zhuang", "Chunhua Shen", "Mingkui Tan", "Lingqiao Liu", "Ian D. 
Reid" ], "title": "Towards effective low-bitwidth convolutional neural networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Bohan Zhuang", "Chunhua Shen", "Mingkui Tan", "Lingqiao Liu", "Ian Reid" ], "title": "Structured binary neural networks for accurate image classification and semantic segmentation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Following the introduction of the BinaryNeuralNet (BNN) algorithm (Courbariaux et al., 2016), binary neural networks emerged as one of the most promising approaches for obtaining highly efficient neural networks that can be deployed on devices with limited computational resources. Binary convolutions are appealing mainly for two reasons: (a) Model compression: if the weights of the network are stored as bits in a 32-bit float, this implies a reduction of 32× in memory usage. (b) Computational speed-up: computationally intensive floating-point multiply and add operations are replaced by efficient xnor and pop-count operations, which have been shown to provide practical speed-ups of up to 58× on CPU (Rastegari et al., 2016) and, as opposed to general low bit-width operations, are amenable to standard hardware. Despite these appealing properties, binary neural networks have been criticized as binarization typically results in large accuracy drops. Thus, their deployment in practical scenarios is uncommon. For example, on ImageNet classification, there is a ∼ 18% gap in top-1 accuracy between a ResNet-18 and its binary counterpart when binarized with XNOR-Net (Rastegari et al., 2016), which is the method of choice for neural network binarization.\nBut how far are we from training binary neural networks that are powerful enough to become a viable alternative to real-valued networks? Our first contribution in this work is to take stock of recent advances on binary neural networks and train a very strong baseline which already results in state-of-the-art performance. Our second contribution is a method for bridging most of the remaining gap, which boils down to minimizing the discrepancy between the output of the binary and the corresponding real-valued convolution. This idea is materialized in our work in two complementary ways: Firstly, we use an attention matching strategy so that the real-valued network can more\n* Denotes equal contribution\nclosely guide the binary network during optimization. However, we show that due to the architectural discrepancies between the real and the binary networks, a direct application of teacher-student produces sub-optimal performance. Instead, we propose to use a sequence of teacher-student pairs that progressively bridges the architectural gap. Secondly, we further propose to use the real-valued activations of the binary network, available prior to the binarization preceding convolution, to compute scale factors that are used to re-scale the activations right after the application of the binary convolution. This is in line with recent works which have shown that re-scaling the binary convolution output can result in large performance gains (Rastegari et al., 2016; Bulat & Tzimiropoulos, 2019). However, unlike prior work, we compute the scaling factors in a data-driven manner based on the real-valued activations of each layer prior to binarization, which results in superior performance.\nOverall, we make the following contributions:\n• We construct a very strong baseline by combining some recent insights on training binary networks and by performing a thorough experimentation to find the most well-suited optimization techniques. 
We show that this baseline already achieves state-of-the-art accuracy on ImageNet, surpassing all previously published works on binary networks.\n• We propose a real-to-binary attention matching: this entails that matching spatial attention maps computed at the output of the binary and real-valued convolutions is particularly suited for training binary neural networks (see Fig. 1, left, and section 4.2). We also devise an approach in which the architectural gap between real and binary networks is progressively bridged through a sequence of teacher-student pairs.\n• We propose a data-driven channel re-scaling: this entails using the real-valued activations of the binary network prior to their binarization to compute the scale factors used to re-scale the activations produced right after the application of the binary convolution. See Fig. 1, right, and section 4.3.\n• We show that our combined contributions provide, for the first time, competitive results on two standard datasets, achieving 76.2% top-1 performance on CIFAR-100 and 65.4% top-1 performance on ImageNet when using a ResNet-18 – a gap below 3% and 5%, respectively, compared to their full-precision counterparts." }, { "heading": "2 RELATED WORK", "text": "While being pre-dated by other works on binary networks (Soudry et al., 2014), the BNN algorithm (Courbariaux et al., 2016) established how to train networks with binary weights within the familiar back-propagation paradigm. The training method relies on a real-valued copy of the network weights which is binarized during the forward pass, but is updated during back-propagation ignoring the binarization step. Unfortunately, BNN resulted in a staggering ∼28% gap in top-1 accuracy compared to the full-precision ResNet-18 on ImageNet.\nIt is worth noting that binary networks do retain a number of floating-point operations. In fact, the output of a binary convolution is not binary (values are integers resulting from the pop-count). Also, in accordance with other low bit-width quantization methodologies, the first convolution (a costly 7 × 7 kernel in ResNet), the fully connected layer and the batch normalization layers are all real-valued. In consequence, a line of research has focused on developing methodologies that add a fractional amount of real-valued operations in exchange for significant accuracy gains. For example, the seminal work of XNOR-Net (Rastegari et al., 2016) proposed to add a real-valued scaling factor to each output channel of a binary convolution, a technique that has become standard for binary networks. Similarly, Bi-Real Net (Liu et al., 2018) argued that skip connections are fundamental for binary networks and observed that the flow of full-precision activations provided by the skip connections is interrupted by the binary downsample convolutions. This degrades the signal and makes subsequent skip connections less effective. To alleviate this, they proposed making the downsample layers real-valued, obtaining an accuracy increase of around 3% in exchange for a small increase in computational complexity.\nImproving the optimization algorithm for binary networks has been another fundamental line of research. Examples include the use of smooth approximations of the gradient, the use of PReLU (Bulat et al., 2019), a two-stage training which binarizes the weights first and then the activations (Bulat et al., 2019), and progressive quantization (Gong et al., 2019; Bulat et al., 2019).
The work in (Wang et al., 2019) proposed to learn channel correlations through reinforcement learning to better preserve the sign of a convolution output. A set of regularizers is added to the loss term in (Ding et al., 2019) so as to control the range of values of the activations and guarantee good gradient flow. Other optimization aspects, such as the effect of gradient clipping or batch-norm momentum, were empirically tested in (Alizadeh et al., 2019). In section 4.1, we show how to combine many of the insights provided in these works with standard optimization techniques to obtain a very strong baseline that already achieves state-of-the-art accuracy.\nWhile the aforementioned works either maintain the same computational cost or increase it by a fractional amount, other research has focused instead on relaxing the problem constraints by increasing the number of binary operations by a large amount, typically a factor of 2 to 8 times. Examples include ABC-Net (Lin et al., 2017), the structure approximation of (Zhuang et al., 2019), the circulant CNN of (Liu et al., 2019), and the binary ensemble of (Zhu et al., 2019). Note that the large increase in binary operations diminishes the efficiency claim that justifies the use of binary networks in the first place. Furthermore, we will show that there is still a lot of margin for bridging the accuracy gap prior to resorting to scaling up the network capacity (there is also a large body of work focusing on other low-bit quantization strategies, but a review of these techniques goes beyond the scope of this section).\nThe methodology proposed in this paper has some relations with prior work: our use of attention matching as described in section 4.2 is somewhat related to the feature distillation approach of (Zhuang et al., 2018). However, (Zhuang et al., 2018) tries to match whole feature maps of the to-be-quantized network with the quantized feature maps of a real-valued network that is trained in parallel with the to-be-quantized network. Such an approach is shown to improve training of low-bitwidth quantized models but not binary networks. Notably, our approach based on matching attention maps is much simpler and shown to be effective for the case of binary networks.\nOur data-driven channel re-scaling approach, described in section 4.3, is related to the channel re-scaling approach of XNOR-Net, and also that of (Xu & Cheung, 2019; Bulat & Tzimiropoulos, 2019), which propose to learn the scale factors discriminatively through backpropagation. Contrary to (Xu & Cheung, 2019; Bulat & Tzimiropoulos, 2019), our method is data-driven and avoids using fixed scale factors learnt during training. Contrary to XNOR-Net, our method discriminatively learns how to produce the data-driven scale factors so that they are optimal for the task at hand.
In (Courbariaux et al., 2016), both weights and activations are binarized using the sign function, and convolution is then performed as $A * W \approx \mathrm{sign}(A) \circledast \mathrm{sign}(W)$, where $\circledast$ denotes the binary convolution, which can be implemented using bit-wise operations.\nHowever, this direct binarization approach introduces a high quantization error that leads to low accuracy. To alleviate this, XNOR-Net (Rastegari et al., 2016) proposes to use real-valued scaling factors to re-scale the output of the binary convolution as\n$$A * W \approx (\mathrm{sign}(A) \circledast \mathrm{sign}(W)) \odot K \odot \alpha, \quad (1)$$\nwhere $\odot$ denotes element-wise multiplication, and $\alpha$ and $K$ are the weight and activation scaling factors, respectively, calculated in Rastegari et al. (2016) in an analytic manner. More recently, Bulat & Tzimiropoulos (2019) proposed to fuse $\alpha$ and $K$ into a single factor $\Gamma$ that is learned via backpropagation, resulting in further accuracy gains." }
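To make Eq. 1 concrete, the following is a minimal PyTorch sketch of a binary convolution with a straight-through estimator and a single learned per-channel scale factor (fusing K and α into Γ, in the spirit of Bulat & Tzimiropoulos (2019)). It is an illustrative sketch under our own naming, not the authors' implementation; note that torch.sign maps 0 to 0, whereas real implementations typically map it to +1.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass; straight-through gradient, clipped to |x| <= 1."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()

class BinaryConv2d(nn.Module):
    """Binary convolution re-scaled by a learned per-channel factor Gamma (cf. Eq. 1)."""
    def __init__(self, in_ch, out_ch, k=3, stride=1, padding=1):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.gamma = nn.Parameter(torch.ones(out_ch))  # fuses K and alpha
        self.stride, self.padding = stride, padding

    def forward(self, a):
        wb = BinarizeSTE.apply(self.weight)   # binarize weights
        ab = BinarizeSTE.apply(a)             # binarize activations
        out = F.conv2d(ab, wb, stride=self.stride, padding=self.padding)
        return out * self.gamma.view(1, -1, 1, 1)

# usage: y = BinaryConv2d(64, 64)(torch.randn(2, 64, 32, 32))
```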
, { "heading": "4 METHOD", "text": "This section first introduces our strong baseline. Then, we present two ways to improve the approximation of Eq. 1: firstly, we use a loss based on matching attention maps computed from the binary and a real-valued network (see section 4.2); secondly, we make the scaling factor a function of the real-valued input activations A (see section 4.3)." }, { "heading": "4.1 BUILDING A STRONG BASELINE", "text": "Currently, almost all works on binary networks use XNOR-Net and BNN as baselines. In this section, we show how to construct a strong baseline by incorporating insights and techniques described in recent works as well as standard optimization techniques. We show that our baseline already achieves state-of-the-art accuracy. We believe this is an important contribution towards understanding the true impact of proposed methodologies and towards assessing the true gap with real-valued networks. Following prior work in binary networks, we focus on the ResNet-18 architecture and apply the improvements listed below:\nBlock structure: It is well-known that a modified ResNet block must be used to obtain optimal results for binary networks. We found the widely-used setting where the operations are ordered as BatchNorm → Binarization → BinaryConv → Activation to be the best. The skip connection is the last operation of the block (Rastegari et al., 2016). Note that we use the sign function to binarize the activations. However, the BatchNorm layer includes an affine transformation, and this ordering of the blocks allows its bias term to act as a learnable binarization threshold.\nResidual learning: We used double skip connections, as proposed in (Liu et al., 2018).\nActivation: We used PReLU (He et al., 2015) as it is known to facilitate the training of binary networks (Bulat et al., 2019).\nScaling factors: We used discriminatively learnt scaling factors via backpropagation as in (Bulat & Tzimiropoulos, 2019).\nDownsample layers: We used real-valued downsample layers (Liu et al., 2018). We found the large accuracy boost to be consistent across our experiments (around 3–4% top-1 improvement on ImageNet).\nWe used the following training strategies to train our strong baseline:\nInitialization: When training binary networks, it is crucial to use a 2-stage optimization strategy (Bulat et al., 2019). In particular, we first train a network using binary activations and real-valued weights, and then use the resulting model as initialization to train a network where both weights and activations are binarized.\nWeight decay: Setting up weight decay carefully is surprisingly important. We use 1e−5 when training stage 1 (binary activations and real-valued weights), and set it to 0 for stage 2 (Bethge et al., 2019). Note that weights at stage 2 are either 1 or −1, so applying an L2 regularization term to them does not make sense.\nData augmentation: For CIFAR-100 we use the standard random crop, horizontal flip and rotation (±15°). For ImageNet, we found that random cropping, flipping and colour jitter augmentation worked best. However, colour jitter is disabled for stage 2.\nMix-up: We found that mix-up (Zhang et al., 2017) is crucial for CIFAR-100, while it slightly hurts performance for ImageNet – this is due to the higher risk of overfitting on CIFAR-100.\nWarm-up: We used warm-up for 5 epochs during stage 1 and no warm-up for stage 2.\nOptimizer: We used Adam (Kingma & Ba, 2014) with a stepwise scheduler. The learning rate is set to 1e−3 for stage 1, and 2e−4 for stage 2. For CIFAR-100, we trained for 350 epochs, with steps at epochs 150, 250 and 320. For ImageNet, we train for 75 epochs, with steps at epochs 40, 60 and 70. Batch sizes are 256 for ImageNet and 128 for CIFAR-100." }, { "heading": "4.2 REAL-TO-BINARY ATTENTION MATCHING", "text": "We make the reasonable assumption that if a binary network is trained so that the output of each binary convolution more closely matches the output of a real convolution in the corresponding layer of a real-valued network, then significant accuracy gains can be obtained. Notably, a similar assumption was made in (Rastegari et al., 2016), where analytic scale factors were calculated so that the error between binary and real convolutions is minimized. Instead, and inspired by the attention transfer method of (Zagoruyko & Komodakis, 2017), we propose to enforce such a constraint via a loss term at the end of each convolutional block, by comparing attention maps calculated from the binary and real-valued activations. Such supervisory signals provide the binary network with much-needed extra guidance. It is also well-known that backpropagation for binary networks is not as effective as for real-valued ones. By introducing such loss terms at the end of each block, gradients do not have to traverse the whole network and suffer a degraded signal.\nAssuming that attention matching is applied at a set of $J$ transfer points within the network, the total loss can be expressed as:\n$$\mathcal{L}_{att} = \sum_{j=1}^{J} \left\| \frac{Q_S^j}{\|Q_S^j\|_2} - \frac{Q_T^j}{\|Q_T^j\|_2} \right\|, \quad (2)$$\nwhere $Q^j = \sum_{i=1}^{c} |A_i|^2$, $A_i$ is the $i$-th channel of activation map $A$, and $S$ and $T$ denote the student and teacher networks, respectively. Moreover, at the end of the network, we apply a standard logit matching loss (Hinton et al., 2015).
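Eq. 2 translates almost directly into code. The following PyTorch sketch is illustrative (not the authors' code); detaching the teacher activations so that only the student receives gradients is an assumption of ours:

```python
import torch
import torch.nn.functional as F

def attention_map(a):
    # Q = sum over channels of |A_i|^2; a has shape (batch, C, H, W).
    return a.pow(2).sum(dim=1).flatten(1)          # (batch, H*W)

def attention_loss(student_acts, teacher_acts):
    # Eq. 2: distances between L2-normalized attention maps, summed over
    # the J transfer points (one activation map per transfer point).
    loss = 0.0
    for a_s, a_t in zip(student_acts, teacher_acts):
        q_s = F.normalize(attention_map(a_s), dim=1)
        q_t = F.normalize(attention_map(a_t).detach(), dim=1)
        loss = loss + (q_s - q_t).norm(dim=1).mean()
    return loss
```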
Progressive teacher-student: We observed that it is very important in our case for teacher and student to have architectures that are as similar as possible. We thus train a sequence of teacher-student pairs that progressively bridges the differences between the real network and the binary network in small increments: Step 1: the teacher is the real-valued network with the standard ResNet architecture. The student is another real-valued network, but with the same architecture as the binary ResNet-18 (e.g. double skip connections, layer ordering, PReLU activations, etc.). Furthermore, a soft binarization (a Tanh function) is applied to the activations instead of the binarization (sign) function. In this way the network is still real-valued, but it behaves more closely to a network with binary activations. Step 2: The network resulting from the previous step is used as the teacher. A network with binary activations and real-valued weights is used as the student. Step 3: The network resulting from step 2 is used as the teacher, and the network with binary weights and binary activations is the student. In this stage, only logit matching is used." }, { "heading": "4.3 DATA-DRIVEN CHANNEL RE-SCALING", "text": "While the approach of the previous section provides better guidance for the training of binary networks, the representation power of binary convolutions is still limited, hindering their capacity to approximate the real-valued network. Here we describe how to boost the representation capability of a binary neural network while incurring only a negligible increase in the number of operations.\nPrevious works have shown the effectiveness of re-scaling binary convolutions with the goal of better approximating real convolutions. XNOR-Net (Rastegari et al., 2016) proposed to compute these scale factors analytically, while (Bulat & Tzimiropoulos, 2019; Xu & Cheung, 2019) proposed to learn them discriminatively in an end-to-end manner, showing additional accuracy gains. For the latter case, during training, the optimization aims to find a set of fixed scaling factors that minimize the average expected loss for the training set. We propose instead to go beyond this and obtain discriminatively-trained, input-dependent scaling factors – thus, at test time, these scaling factors will not be fixed but rather inferred from data.\nLet us first recall what the signal flow is when going through a binary block. The activations entering a binary block are actually real-valued. Batch normalization centers the activations, which are then binarized, losing a large amount of information. Binary convolution, re-scaling and PReLU follow. We propose to use the full-precision activation signal, available prior to the large information loss incurred by the binarization operation, to predict the scaling factors used to re-scale the output of the binary convolution channel-wise. Specifically, we propose to approximate the real convolution as follows:\n$$A * W \approx (\mathrm{sign}(A) \circledast \mathrm{sign}(W)) \odot \alpha \odot G(A; W_G), \quad (3)$$\nwhere $W_G$ are the parameters of the gating function $G$. Such a function computes the scale factors used to re-scale the output of the binary convolution, and uses the pre-convolution real-valued activations as input. Fig. 1 shows our implementation of function $G$. The design is inspired by Hu et al. (2018), but we use the gating function to predict ahead rather than as a self-attention mechanism.\nAn optimal mechanism to modulate the output of the binary convolution clearly should not be the same for all examples, as in Bulat & Tzimiropoulos (2019) or Xu & Cheung (2019). Note that in Rastegari et al. (2016) the computation of the scale factors does depend on the input activations; however, the analytic calculation is sub-optimal with respect to the task at hand. To circumvent the aforementioned problems, our method learns, via backpropagation for the task at hand, to predict the modulating factors using the real-valued input activations. By doing so, more than 1/3 of the remaining gap with the real-valued network is bridged." }
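As one plausible reading of Fig. 1, the gating function G can be sketched as an SE-style bottleneck (Hu et al., 2018) over the spatially pooled, pre-binarization activations, with reduction factor r = 8 as used in the experiments. The exact design and all names here are our assumptions, not a verbatim reproduction of the authors' module:

```python
import torch.nn as nn

class Gate(nn.Module):
    """G(A; W_G) of Eq. 3: predicts per-output-channel scale factors from the
    real-valued activations A available before binarization (SE-style, r=8)."""
    def __init__(self, in_ch, out_ch, r=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_ch, in_ch // r),
            nn.ReLU(inplace=True),
            nn.Linear(in_ch // r, out_ch),
            nn.Sigmoid(),
        )

    def forward(self, a):
        pooled = a.mean(dim=(2, 3))                 # global average pool: (B, C_in)
        scales = self.mlp(pooled)                   # (B, C_out) in (0, 1)
        return scales.unsqueeze(-1).unsqueeze(-1)   # broadcastable over H, W

# usage: out = binary_conv(a) * gamma.view(1, -1, 1, 1) * Gate(c_in, c_out)(a)
```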
, { "heading": "4.4 COMPUTATIONAL COST ANALYSIS", "text": "Table 1 details the computational cost of the different binary network methodologies. We differentiate between the number of binary and floating-point operations, including operations such as skip connections, pooling layers, etc. It shows that our method leaves the number of binary operations constant, and that the number of FLOPs increases by only 1% of the total floating-point operation count. This assumes a reduction factor r of 8, which is the one used in all of our experiments. To put this into perspective, the magnitude is similar to the operation increase incurred by XNOR-Net with respect to its predecessor, BNN. Similarly, the double skip connections proposed in (Liu et al., 2018) add a comparable amount of operations. Note, however, that in order to fully exploit the computational efficiency of binary convolutions during inference, a specialized engine such as (Zhang et al., 2019; Yang et al., 2017) is required." }, { "heading": "5 RESULTS", "text": "We present two main sets of experiments. We used ImageNet (Russakovsky et al., 2015) as a benchmark to compare our method against other state-of-the-art approaches in Sec. 5.1. ImageNet is the most widely used dataset to report results on binary networks and, at the same time, allows us to show for the first time that binary networks can perform competitively on a large-scale dataset. We further used CIFAR-100 (Krizhevsky & Hinton, 2009) to conduct ablation studies (Sec. 5.2)." }, { "heading": "5.1 COMPARISON WITH THE STATE-OF-THE-ART", "text": "Table 2 shows a comparison between our method and relevant state-of-the-art methods, including low-bit quantization methods other than binary.\nVs. other binary networks: Our strong baseline already comfortably achieves state-of-the-art results, surpassing the previously best-reported result by about 1% (Wang et al., 2019). Our full method further improves over the state-of-the-art by 5.5% top-1 accuracy. When comparing to binary models that scale the capacity of the network (second set of results in Tab. 2), only (Zhuang et al., 2019) outperforms our method, surpassing it by 0.9% top-1 accuracy – yet, this is achieved using 4 times the number of binary blocks.\nVs. real-valued networks: Our method reduces the performance gap with its real-valued counterpart to ∼4% top-1 accuracy, or ∼5% if we compare against a real-valued network trained with attention transfer.\nVs. other low-bit quantization: Table 2 also shows a comparison to the state-of-the-art for low-bit quantization methods (first set of results). It can be seen that our method surpasses the performance of all methods, except for TTQ (Zhu et al., 2017), which uses 2-bit weights, full-precision activations and 1.5× the channel width at each layer." }, { "heading": "5.2 ABLATION STUDIES", "text": "In order to conduct a more detailed ablation study, we provide results on CIFAR-100. We thoroughly optimized a ResNet-18 full-precision network to serve as the real-valued baseline.\nTeacher-student effectiveness: We trained a real-valued ResNet-18 using ResNet-34 as its teacher, yielding a ∼1% top-1 accuracy increase. Instead, our progressive teacher-student strategy yields a ∼5% top-1 accuracy gain, showing that it is a fundamental tool when training binary networks, and that its impact is much larger than for real-valued networks, where the baseline optimization is already healthier.\nPerformance gap to real-valued: We observe that, for CIFAR-100, we close the gap with real-valued networks to about 2% when comparing with the full-precision ResNet-18, and to about 3% when optimized using teacher supervision.
The gap is consistent with that on ImageNet in relative terms: 13% and 10% relative degradation on ImageNet and CIFAR-100, respectively.\nBinary vs. real downsample: Our proposed method achieves a similar performance increase irrespective of whether binary or real-valued downsample layers are used, the improvement being 5.5% and 6.6% top-1 accuracy, respectively. It is also interesting to note that the results of the ablation study are consistent across all entries in both cases.\nScaling factors and attention matching: It is also noteworthy that the gating module is not effective in the absence of attention matching (see SB+G entries). It seems clear from this result that both are interconnected: the extra supervisory signal is necessary to properly guide the training, while the extra flexibility added through the gating mechanism boosts the capacity of the network to mimic the attention map." }, { "heading": "6 CONCLUSION", "text": "In this work we showed how to train binary networks to within a few percentage points of their real-valued counterparts, turning binary networks from hopeful research into a compelling alternative to real-valued networks. We did so by training a binary network not only to predict training labels, but also to mimic the behaviour of real-valued networks. To this end, we devised a progressive attention matching strategy to drive optimization, and combined it with a gating strategy for scaling the output of binary convolutions, increasing the representation power of the convolutional block. The two strategies combine perfectly to boost the state-of-the-art of binary networks by 5.5% top-1 accuracy on ImageNet, the standard benchmark for binary networks." } ]
2020
null
SP:c7c4415e10a9426b0cffb18491d42922700dce85
[ "The paper proposes improvements on existing probabilistic models for code that predicts and repairs variable misuses. This is a variant of the task, proposed by Vasic et al. The task takes a dataset of python functions, introduces errors in these functions and makes a classifier that would identify what errors were introduced and effectively reconstruct the original code.", "In this paper, the authors proposed a new method to model the source code for the bug repairing task. Traditional methods use either a global sequence based model or a local graph based model. The authors proposed a new sandwich model like [RNN GNN RNN]. The experiments show that such simple combination of models significantly improve the localization and repair accuracy. " ]
Models of code can learn distributed representations of a program's syntax and semantics to predict many non-trivial properties of a program. Recent state-of-the-art models leverage highly structured representations of programs, such as trees, graphs and paths therein (e.g., data-flow relations), which are precise and abundantly available for code. This provides a strong inductive bias towards semantically meaningful relations, yielding more generalizable representations than classical sequence-based models. Unfortunately, these models primarily rely on graph-based message passing to represent relations in code, which makes them de facto local due to the high cost of message-passing steps, quite in contrast to modern, global sequence-based models, such as the Transformer. In this work, we bridge this divide between global and structured models by introducing two new hybrid model families that are both global and incorporate structural bias: Graph Sandwiches, which wrap traditional (gated) graph message-passing layers in sequential message-passing layers; and Graph Relational Embedding Attention Transformers (GREAT for short), which bias traditional Transformers with relational information from graph edge types. By studying a popular, non-trivial program repair task, variable-misuse identification, we explore the relative merits of traditional and hybrid model families for code representation. Starting with a graph-based model that already improves upon the prior state-of-the-art for this task by 20%, we show that our proposed hybrid models improve an additional 10–15%, while training both faster and using fewer parameters.
[ { "affiliations": [], "name": "SOURCE CODE" }, { "affiliations": [], "name": "Vincent J. Hellendoorn" }, { "affiliations": [], "name": "Petros Maniatis" }, { "affiliations": [], "name": "Rishabh Singh" }, { "affiliations": [], "name": "Charles Sutton" }, { "affiliations": [], "name": "David Bieber" } ]
[ { "authors": [ "Miltiadis Allamanis" ], "title": "The adverse effects of code duplication in machine learning models of code", "venue": "CoRR, abs/1812.06469,", "year": 2018 }, { "authors": [ "Miltiadis Allamanis", "Charles Sutton" ], "title": "Mining source code repositories at massive scale using language modeling", "venue": "In Working Conference on Mining Software Repositories (MSR),", "year": 2013 }, { "authors": [ "Miltiadis Allamanis", "Earl T Barr", "Christian Bird", "Charles Sutton" ], "title": "Suggesting accurate method and class names", "venue": "In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering,", "year": 2015 }, { "authors": [ "Miltiadis Allamanis", "Earl T. Barr", "Premkumar Devanbu", "Charles Sutton" ], "title": "A survey of machine learning for big code and naturalness", "venue": "ACM Comput. Surv.,", "year": 2018 }, { "authors": [ "Miltiadis Allamanis", "Marc Brockschmidt", "Mahmoud Khademi" ], "title": "Learning to represent programs with graphs", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Uri Alon", "Meital Zilberstein", "Omer Levy", "Eran Yahav" ], "title": "code2vec: Learning distributed representations of code", "venue": null, "year": 2018 }, { "authors": [ "Uri Alon", "Shaked Brody", "Omer Levy", "Eran Yahav" ], "title": "code2seq: Generating sequences from structured representations of code", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Avishkar Bhoopchand", "Tim Rocktäschel", "Earl Barr", "Sebastian Riedel" ], "title": "Learning python code suggestion with a sparse pointer network", "venue": "arXiv preprint arXiv:1611.08307,", "year": 2016 }, { "authors": [ "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Tree-to-tree neural networks for program translation", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Kyunghyun Cho", "Bart Van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "venue": "arXiv preprint arXiv:1406.1078,", "year": 2014 }, { "authors": [ "Patrick Fernandes", "Miltiadis Allamanis", "Marc Brockschmidt" ], "title": "Structured neural summarization", "venue": "arXiv preprint arXiv:1811.01824,", "year": 2018 }, { "authors": [ "Luca Gazzola", "Daniela Micucci", "Leonardo Mariani" ], "title": "Automatic software repair: A survey", "venue": "IEEE Trans. Software Eng.,", "year": 2019 }, { "authors": [ "Vincent J Hellendoorn", "Premkumar Devanbu" ], "title": "Are deep neural networks the best choice for modeling source code", "venue": "In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering,", "year": 2017 }, { "authors": [ "Vincent J Hellendoorn", "Christian Bird", "Earl T Barr", "Miltiadis Allamanis" ], "title": "Deep learning type inference", "venue": "In Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering,", "year": 2018 }, { "authors": [ "Abram Hindle", "Earl T. 
Barr", "Zhendong Su", "Mark Gabel", "Premkumar Devanbu" ], "title": "On the naturalness of software", "venue": "In Proceedings of the 34th International Conference on Software Engineering,", "year": 2012 }, { "authors": [ "Yujia Li", "Daniel Tarlow", "Marc Brockschmidt", "Richard Zemel" ], "title": "Gated graph sequence neural networks", "venue": null, "year": 2015 }, { "authors": [ "Martin Monperrus" ], "title": "Automatic software repair: A bibliography", "venue": "ACM Comput. Surv.,", "year": 2018 }, { "authors": [ "Emilio Parisotto", "Abdel-rahman Mohamed", "Rishabh Singh", "Lihong Li", "Dengyong Zhou", "Pushmeet Kohli" ], "title": "Neuro-symbolic program synthesis", "venue": "CoRR, abs/1611.01855,", "year": 2016 }, { "authors": [ "Chris Piech", "Jonathan Huang", "Andy Nguyen", "Mike Phulsuksombati", "Mehran Sahami", "Leonidas Guibas" ], "title": "Learning program embeddings to propagate feedback on student code", "venue": "In Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37,", "year": 2015 }, { "authors": [ "Veselin Raychev", "Martin Vechev", "Andreas Krause" ], "title": "Predicting program properties from ”big code", "venue": "In Proceedings of the 42Nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages,", "year": 2015 }, { "authors": [ "Veselin Raychev", "Pavol Bielik", "Martin T. Vechev" ], "title": "Probabilistic model for code with decision trees", "venue": "In Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming,", "year": 2016 }, { "authors": [ "Peter Shaw", "Jakob Uszkoreit", "Ashish Vaswani" ], "title": "Self-attention with relative position representations", "venue": "arXiv preprint arXiv:1803.02155,", "year": 2018 }, { "authors": [ "Marko Vasic", "Aditya Kanade", "Petros Maniatis", "David Bieber", "Rishabh Singh" ], "title": "Neural program repair by jointly learning to localize and repair", "venue": null, "year": 1904 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ashish Vaswani", "Samy Bengio", "Eugene Brevdo", "François Chollet", "Aidan N. Gomez", "Stephan Gouws", "Llion Jones", "Lukasz Kaiser", "Nal Kalchbrenner", "Niki Parmar", "Ryan Sepassi", "Noam Shazeer", "Jakob Uszkoreit" ], "title": "Tensor2tensor for neural machine translation. In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas, AMTA 2018", "venue": "Research Papers,", "year": 2018 }, { "authors": [ "Ke Wang", "Rishabh Singh", "Zhendong Su" ], "title": "Dynamic neural program embeddings for program repair", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Martin White", "Christopher Vendome", "Mario Linares-Vásquez", "Denys Poshyvanyk" ], "title": "Toward deep learning software repositories", "venue": "In Proceedings of the 12th Working Conference on Mining Software Repositories,", "year": 2015 } ]
[ { "heading": null, "text": "Models of code can learn distributed representations of a program’s syntax and semantics to predict many non-trivial properties of a program. Recent state-ofthe-art models leverage highly structured representations of programs, such as trees, graphs and paths therein (e.g., data-flow relations), which are precise and abundantly available for code. This provides a strong inductive bias towards semantically meaningful relations, yielding more generalizable representations than classical sequence-based models. Unfortunately, these models primarily rely on graph-based message passing to represent relations in code, which makes them de facto local due to the high cost of message-passing steps, quite in contrast to modern, global sequence-based models, such as the Transformer. In this work, we bridge this divide between global and structured models by introducing two new hybrid model families that are both global and incorporate structural bias: Graph Sandwiches, which wrap traditional (gated) graph message-passing layers in sequential message-passing layers; and Graph Relational Embedding Attention Transformers (GREAT for short), which bias traditional Transformers with relational information from graph edge types. By studying a popular, non-trivial program repair task, variable-misuse identification, we explore the relative merits of traditional and hybrid model families for code representation. Starting with a graph-based model that already improves upon the prior state-of-the-art for this task by 20%, we show that our proposed hybrid models improve an additional 10–15%, while training both faster and using fewer parameters." }, { "heading": "1 INTRODUCTION", "text": "Well-trained models of source code can learn complex properties of a program, such as its implicit type structure (Hellendoorn et al., 2018), naming conventions (Allamanis et al., 2015), and potential bugs and repairs (Vasic et al., 2019). This requires learning to represent a program’s latent, semantic properties based on its source. Initial representations of source code relied on sequential models from natural-language processing, such as n-gram language models (Hindle et al., 2012; Allamanis & Sutton, 2013; Hellendoorn & Devanbu, 2017) and Recurrent Neural Networks (RNNs) (White et al., 2015), but these models struggle to capture the complexity of source code.\nSource code is rich in structured information, such as a program’s abstract syntax tree, data and control flow. Allamanis et al. (2018b) proposed to model some of this structure directly, providing a powerful inductive bias towards semantically meaningful relations in the code. Their Gated Graph Neural Network (GGNN) model for embedding programs was shown to learn better, more generalizable representations faster than classical RNN-based sequence models.\nHowever, the debate on effective modeling of code is far from settled. Graph neural networks typically rely on synchronous message passing, which makes them inherently local, requiring many iterations of message passing to aggregate information from distant parts of the code. However, state-of-the-art graph neural networks for code often use as few as eight message-passing iterations (Allamanis et al., 2018b; Fernandes et al., 2018), primarily for computational reasons: program graphs can be very large, and training time grows linearly with the number of message passes. 
This is in contrast to, e.g., Transformer models (Vaswani et al., 2017), which allow program-wide information flow at every step, yet lack the powerful inductive bias from knowing the code's structure.\nThis leads us to a basic research question: is there a fundamental dichotomy between global, unstructured and local, structured models? Our answer is an emphatic no. Our starting point is the sequence-to-pointer model of Vasic et al. (2019), which is state-of-the-art for the task of localizing and repairing a particular type of bug. As a sequence model, their architecture can (at least potentially) propagate information globally, but it lacks access to the known semantic structure of code. To this end, we replace the sequence encoder of Vasic et al. (2019) with a GGNN, yielding a new graph-to-multihead-pointer model. Remarkably, this model alone yields a 20% improvement over the state of the art, though at the cost of being significantly larger than the sequence model.\nMotivated by this result, we propose two new families of models that efficiently combine longer-distance information, such as the sequence model can represent, with the semantic structural information available to the GGNN. One family, the Graph Sandwich, alternates between message passing and sequential information flow through a chain of nodes within the graph; the other, the Graph Relational Embedding Attention Transformer (GREAT), generalizes the relative position embeddings in Transformers by Shaw et al. (2018) to convey structural relations instead. We show that our proposed model families outperform all prior results, as well as our new, already stronger baseline, by an additional 10% each, while training both substantially faster and using fewer parameters." }, { "heading": "2 RELATED WORK", "text": "Distributed Representation of Programs: There has been increasing interest in modeling source code using machine learning (Allamanis et al., 2018a). Hindle et al. (2012) model programs as sequences of tokens and use an n-gram model for predicting code completions. Raychev et al. (2015) use conditional random fields (CRFs) to predict program properties over a set of pairwise program features obtained from the program's dependency graph. Many approaches use neural language models to embed programs as sequences of tokens (Bhoopchand et al., 2016; White et al., 2015). Some techniques leverage the ASTs of programs in tree-structured recurrent models (Piech et al., 2015; Parisotto et al., 2016; Chen et al., 2018). code2vec (Alon et al., 2018) and code2seq (Alon et al., 2019) model programs as a weighted combination of a set of leaf-to-leaf paths in the abstract syntax tree. Finally, Allamanis et al. (2018b) proposed using GGNNs for embedding program graphs consisting of ASTs together with control-flow and data-flow edges. Some recent models of code embed run-time information of programs, e.g., program traces, in addition to syntactic information (Wang et al., 2018). In this paper, we explore the space of combining sequence-based and graph-based representations of programs, and introduce a Transformer-based model with additional program-edge information to learn program representations. Fernandes et al. (2018) also combine an RNN and a GNN architecture, achieving slight improvements over a GGNN.
However, they only consider a single RNN layer inserted at the start; we include larger and more diverse hybrids, as well as entirely different combinations of structural and sequential features.\nNeural Program Repair: Automatically generating fixes to repair program bugs is an active field of research, with many proposed approaches based on genetic programming, program analysis and formal methods, and machine learning (Monperrus, 2018; Gazzola et al., 2019). In this paper, we focus on a specific class of repair task called VarMisuse, as proposed by Allamanis et al. (2018b), who use a graph-based embedding of programs to predict the most likely variable at each variable-use location and generate a repair prediction whenever the predicted variable is different from the one present, using an enumerative approach. Vasic et al. (2019) improved this approach by jointly predicting both the bug and repair locations using a two-headed pointer mechanism. Our multi-headed pointer graph, graph-sandwich and GREAT models significantly outperform these approaches." }, { "heading": "3 SEMI-STRUCTURED MODELS OF SOURCE CODE", "text": "Models of code have so far either been structured (GNNs) or unstructured (RNNs, Transformers). Considering the graph-based models' substantially superior performance compared to RNNs despite their locality limitations, we may ask: to what extent could global information help GNNs, and to what extent could structural features help sequence-based models?" }, { "heading": "3.1 MODELS", "text": "We address the questions of combining local and global information with two families of models.\nGraph-sandwich Models: Let $T = \langle t_1, t_2, \cdots, t_n \rangle$ denote a program's token sequence and $G = (V, E)$ denote the corresponding program graph, where $V$ is the set of node vertices and $E$ is the list of edge sets for the different edge types. In both graph- and sequence-based models, the nodes $v$ maintain a state vector $h^{(v)}$ that is initialized with an initial node embedding $x^{(v)} \in \mathbb{R}^D$. In a GGNN layer, messages of type $k$ are sent from each node $v \in V$ to its neighbors, computed as $m_k^{(v)} = \mathrm{LinearLayer}_k(h^{(v)})$. After the message-passing step, the set of messages at each node is aggregated as $m^{(v)} = \sum_{e_k(u,v) \in E} m_k^{(u)}$. Finally, the state vector of a node $v$ is updated as $h_{new}^{(v)} = \mathrm{GRU}(m^{(v)}, h^{(v)})$. In an RNN layer, the state of a node $v$ (corresponding to a terminal leaf token $t_i$ in $T$) is updated as $h_{new}^{(v)} = f(t_v, h^{(t_{i-1})})$. A Transformer computes $t_v \rightarrow q_t, k_t, v_t$, corresponding to a query, key and value for each token (Transformers introduce several more components, including multi-headed attention, feed-forward blocks in every layer, and layer-normalization (Vaswani et al., 2017)). Each token then computes its attention to all other tokens using $e_{ij} = (q_i k_j^\top)/\sqrt{N}$, where the $\sqrt{N}$ term is used to scale the attention weights; these can be soft-maxed to yield attention probabilities $a_{ij} = \exp(e_{ij}) / \sum \exp(e_{i,:})$.\nOur first class of models follows from the observation that $T \subseteq V$; i.e., the source code tokens used by sequence models, like RNNs, are by definition also nodes in the graph, whose states GGNNs update with every message pass. We can thus envision a combined model that uses each of these as a building block; for instance, assuming initial node features $x^{(v)} \in \mathbb{R}^D$, the formula [RNN, GGNN(3), RNN] describes a model in which we first run an RNN on all tokens $\in T$ (in lexical ordering); then, using these as initial states for $v \in T$ while using the default node-type embeddings for all other nodes, run three message-passing steps using a GGNN, after which we again gather the nodes corresponding to $T$ and update their state with an additional RNN pass.\nThe resulting family of models alternates GGNN-style message-passing operations and layers of sequence-based models. By varying the number and size of sequential layers and blocks of GGNN-style message passing, this variant particularly provides insight into the first question above (how can global information help GNNs?), by showing the transition in performance potential of models that increasingly incorporate sequential features. We refer to this class of models as sandwich models.
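As an illustration of the composition just described, here is a minimal PyTorch sketch of one [RNN, GGNN(k), RNN] sandwich layer. It is our own rendering, not the authors' released code; the edge-list format and all names are assumptions:

```python
import torch
import torch.nn as nn

class SandwichLayer(nn.Module):
    """[RNN, GGNN(k), RNN]: an RNN over token nodes, k message passes over
    all graph nodes, then another RNN over the token nodes."""
    def __init__(self, dim, num_edge_types, k=3):
        super().__init__()  # dim must be even (bi-directional halves)
        self.rnn_in = nn.GRU(dim, dim // 2, bidirectional=True, batch_first=True)
        self.rnn_out = nn.GRU(dim, dim // 2, bidirectional=True, batch_first=True)
        self.msg = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_edge_types))
        self.update = nn.GRUCell(dim, dim)
        self.k = k

    def forward(self, h, edges, token_ids):
        # h: (num_nodes, dim); edges: list of (edge_type, src, dst) index
        # tensors; token_ids: source-token node indices in lexical order.
        seq, _ = self.rnn_in(h[token_ids].unsqueeze(0))
        h = h.clone(); h[token_ids] = seq.squeeze(0)
        for _ in range(self.k):                 # GGNN message passes
            m = torch.zeros_like(h)
            for etype, src, dst in edges:
                m.index_add_(0, dst, self.msg[etype](h[src]))
            h = self.update(m, h)
        seq, _ = self.rnn_out(h[token_ids].unsqueeze(0))
        h = h.clone(); h[token_ids] = seq.squeeze(0)
        return h
```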
Graph Relational Embedding Attention Transformer: The above family of models still relies on explicit message passing for its structural bias, thereby only indirectly combining structural and global information in the model. We may wish to instead directly encode structural bias into a sequence-based model, which requires a relaxation of the 'hard' inductive bias of the GGNN. For Transformer-based architectures, Shaw et al. (2018) show that relational features can be incorporated directly into the attention function by changing the attention computation to $e_{ij} = (q_i + b_{ij}) k_j^\top / \sqrt{N}$ (note that, although equivalent, Shaw et al. (2018) add the relational bias to the 'key' computation instead), where $q_i$ and $k_j$ correspond to the query and key vectors as described above, $b_{ij}$ is an added bias term for the specific attention weight between tokens $i$ and $j$, and $N$ is the per-head attention dimension. In our case, we compute $b_{ij} = W_e^\top e + b_e$, where $W_e \in \mathbb{R}^N$, $b_e \in \mathbb{R}$, and $e \in \mathbb{R}^N$ is an embedding of the edge type connecting nodes $i$ and $j$, if any. If multiple edge types are present between two nodes, the resulting biases are simply added. We name this model GREAT, for Graph Relational Embedding Attention Transformer." }
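The edge-biased attention above can be sketched as follows (single-head, unbatched PyTorch, for illustration only); we assume at most one edge type per node pair and reserve type 0 for "no edge", simplifications that are ours rather than the paper's:

```python
import torch
import torch.nn as nn

class RelationalAttention(nn.Module):
    # e_ij = (q_i + b_ij) k_j^T / sqrt(N), with scalar bias
    # b_ij = W_e^T emb(edge_type(i, j)) + b_e.
    def __init__(self, dim, num_edge_types):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.edge_emb = nn.Embedding(num_edge_types, dim)
        self.w_e = nn.Linear(dim, 1)  # folds W_e and b_e into one layer
        self.scale = dim ** -0.5

    def forward(self, x, edge_types):
        # x: (T, dim); edge_types: (T, T) integer tensor, 0 = no edge.
        q, k, v = self.q(x), self.k(x), self.v(x)
        b = self.w_e(self.edge_emb(edge_types)).squeeze(-1)   # (T, T)
        b = b * (edge_types != 0).float()   # zero bias where no edge exists
        # (q_i + b_ij) . k_j = q_i . k_j + b_ij * sum_d k_j[d]
        scores = (q @ k.t() + b * k.sum(dim=-1)) * self.scale
        return torch.softmax(scores, dim=-1) @ v
```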
Although other options sometimes outperform this architecture, the improvements are generally minor, so we rely on this model for our baseline. One hyperparameter of the architecture is whether to use different transformations at each message-passing step, or to reuse one set of transformations for multiple message passes. Following Allamanis et al. (2018b), we use blocks of two message-passing layers, in which the first layer is repeated three times, for four message passes per block. We then sweep over GGNN architectures that repeat these blocks 1 to 4 times (thus yielding 4 to 16 message passes). By default, the message dimension is set to 128, but we include an ablation with 256-dimensional messages as well.\nRNNs: We experimented with the one-directional entailment-attention-based RNN proposed by Vasic et al. (2019), but found a simpler bi-directional RNN architecture to work even better. We use GRUs as the recurrent cells, vary the number of layers from 1 to 3, and the hidden dimension (of the concatenated forward and backward component) in {128, 256, 512}. Transformers: We base our architecture on the original Transformer (Vaswani et al., 2017), varying the number of layers from 1 to 10 and the attention dimension in {128, 256, 512, 1024}. Sandwich Models: We distinguish between two types of sandwich models: ‘small’ sandwiches, which add a single RNN or Transformer to a GGNN architecture, and ‘large’ sandwiches, which wrap every message-passing block (as defined above) with a 128-dimensional (bi-directional) RNN/Transformer layer. We vary the number of message-passing blocks from 1 to 3 (corresponding to 4 to 12 message passes) to span a similar parameter domain as the GGNNs above (ca. 1.5M – 5M), increasing the number of layers to 2 and their dimension to 512 for a later ablation.\nGREAT: Uses the same architectural variations as the Transformer family; edge-type embedding dimensions are fixed at the per-head attention dimension, as described above.\nGlobal hyper-parameters: We train most of our models with batch sizes of {12.5K, 25K, 50K} tokens, with the exception of the Transformer architectures; due to the quadratic nature of the attention computation, 25K tokens was too large for these models, so we additionally trained these with 6.25K-token batches.4 Learning rates were varied in {1e-3, 4-e4, 1e-4, 4e-5, 1e-5} using an Adam optimizer, where we omitted the first option for our GGNN models and the last for our RNNs due to poor performance. Sub-tokens were embedded using 128-dimensional embeddings.\nHardware: all our models were trained on a single Tesla P100 GPU on 25 million samples, which required between 40 and 250 hours for our various models. However, we emphasize that overall training time is not our main objective; we primarily assess the ultimated converged accuracy of our models and present training behavior over time for reference of our various models’ training behavior." }, { "heading": "3.3 ABOUT GRAPH REPRESENTATIONS OF CODE", "text": "Our program graphs borrow many edge types from Allamanis et al. (2018b), such as data-flow (e.g., read & write), adjacent-token, and syntactic edges, which we further augment with edges between control-flow statements and function calls. When representing programs as graphs, a key decision needs to be made regarding the Abstract Syntax Tree (AST). Typically, one of the edge types in the graphs represents syntactic parent-child relationships in the AST. 
Additionally, some of the edges representing relations (e.g., control-flow) are naturally represented as edges between internal nodes in this tree, e.g., between two IfStatement nodes. However, ablations often find that the effectiveness of including the AST is limited in graph-based models (Allamanis et al., 2018b).\nThis raises the question of whether it is possible to represent programs as graphs that include sequential and semantic information, but not syntax. To this end, we propose a leaves-only graph representation for code as follows: edges that represent semantic relationships such as control flow and data flow can easily be moved down from internal nodes – which typically represent a span of multiple tokens – to those leaf nodes in the graph that represent the begin token of that span. Thus, an edge that used to connect two IfStatement interior AST nodes is moved down to connect the corresponding if tokens. Now, the AST can be omitted entirely, thereby removing parent-child relations among syntax nodes, producing what we call a leaves-only graph. This latter representation is substantially more compressed than the graphs with ASTs, often using 2–3x fewer nodes (while retaining most of the edges), and additionally aligns better with sequence-based models, because all edges are directly connected to the original code tokens (which, we conjecture, improves the interaction between the two types of models). We compare both settings for each graph-based model but, unless otherwise specified, we use the 'full' graphs for the regular GGNN model and the 'leaves-only' graphs (without ASTs) for the sandwich and GREAT models." }
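The leaves-only conversion can be sketched as a simple graph rewrite. The following Python sketch is illustrative; the edge-list format, the "ast_child" label and the begin_token mapping are our assumptions, not the paper's data format:

```python
def to_leaves_only(nodes, edges, begin_token):
    """Convert a full program graph to a leaves-only graph: re-attach every
    edge endpoint that is an internal AST node to the begin token of its
    span, then drop AST parent-child edges and internal nodes entirely.

    nodes: iterable of node ids; edges: list of (edge_type, src, dst);
    begin_token: dict mapping each node id to its begin-token leaf node id
    (a token node maps to itself). All names here are hypothetical.
    """
    new_edges = []
    for etype, src, dst in edges:
        if etype == "ast_child":       # syntactic edges are dropped
            continue
        new_edges.append((etype, begin_token[src], begin_token[dst]))
    token_nodes = [n for n in nodes if begin_token[n] == n]
    return token_nodes, new_edges
```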
, { "heading": "4 EXPERIMENTAL SETUP", "text": "The VarMisuse Task: We focus our study on the variable-misuse localization-and-repair task (Vasic et al., 2019): given a function, predict two pointers into the function's tokens, one pointer for the location of a variable use containing the wrong variable (or a special no-bug location), and one pointer for any occurrence of the correct variable that should be used at the faulty location instead.\nSynthetic Dataset: We used the ETH Py150 dataset (Raychev et al., 2016), which is based on GitHub Python code and already partitioned into train and test splits (100K and 50K files, respectively). We further split the 100K train files into 90K train and 10K validation examples and applied a deduplication step on that dataset (Allamanis, 2018). We extracted all top-level function definitions from these files; any function that uses multiple variables can be turned into a training example by randomly replacing one variable usage with another. As there may be many candidates for such bugs in a function, we limit our extraction to up to three samples per function to avoid biasing our dataset too strongly towards longer functions. For every synthetically generated buggy example, an unperturbed, bug-free example of the function is included as well, to keep our dataset balanced, yielding ca. 2M total training and 755K test samples. Finally, we train with functions with up to 250 tokens; at test time, we raise this to 1,000 to study our models' generalization to longer functions.\nMetrics: As we are mainly interested in contrasting the behavior of different models of code, we focus most of our results on the various models' learning curves by tracking development-set accuracy on 25K held-out samples every 250K samples as the models train. Here, we measure two accuracy metrics: localization accuracy (whether the model correctly identifies the bug's location for buggy samples); and (independently) repair accuracy (whether the model points to the correct variable to repair the bug). Note that these metrics focus on buggy samples; the models also determine whether a function is buggy, which we discuss below. We group all models by their 'family', as categorized in Section 3.2, reporting the maximum held-out performance per family.\nFor deeper insight into the fully trained models' performance, we also analyze the best models in each family in more depth on the test portion of our synthetic dataset. Specifically, we assess their bugginess-classification accuracy (whether the model correctly identifies the method as (non-)buggy) and their joint localization-and-repair accuracy (for buggy samples, how often the model correctly localizes and repairs the bug). Here, we also increase the maximum function size to 1,000 tokens and analyze the impact of longer functions on our models' performance." }, { "heading": "4.1 DATA & CODE RELEASE", "text": "We release a public implementation of the GREAT model based on Tensorflow, as well as the program graphs for all samples in our training and evaluation datasets whose license permits us to redistribute these, at: https://doi.org/10.5281/zenodo.3668323, which tracks the latest release of our Github repository at: https://github.com/VHellendoorn/ICLR20-Great." }, { "heading": "5 RESULTS", "text": "There are many degrees of freedom in our family of models, so we structure our results around a series of comparisons, which we analyze and discuss in this section. We start with our key result, which compares all our model families (RNNs, Transformers, GGNNs, Sandwich hybrids, and GREAT models) across a comparable parameter domain (ca. 1.4M – 5.5M parameters) in Figure 1.\nAlthough there are subtle differences between the models' behavior on localization and repair accuracy, the overall picture is consistent: whereas our newly proposed graph-to-multihead-pointer models already substantially outperform RNN-based models (the previous state-of-the-art), and sometimes Transformers, the hybrid global & structured models achieve significantly better results faster.\nTime-wise (the top two figures), the GGNN models take the longest to converge, continuing to improve slightly even after a week of training, mainly because their largest (16-layer) architecture starts to dominate the 12-layer version's performance after ca. 170h. The Sandwich models follow its training curve, but are more accurate and faster to converge, achieving especially good results for limited training budgets, partly because they succeeded with just 4 – 8 layers of message passing by relying on their global components.\nThe Transformer architecture, although slower at first, widely outperforms the RNN as a baseline model. The GREAT model tracks its learning curve, starting out slower than the models with explicit message passing, but gradually overtaking them after ca. 10h, as the underlying Transformer becomes increasingly effective. We note that this model achieves state-of-the-art results despite having, at the time of this writing, received less training time (ca. 64h compared to up to 240h).
64h compared to up to 240h).\nThe bottom half of Figure 1 abstracts away the potentially confounding issue of implementation speed of our models by tracking performance w.r.t. the number of training samples. Naturally, the end-points of the curves (the converged accuracy) are identical, but importantly the various models’ training curves are quite similar; even here, the GGNN is only able to outperform some of our proposed models briefly, yielding inferior performance to all within just one epoch.\nThe RNN and GGNN models appear to be particularly complementary; even though the RNN’s localization accuracy is very poor compared to the GGNN, the combined model still sustains a ∼5% improvement on the latter.6 However, the Transformer Sandwich does not seem to benefit similarly, showing virtually no difference in performance with the RNN Sandwich model. This strongly suggests that the Transformer’s ability to access long-distance information overlaps in large part (though not entirely, given GREAT’s performance) with the GGNNs’ ability to do so using message passing. We conjecture that the Transformer learns to infer many of the same connections (e.g., data-flow, control-flow) that are encoded explicitly in the graph’s message passing.\n6Difference up to convergence of the combined model; the accuracy gap shrinks to 2.1% after ca. 10 days.\nTo understand the behavior of the many models and combinations that may be used for code, we now explore the variations on our choices of parameters, models, and metrics." }, { "heading": "5.1 LARGER MODELS", "text": "Model capacity is a potential threat to any comparison between models of different families. We aimed to ensure a fair comparison in the previous section by selecting a range of hyper-parameters (which includes the number of stacked layers) for these architectures that span a similar parameter count range. For instance, a 6-layer Transformer with 512-dimensional attention is comparable to a 2-layer 512-dimensional bi-directional RNN and an 8-layer GGNN. However, all these architectures are relatively modest, having at most ∼5M parameters. By increasing the number and dimensionality of their layers, we can evaluate a second family of models with ca. 5–20M parameters.\nFigure 2 shows the performance for the low- and high-parameter variations for each of our best-performing model families. Overall, while providing more parameters to the GGNNs made virtually no difference, all our hybrid models increase by 2–3% in both localization and repair accuracy, providing further support for combining global, structured models. The best-performing instances of each model family were consistently the larger architectures: 15M parameters for the GGNN, 12.5M & 10M for the RNN and Transformer Sandwiches respectively, and 7.9M for GREAT.7\n7Which achieves state-of-the-art results in all settings despite having had comparatively less training time." }, { "heading": "5.2 ON SYNTACTIC INFORMATION", "text": "In the previous results, the GGNN models were trained on ‘full’ graphs (as described in Section 3.3), that use the code’s AST structure, and the sandwich models on ‘leaves-only’ graphs, with only source token nodes, and edges moved to connect these directly. These settings are arguably each appropriate to the underlying model, but both models can also use the alternative setting.\nFigure 3 shows the training curves for the alternative settings. 
In all cases, the models that do not use syntax train substantially faster because each sample’s graph representation is less than half the size, so these models naturally lead in accuracy early on in training. However, whereas the GGNN equipped with syntax overtakes its counterpart within ca. 48h, the sandwich models display a much longer lag, with no cross-over observed at all on localization accuracy in this time window.8\n8Given the slope of the two curves, we may expect a reversal after 10+ days of training.\nThe sandwich model on full graphs still compares favorably with the GGNN baseline, though its early training behavior is not as effective. It is also interesting to note that the best-performing Sandwich models in this setting were consistently architectures with more message-passing steps. This may be due to the additional distance between information propagated along the tokens and along semantic edges, which in this setting are almost universally connected to AST-internal nodes." }, { "heading": "5.3 RNNS IN SANDWICHES: SINGLE VS. MANY", "text": "Recent work on neural summarization also mixed RNNs and GGNNs (Fernandes et al., 2018), but did so by inserting a single RNN layer into a GNN architecture, before any message passing. We compare this architecture to a full Sandwich model in Figure 4. Although a single RNN certainly helps compared to the GGNN, interleaving RNNs and GGNN-style message passes performed substantially better. In fact, the best performing full Sandwiches used fewer parameters than the Single models because they used 8 message passes instead of 12, relying more heavily on the RNNs.9\n9The same pattern held for high-parameter versions of these models, where the performance gap also grew." }, { "heading": "5.4 TEST-SET ANALYSIS", "text": "Having identified our best-performing models in each family, we now study their performance on the test data, specifically using the metrics used in Vasic et al. (2019) (see Section 4) in Table 1. In general, the two metrics correlated well; models that accurately determined whether a function contained a bug also accurately identified the bug (and repair), as may be expected. The baseline RNN model achieves a modest 44% accuracy at the latter task; this is slightly lower than reported in prior work (Vasic et al., 2019), which is likely due in part to our dataset de-duplication. Transformers and GGNNs perform substantially better (and comparably, though the latter trained 6x longer for this performance), but still fall well short of our hybrid models’ performance, which are markedly more accurate on long functions. The GREAT model shows most promise on the repair task, already outperforming the sandwich models despite having so far had limited training time." }, { "heading": "5.5 REAL BUGS ANALYSIS", "text": "We want to ensure that our models can be useful for real bugs and do not simply overfit to the synthetic data generation that we used. This risk exists because we did not filter our introduced bugs based on whether they would be difficult to detect, for instance because they conflate variables with similar names, usage, or data types; presumably, such bugs are more likely to escape a developer’s notice and find their way into real code bases. It is therefore expected that performance on real-world bugs will be lower for all our models, but we must verify that our proposed models do not just outperform GGNNs on synthetic data, e.g. 
by memorizing characteristics of synthetic bugs.\nTo mitigate this threat, we collect real variable misuse bugs from code on GitHub. Specifically, we collect ca. 1 million commits that modify Python files from GitHub. We extracted all changes to functions from these commits, filtering these according to the same criteria that we used to introduce variable-misuse bugs: we looked for commits that exclusively changed a single variable usage in a function body from one variable in scope to another. We focus on functions with up to 250 tokens, since all our models performed substantially better on these in Table 1. We identified 170 such changes, in which we assumed that the version before the change was buggy and the updated version correct. We removed any functions that had occurred in our training data, of which we found 9, and paired the remaining functions up (correct and buggy) to create a real-world evaluation set with 322 functions, which we presented to our models.\nTable 2 shows the results of running our various models on these bugs. In general, these were clearly substantially more difficult for all our models than the synthetic samples we generated. However, we see a clear difference in performance, with all our proposed models performing substantially better than previous baselines and showing favorable precision/recall trade-offs." }, { "heading": "6 CONCLUSION", "text": "We demonstrate that models leveraging richly structured representations of source code do not have to be confined to local contexts. Instead, models that leverage only limited message passing in combination with global models learn much more powerful representations faster. We proposed two different architectures for combining local and global information: sandwich models, which combine two different message-passing schedules and achieve highly competitive models quickly, and the GREAT model, which adds information from a sparse graph to a Transformer to achieve state-of-the-art results. In the process, we raise the state-of-the-art performance on the VarMisuse bug localization and repair task by over 30%." } ]
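To make the data pipeline of Sections 4 and 5.5 concrete, here is a minimal sketch of the synthetic VarMisuse sample generation described above: swap one in-scope variable usage for another, cap the number of buggy variants per function, and pair each with a bug-free copy. This is not the authors' released implementation; the token-list interface and the `is_var` predicate are simplifications standing in for a real Python parser with scope analysis.

```python
import random

def make_varmisuse_samples(tokens, is_var, max_samples=3, seed=0):
    """Sketch of the synthetic sample generation of Section 4: replace one
    variable usage with another in-scope variable, cap buggy variants per
    function, and pair each buggy sample with an unperturbed copy."""
    rng = random.Random(seed)
    var_positions = [i for i, t in enumerate(tokens) if is_var(t)]
    vocab = sorted({tokens[i] for i in var_positions})
    if len(vocab) < 2:
        return []  # a function must use multiple variables to yield a bug

    # Only perturb usages whose variable occurs elsewhere too, so a repair
    # pointer (another occurrence of the correct variable) always exists.
    counts = {v: sum(tokens[i] == v for i in var_positions) for v in vocab}
    candidates = [i for i in var_positions if counts[tokens[i]] > 1]

    samples = []
    # Up to `max_samples` bugs per function, to limit bias toward long functions.
    for pos in rng.sample(candidates, min(max_samples, len(candidates))):
        correct = tokens[pos]
        buggy = list(tokens)
        buggy[pos] = rng.choice([v for v in vocab if v != correct])
        samples.append({
            "tokens": buggy,
            "bug_location": pos,  # localization target
            "repair_targets": [i for i in var_positions
                               if tokens[i] == correct and i != pos],
            "buggy": True,
        })
        # Bug-free twin keeps the dataset balanced.
        samples.append({"tokens": list(tokens), "bug_location": None,
                        "repair_targets": [], "buggy": False})
    return samples
```

The same single-usage-changed criterion, applied in reverse to commit diffs, is what the real-bug mining of Section 5.5 filters for.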
2020
null
SP:23a43ab91a13463f9b5185d5bb0ab328ea6eb0c7
[ "The authors propose a method for learning macro-actions in a multi-step manner, where Sequitur, a grammar calculator, is leveraged together with an entropy-minimisation based strategy to find relevant macro-actions. The authors propose a system to bootstrap the weights of these macro-actions when increasing the policy's action space, and a system to increase the amount of data (and bias it towards macro-actions) used to learn a policy for when conditioned on this increased action-space. The authors test against a subset of the Arcade Learning Environment suite.", "This paper introduced a way to combine actions into meta-actions through action grammar. The authors trained agents that executes both primitive actions and meta-actions, resulting in better performance on Atari games. Specifically, meta-actions are generated after a period of training from collected greedy action sequences by finding repeated sub-sequences of actions. Several tricks are used to speed up learning and to make the framework more flexible. The most effective one is HAR (hindsight action replay), without which the agent's performance reduces to that of the baseline." ]
From a young age we learn to use grammatical principles to hierarchically combine words into sentences. Action grammars is the parallel idea, that there is an underlying set of rules (a grammar) that govern how we hierarchically combine actions to form new, more complex actions. We introduce the Action Grammar Reinforcement Learning (AG-RL) framework which leverages the concept of action grammars to consistently improve the sample efficiency of Reinforcement Learning agents. AG-RL works by using a grammar inference algorithm to infer the action grammar of an agent midway through training. The agent’s action space is then augmented with macro-actions identified by the grammar. We apply this framework to Double Deep Q-Learning (AG-DDQN) and a discrete action version of Soft Actor-Critic (AG-SAC) and find that it improves performance in 8 out of 8 tested Atari games (median +31%, max +668%) and 19 out of 20 tested Atari games (median +96%, maximum +3,756%) respectively without substantive hyperparameter tuning. We also show that AG-SAC beats the model-free state-of-the-art for sample efficiency in 17 out of the 20 tested Atari games (median +62%, maximum +13,140%), again without substantive hyperparameter tuning.
[]
[ { "authors": [ "2017. Benjamin Beyret", "Ali Shafti", "A Aldo Faisal" ], "title": "Dot-to-dot: Explainable hierarchical reinforcement", "venue": null, "year": 2017 }, { "authors": [ "Steven J Bradtke", "Michael O Duff" ], "title": "Reinforcement learning methods for continuous-time markov decision problems", "venue": "In Advances in neural information processing systems,", "year": 1995 }, { "authors": [ "Christian Daniel", "Herke Van Hoof", "Jan Peters", "Gerhard Neumann" ], "title": "Probabilistic inference for determining options in reinforcement learning", "venue": "Machine Learning,", "year": 2016 }, { "authors": [ "N. Ding", "L. Melloni", "X. Tian", "D. Poeppel" ], "title": "Rule-based and word-level statistics-based processing of language: Insights from neuroscience", "venue": "Philosophical Transactions of the Royal Society of London B: Biological Sciences,", "year": 2012 }, { "authors": [ "Aldo Faisal", "Dietrich Stout", "Jan Apel", "Bruce Bradley" ], "title": "The manipulative complexity of lower paleolithic stone toolmaking", "venue": "PloS one,", "year": 2010 }, { "authors": [ "Carlos Florensa", "Yan Duan", "Pieter Abbeel" ], "title": "Stochastic neural networks for hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1704.03012,", "year": 2017 }, { "authors": [ "Marta Garnelo", "Murray Shanahan" ], "title": "Reconciling deep learning with symbolic artificial intelligence: representing objects and relations", "venue": "Current Opinion in Behavioral Sciences,", "year": 2019 }, { "authors": [ "Marta Garnelo", "Kai Arulkumaran", "Murray Shanahan" ], "title": "Towards deep symbolic reinforcement learning", "venue": "arXiv preprint arXiv:1609.05518,", "year": 2016 }, { "authors": [ "Erin E Hecht", "DA Gutman", "Nada Khreisheh", "SV Taylor", "J Kilner", "AA Faisal", "BA Bradley", "Thierry Chaminade", "Dietrich Stout" ], "title": "Acquisition of paleolithic toolmaking abilities involves structural remodeling to inferior frontoparietal regions", "venue": "Brain Structure and Function,", "year": 2015 }, { "authors": [ "Bernhard Hengst" ], "title": "Discovering hierarchy in reinforcement learning with hexq", "venue": "In ICML,", "year": 2002 }, { "authors": [ "Lukasz Kaiser", "Mohammad Babaeizadeh", "Piotr Milos", "Blazej Osinski", "Roy H. Campbell", "Konrad Czechowski", "Dumitru Erhan", "Chelsea Finn", "Piotr Kozakowski", "Sergey Levine", "Ryan Sepassi", "George Tucker", "Henryk Michalewski" ], "title": "Model-based reinforcement learning for atari", "venue": null, "year": 1903 }, { "authors": [ "Shie Mannor", "Ishai Menache", "Amit Hoze", "Uri Klein" ], "title": "Dynamic abstraction in reinforcement learning via clustering", "venue": "In Proceedings of the twenty-first international conference on Machine learning,", "year": 2004 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin A. Riedmiller", "Andreas Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518:529–533,", "year": 2015 }, { "authors": [ "C. Nevill-Manning", "I. 
Witten" ], "title": "Identifying hierarchical structure in sequences: A linear-time algorithm", "venue": null, "year": 1997 }, { "authors": [ "OpenAI", "Marcin Andrychowicz", "Bowen Baker", "Maciek Chociej", "Rafal Józefowicz", "Bob McGrew", "Jakub W. Pachocki", "Arthur Petron", "Matthias Plappert", "Glenn Powell", "Alex Ray", "Jonas Schneider", "Szymon Sidor", "Josh Tobin", "Peter Welinder", "Lilian Weng", "Wojciech Zaremba" ], "title": "Learning dexterous in-hand", "venue": "manipulation. ArXiv,", "year": 2018 }, { "authors": [ "T. Osa", "V. Tangkaratt", "M. Sugiyama" ], "title": "Hierarchical reinforcement learning via advantageweighted information maximization", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ronald Edward Parr" ], "title": "Hierarchical control and learning for Markov decision processes", "venue": null, "year": 1998 }, { "authors": [ "K. Pastra", "Y. Aloimonos" ], "title": "The minimalist grammar of action", "venue": "Philosophical Transactions of the Royal Society of London B: Biological Sciences,", "year": 2012 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lynette R. Baker", "Matthew Lai", "Adrian Bolton", "Yutian Chen", "Timothy P. Lillicrap", "Hui-zhen Fan", "Laurent Sifre", "George van den Driessche", "Thore Graepel", "Demis Hassabis" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Ozgür Simsek", "Alicia P Wolfe", "Andrew G Barto" ], "title": "Local graph partitioning as a basis for generating temporally-extended actions in reinforcement learning", "venue": "In AAAI Workshop Proceedings,", "year": 2004 }, { "authors": [ "Matthew Smith", "Herke van Hoof", "Joelle Pineau" ], "title": "An inference-based policy gradient method for learning options", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Martin Stolle", "Doina Precup" ], "title": "Learning options in reinforcement learning", "venue": "In International Symposium on abstraction, reformulation, and approximation,", "year": 2002 }, { "authors": [ "Dietrich Stout", "Thierry Chaminade", "Andreas A.C. Thomik", "Jan Apel", "Aldo A. Faisal" ], "title": "Grammars of action in human behavior and evolution", "venue": null, "year": 2018 }, { "authors": [ "Hado van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double qlearning", "venue": "In AAAI,", "year": 2015 }, { "authors": [ "Alexander Vezhnevets", "Volodymyr Mnih", "John Agapiou", "Simon Osindero", "Alex Graves", "Oriol Vinyals", "Koray Kavukcuoglu" ], "title": "Strategic attentive writer for learning macro-actions", "venue": null, "year": 2016 }, { "authors": [ "Alexander Sasha Vezhnevets", "Simon Osindero", "Tom Schaul", "Nicolas Heess", "Max Jaderberg", "David Silver", "Koray Kavukcuoglu" ], "title": "Feudal networks for hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1703.01161,", "year": 2017 }, { "authors": [ "Yuhuai Wu", "Elman Mansimov", "Shun Liao", "Roger B. Grosse", "Jimmy Ba" ], "title": "Scalable trust-region method for deep reinforcement learning using kronecker-factored approximation", "venue": null, "year": 2017 }, { "authors": [ "G. 
Yule" ], "title": "The Study of Language", "venue": null, "year": 2015 }, { "authors": [ "Vinicius Zambaldi", "David Raposo", "Adam Santoro", "Victor Bapst", "Yujia Li", "Igor Babuschkin", "Karl Tuyls", "David Reichert", "Timothy Lillicrap", "Edward Lockhart" ], "title": "Relational deep reinforcement learning", "venue": "arXiv preprint arXiv:1806.01830,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement Learning (RL) has made great progress in recent years, successfully being applied to settings such as board games (Silver et al., 2017), video games (Mnih et al., 2015) and robot tasks (OpenAI et al., 2018). Some of this advance is due to the use of deep learning techniques to avoid induction biases in the mapping of sensory information to states. However, widespread adoption of RL in real-world domains has remained limited primarily because of its poor sample efficiency, a dominant concern in RL (Wu et al., 2017), and complexity of the training process that need to be managed by various heuristics.\nHierarchical Reinforcement Learning (HRL) attempts to improve the sample efficiency of RL agents by making their policies to be hierarchical rather than single level. Using hierarchical policies can lead not only to faster learning, but also ease human understanding of the agent’s behaviour – this is because higher-level action representations are easier to understand than low-level ones (Beyret et al., 2019). Identifying the right hierarchical policy structure is, however a non-trivial task (Osa et al., 2019) and so far progress in hierarchical RL has been slow and incomplete, as no truly scalable and successful hierarchical architectures exist (Vezhnevets et al., 2016). Not surprisingly most stateof-the-art RL agents at the moment are not hierarchical.\nIn contrast, humans, use hierarchical grammatical principles to communicate using spoken/written language (Ding et al., 2012) but also for forming meaningful abstractions when interacting with various entities. In the language case, for example, to construct a valid sentence we generally combine a noun phrase with a verb phrase (Yule, 2015). Action grammars are an analogous idea proposing there is an underlying set of rules for how we hierarchically combine actions over time to produce new actions. There is growing neuroscientific evidence that the brain uses similar processing strategies for both language and action, meaning that grammatical principles are used in both neural representations of language and action (Faisal et al., 2010; Hecht et al., 2015; Pastra & Aloimonos, 2012). We hypothesise that using action grammars would allow us to form a hierarchical action represen-\ntation that we could use to accelerate learning. Additionally, in light of the neuroscientific findings, hierarchical structures of action may also explain the interpretability of hierarchical RL agents, as their representations are structurally more similar to how humans structure tasks. In the following we will explore the use of grammar inference techniques to form a hierarchical representation of actions that agents can operate on. Just like much of Deep RL has focused on forming unbiased sensory representations from data-driven agent experience, here we explore how data-driven agent experience of actions can contribute to forming efficient representations for learning and controlling tasks.\nAction Grammar Reinforcement Learning (AG-RL) operates by using the observed actions of the agent within a time window to infer an action grammar. We use a grammar inference algorithm to substitute repeated patterns of primitive actions (i.e. words) into temporal abstractions (rules, analogous to a sentence). Similarly, we then replace repeatedly occurring rules of temporal abstractions with higher-level rules of temporal abstractions (analogous to paragraphs), and so forth. 
These extracted action grammars (the set of rules, rules of rules, rules of rules of rules, etc.) are appended to the agent’s action set in the form of macro-actions so that agents can choose (and evaluate) primitive actions as well as any of the action grammar rules. We show that AG-RL is able to consistently and significantly improve sample efficiency across a wide range of Atari settings." }, { "heading": "2 RELATED WORK", "text": "Our concept of action grammars that act as efficient representations of actions is both related to and different from existing work in the domain of Hierarchical Control. Hierarchical control of temporally-extended actions allows RL agents to constrain the dimensionality of the temporal credit assignment problem. Instead of having to make an action choice at every tick of the environment, the top-level policy selects a lower-level policy that executes actions for potentially multiple timesteps. Once the lower-level policy finishes execution, it returns control back to the top-level policy. Identification of suitable low-level sub-policies poses a key challenge to HRL.\nCurrent approaches can be grouped into three main pillars: Graph-theoretic (Hengst, 2002; Mannor et al., 2004; Simsek et al., 2004) and visitation-based (Stolle & Precup, 2002; Simsek et al., 2004) approaches aim to identify “bottlenecks” within the state space. Bottlenecks are regions in the state space which characterize successful trajectories. Our work, on the other hand, identifies patterns solely in the action space and does not rely on reward-less exploration of the state space. Furthermore, our action grammar framework defines a set of macro-actions as opposed to full option-specific sub-policies. Thereby, it is less expressive but more sample-efficient to infer.\nGradient-based approaches, on the other hand, discover parametrized temporally-extended actions by iteratively optimizing an objective function such as the estimated expected value of the log likelihood of the observed data under the current policy with respect to the latent variables in a probabilistic setting (Daniel et al., 2016) or the expected cumulative reward in a policy gradient context (Bacon et al., 2017; Smith et al., 2018). Our action grammar induction, on the other hand, infers patterns without supervision, solely based on a compression objective. The resulting parse tree provides an interpretable structure for the distilled skill set.\nFurthermore, recent approaches in macro-action discovery (Vezhnevets et al., 2017; Florensa et al., 2017) attempt to split the goal declaration and goal achievement across different stages and layers of the learned architecture. Usually, the top level of the hierarchy specifies goals in the environment while the lower levels have to achieve them. Thus, while such hierarchical construction of goals follows a set of coded rules, our action grammars are inferred in an entirely data-driven fashion from agent experience. Again, such architectures lack sample efficiency and easy interpretation. Our context-free grammar-based approach, on the other hand, is a symbolic method that requires few rollout traces and generalizes to more difficult task-settings.\nFinally, unlike recent work on unifying symbolic and connectionist methods, we do not aim to discover relationships between entities (Garnelo et al., 2016; Zambaldi et al., 2018). Instead our proposed action grammar framework achieves interpretability by extracting hierarchical subroutines associated with sub-goal achievements (Beyret et al., 2019)." 
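To illustrate the grammar inference step sketched in the introduction (and detailed in Section 3), below is a minimal, toy stand-in for Sequitur in Python: it repeatedly introduces a fresh rule symbol for the repeating sub-sequence that most reduces the total description length of the action sequence, mirroring the information-theoretic regularisation the paper uses. The function names, the greedy occurrence counting, and the length bounds are illustrative choices, not the authors' implementation, and the exact rules it finds will generally differ from Sequitur's.

```python
from itertools import count

def infer_action_grammar(actions, min_len=2, max_len=8):
    """Toy grammar inference in the spirit of Sequitur: greedily add a rule
    for the repeating sub-sequence that most reduces the description length
    of the encoding (remaining sequence symbols plus stored rule symbols)."""
    seq, rules, fresh = list(actions), {}, count()
    while True:
        best, best_saving = None, 0
        for n in range(min_len, max_len + 1):
            starts = {}
            for i in range(len(seq) - n + 1):
                starts.setdefault(tuple(seq[i:i + n]), []).append(i)
            for sub, positions in starts.items():
                k, last = 0, -n  # count non-overlapping occurrences
                for p in positions:
                    if p >= last + n:
                        k, last = k + 1, p
                saving = k * n - (k + n)  # symbols removed minus symbols added
                if k >= 2 and saving > best_saving:
                    best, best_saving = sub, saving
        if best is None:  # no substitution shortens the encoding: stop
            return seq, rules
        sym = f"R{next(fresh)}"
        rules[sym] = list(best)  # later rules may reference earlier ones
        out, i = [], 0
        while i < len(seq):
            if tuple(seq[i:i + len(best)]) == best:
                out.append(sym)
                i += len(best)
            else:
                out.append(seq[i])
                i += 1
        seq = out

def expand(sym, rules):
    """Flatten a rule symbol into its primitive action sequence."""
    if sym not in rules:
        return [sym]
    out = []
    for s in rules[sym]:
        out.extend(expand(s, rules))
    return out
```

Each surviving rule, flattened by `expand`, is a candidate macro-action to append to the agent's action set; run on the Towers of Hanoi trace of Section 4 it yields macro candidates of a similar flavour to {bc, ec, baf, bafbcd}, though not necessarily the identical set.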
}, { "heading": "3 METHODOLOGY", "text": "The Action Grammar Reinforcement Learning framework operates by having the agent repeatedly iterate through two steps (as laid out in Figure 1):\n(A) Gather Experience: the base off-policy RL agent interacts with the environment and stores its experiences\n(B) Identify Action Grammar: the experiences are used to identify the agent’s action grammar which is then appended to the agent’s action set in the form of macro-actions\nHierarchical RL with Action Grammars Algorithm\nDuring the first Gather Experience step of the game the base RL agent plays normally in the environment for some set number of episodes. The only difference is that during this time we occasionally run an episode of the game with all random exploration turned off and store these experiences separately. We do this because we will later use these experiences to identify the action grammar and so we do not want them to be influenced by noise. See left part of Figure 1.\nAfter some set number of episodes we pause the agent’s interaction with the environment and enter the first Identify Action Grammar stage, see middle part of Figure 1. This firstly involves collecting the actions used in the best performing of the no-exploration episodes mentioned above. We then feed these actions into a grammar calculator which identifies the action grammar. A simple choice for the grammar calculator is Sequitur (Nevill-Manning & Witten, 1997). Sequitur receives a sequence of actions as input and then iteratively creates new symbols to replace any repeating subsequences of actions. These newly created symbols then represent the macro-actions of the action grammar. To minimise the influence of noise on grammar generation, however, we need to regularise the process. A naive regulariser is k-Sequitur (Stout et al., 2018) which is a version of Sequitur that only creates a new symbol if a sub-sequence repeats at least k times (instead of at least two times), where higher k corresponds to stronger regularisation. Here we use a more principled approach and regularise on the basis of an information theoretic criterion: we generate a new symbol if doing so reduces the total amount of information needed to encode the sequence.\nAfter we have identified the action grammar we enter our second Gather Experience step. This firstly involves appending the macro-actions in the action grammar to the agent’s action set. To do this without destroying what our q-network and/or policy has already learned we use transfer learning. For every new macro-action we add a new node to the final layer of the network, leaving all other nodes and weights unchanged. We also initialise the weights of each new node to the weights of their first primitive action (as it is this action that is most likely to have a similar action value to the macro-action). e.g. if the primitive actions are {a, b} and we are adding macro-action abb then we initialise the weights of the new macro-action to those of a.\nThen our agent begins interacting with the environment as normal but with an action set that now includes macro-actions and four additional changes:\ni) In order to maximise information efficiency, when storing experiences from this stage onwards we use a new technique we call Hindsight Action Replay (HAR). It is related to Hindsight Experience Replay which creates new experiences by reimagining the goals the agent was trying to achieve. Instead of reimagining the goals, HAR creates new experiences by reimagining the actions. 
In particular, it reimagines them in two ways:\n1. If we play a macro-action then we also store the experiences as if we had played the sequence of primitive actions individually.\n2. If we play a sequence of primitive actions that matches an existing macro-action then we also store the experiences as if we had played the macro-action.\nSee Appendix A for an example of how HAR is able to more than double the number of collected experiences in some cases.\nii) To make sure that the longer macro-actions receive enough attention while learning, we sample experiences from an action balanced replay buffer. This acts as a normal replay buffer except it returns samples of experiences containing equal amounts of each action.\niii) To reduce the variance involved in using long macro-actions we develop a new technique called Abandon Ship. During every timestep of conducting a macro-action we calculate how much worse it is for the agent to continue executing its macro-action compared to abandoning the macro-action and picking the highest value primitive action instead. Formally we calculate this value as d = 1 − exp(q_m) / exp(q_highest), where q_m is the action value of the primitive action we are conducting as part of the macro-action and q_highest is the action value of the primitive action with the highest action value. We also store the moving average, m(d), and moving standard deviation, std(d), of d. Then each timestep we compare d to the threshold t = m(d) + std(d) · z, where z is the abandon ship hyperparameter that determines how often we will abandon macro-actions. If d > t then we abandon our macro-action and return control back to the policy, otherwise we continue executing the macro-action.\nAlgorithm 1 AG-RL\n1: Initialise environment env, base RL algorithm R, replay buffer D and action set A\n2: for each iteration do\n3:   F ← GATHER EXPERIENCE(A)\n4:   A ← IDENTIFY ACTION GRAMMAR(F)\n5:\n6: procedure GATHER EXPERIENCE(A)\n7:   transfer learning(A)  ▷ If action set changed do transfer learning\n8:   F ← ∅  ▷ Initialise F to store no-exploration episode experiences\n9:   for each episode do\n10:    if no exploration time then turn off exploration  ▷ Periodically turn off exploration\n11:    E ← ∅  ▷ Initialise E to store an episode’s experiences\n12:    while not done do\n13:      ma_t = R.pick action(s_t)  ▷ Pick next primitive action / macro-action\n14:      for a_t in ma_t do  ▷ Iterate through each primitive action in the macro-action\n15:        if abandon ship(s_t, a_t) then break  ▷ Abandon macro-action if required\n16:        s_{t+1}, r_{t+1}, d_{t+1} = env.step(a_t)  ▷ Play action in environment\n17:        E ← E ∪ {(s_t, a_t, r_{t+1}, s_{t+1}, d_{t+1})}  ▷ Store the episode’s experiences\n18:        R.learn(D)  ▷ Learning iteration for base RL algorithm\n19:    D ← D ∪ HAR(E)  ▷ Use HAR when updating replay buffer\n20:    if no exploration time then F ← F ∪ E  ▷ Store no-exploration experiences\n21:  return F\n22:\n23: procedure IDENTIFY ACTION GRAMMAR(F)\n24:   F ← extract best episodes(F)  ▷ Keep only the best performing no-exploration episodes\n25:   action grammar ← grammar algorithm(F)  ▷ Infer action grammar using experiences\n26:   A ← A ∪ action grammar  ▷ Update the action set with identified macro-actions\n27:   return A\niv) When our agent is picking random exploration moves we bias its choices towards macro-actions. For example, when a DQN agent picks a random move (which it does epsilon proportion of the time) we set the probability that it will pick a macro-action, rather than a primitive action, to the higher probability given by the hyperparameter “Macro Action Exploration Bonus”. 
In these cases, we do not use Abandon Ship and instead let the macro-actions fully roll out.\nThe second Gather Experience step then continues until it is time to do another Identify Action Grammar step or until the agent has been trained for long enough and the game ends. Algorithm 1 provides the full AG-RL algorithm." }, { "heading": "4 SIMPLE EXAMPLE", "text": "We now highlight the core aspects of how AG-RL works using the simple game Towers of Hanoi. The game starts with a set of disks placed on a rod in decreasing size order. The objective of the game is to move the entire stack to another rod while obeying the following rules: i) Only one disk can be moved at a time; ii) Each move consists of taking the upper disk from one of the stacks and placing it on top of another stack or on an empty rod; and iii) No larger disk may be placed on top of a smaller disk. The agent only receives a reward when the game is solved, meaning rewards are sparse and that it is difficult for an RL agent to learn the solution. Figure 2 runs through an example of how the game can be solved with the letters ‘a’ to ‘f’ being used to represent the 6 possible moves in the game.\nIn this game, AG-RL proceeds by having the base agent (which can be any off-policy RL agent) play the game as normal. After some period of time we pause the agent and collect some of the actions taken by the agent, e.g. say the agent played the sequence of actions: “bafbcdbafecfbafbcdbcfecdbafbcdb”. Then we use a grammar induction algorithm such as Sequitur to create new symbols to represent repeating sub-sequences. In this example, Sequitur would create the 4 new symbols: {G: bc, H: ec, I: baf, J: bafbcd}. We then append these symbols to the agent’s action set as macro-actions, so that the action set goes from A = {a, b, c, d, e, f} to:\nA = {a, b, c, d, e, f} ∪ {bc, ec, baf, bafbcd}\nThe agent then continues playing in the environment with this new action set which includes macro-actions. Because the macro-actions are of length greater than one it means that their usage effectively reduces the time dimensionality of the problem, making it an easier problem to solve in some cases.1\nWe now demonstrate the ability of AG-RL to consistently improve sample efficiency on the much more complicated Atari suite setting." }, { "heading": "5 RESULTS", "text": "We first evaluate the AG-RL framework using DDQN as the base RL algorithm. We refer to this as AG-DDQN. We compare the performance of AG-DDQN and DDQN after training for 350,000 steps on 8 Atari games chosen a priori to represent a broad range. To accelerate training times for both we set their convolutional layer weights as equal to those of some pre-trained agents2 and then only train the fully connected layers.\n1Experimental results for this simplified setting may be found in section F of the appendix.\n2We used the pre-trained agents in the GitHub repository rl-baselines-zoo: https://github.com/araffin/rl-baselines-zoo/tree/master/trained_agents/dqn\nFor the DDQN-specific hyperparameters of both networks we do no hyperparameter tuning and instead use the hyperparameters from van Hasselt et al. (2015) or set them manually. For the AG-specific hyperparameters we tried four options for the abandon ship hyperparameter (No Abandon Ship, 1, 2, 3) for the game Qbert and chose the option with the highest score. 
No other hyperparameters were tuned and all games then used the same hyperparameters, which can all be found in Appendix B along with a more detailed description of the experimental setup.\nWe find that AG-DDQN outperforms DDQN in all 8 games with a median final improvement of 31% and a maximum final improvement of 668% - Figure 3 summarises the results.\nNext we further evaluate AG-RL by using SAC as the base RL algorithm, leading to AG-SAC. We compare the performance of AG-SAC to SAC. We train both algorithms for 100,000 steps on 20 Atari games chosen a priori to represent a broad range of games. As SAC is a much more efficient algorithm than DDQN, this time we train both agents from scratch and do not use pre-trained convolutional layers.\nFor the SAC-specific hyperparameters of both networks we did no hyperparameter tuning and instead used a mixture of the hyperparameters found in Haarnoja et al. (2018) and Kaiser et al. (2019). For the AG-specific hyperparameters again the only tuning we did was amongst 4 options for the abandon ship hyperparameter (No Abandon Ship, 1, 2, 3) on the game Qbert. No other hyperparameters were tuned and all games then used the same hyperparameters, details of which can be found in Appendix C.\nOur results show AG-SAC outperforms SAC in 19 out of 20 games with a median improvement of 96% and a maximum improvement of 3,756% - Figure 4 summarises the results and Appendix D provides them in more detail.\nWe also find that AG-SAC outperforms Rainbow, which is the model-free state-of-the-art for Atari sample efficiency, in 17 out of 20 games with a median improvement of 62% and maximum improvement of 13,140% - see Appendix D for more details. Also note that the Rainbow scores used were taken from Kaiser et al. (2019), who explain they were the result of extensive hyperparameter tuning, compared to our AG-SAC scores which benefited from very little hyperparameter tuning." }, { "heading": "6 DISCUSSION", "text": "To better understand the results, we first explore what types of macro-actions get identified during the Identify Action Grammar stage, whether the agents use them extensively or not, and to what extent the Abandon Ship technique plays a role. We find that the length of macro-actions can vary greatly from length 2 to over 100. An example of an inferred macro-action was 8888111188881111 from the game Beam Rider, where 1 represents Shoot and 8 represents move Down-Right. This macro-action seems particularly useful in this game as the game is about shooting enemies whilst avoiding being hit by them.\nWe also found that the agents made extensive use of the macro-actions. Taking each game’s best performing AG-SAC agent, the average attempted move length during evaluation was 20.0. Because of Abandon Ship the average executed move length was significantly lower at 6.9 but still far above the average of 1.0 we would get if the agents were not using their macro-actions. Appendix E gives more details on the differences in move lengths between games.\nWe now conduct an ablation study to investigate the main drivers of AG-DDQN’s performance in the game Qbert, with results in Figure 5. Firstly we find that HAR was crucial for the improved performance and without it AG-DDQN performed no better than DDQN. 
We suspect that this is because without HAR there are far fewer experiences to learn from and so our action value estimates have very high variance.\nNext we find that using an action balanced replay buffer improved performance somewhat but by a much smaller and potentially insignificant amount. This potentially implies that it may not be necessary to use an action balanced replay and that the technique may work with an ordinary replay buffer. We also see that with our chosen abandon ship hyperparameter of 1.0, performance was higher than when abandon ship was not used. Performance was also similar for the choice of 2.0, which suggests performance was not too sensitive to this choice of hyperparameter. Finally we see improved performance from using transfer learning when appending macro-actions to the agent’s action set rather than creating a new network.\nLastly, we note that the game in which AG-DDQN and AG-SAC both do relatively best is Enduro. Enduro is a game with sparse rewards and therefore one where exploration is very important. We therefore speculate that AG-RL does best on this game because using long macro-actions increases the variance of where an agent can end up and therefore helps exploration." }, { "heading": "7 CONCLUSION", "text": "Motivated by the parallels between the hierarchical composition of language and that of actions, we combine techniques from computational linguistics and RL to help develop the Action Grammars Reinforcement Learning framework.\nThe framework expands on two key areas of RL research: Symbolic RL and Hierarchical RL. We extend the ideas of symbolic manipulation in RL (Garnelo et al., 2016; Garnelo & Shanahan, 2019) to the dynamics of sequential action execution. Moreover, while Relational RL approaches (Zambaldi et al., 2018) draw on the complex logic-based framework of inductive programming, we merely observe successful behavioral sequences to induce higher order structures.\nWe provided two implementations of the framework: AG-DDQN and AG-SAC. We showed that AG-DDQN improves on DDQN in 8 out of 8 tested Atari games (median +31%, max +668%) and AG-SAC improves on SAC in 19 out of 20 tested Atari games (median +96%, max +3,756%), all without substantive hyperparameter tuning. We also show that AG-SAC beats the model-free state-of-the-art for 17 out of 20 Atari games (median +62%, max +13,140%) in terms of sample efficiency, again even without substantive hyperparameter tuning.\nAs part of AG-RL we also provided two new and generally applicable techniques: Hindsight Action Replay and Abandon Ship. Hindsight Action Replay can be used to drastically improve information efficiency in any off-policy setting involving macro-actions. Abandon Ship reduces the variance involved when training macro-actions, making it feasible to train algorithms with very long macro-actions (over 100 steps in some cases).\nOverall, we have demonstrated the power of action grammars to consistently improve the performance of RL agents. We believe our work is just one of many possible ways of incorporating the concept of action grammars into RL and we look forward to exploring other methods." }, { "heading": "A HINDSIGHT ACTION REPLAY", "text": "Below we provide an example of how HAR stores the experiences of an agent after they played the moves acab where a and b are primitive actions and c represents the macro-action ababa." }, { "heading": "B AG-DDQN EXPERIMENT AND HYPERPARAMETERS", "text": "The hyperparameters used for the DDQN results are given by Table 1. 
The network architecture was the same as in the original DeepMind Atari paper (Mnih et al., 2015).\nThe architecture and hyperparameters used for the AG-DDQN results that are relevant to DDQN are the same as for DDQN, and the rest of the hyperparameters are given by Table 2." }, { "heading": "C AG-SAC HYPERPARAMETERS", "text": "The hyperparameters used for the discrete SAC results are given by Table 3. The network architecture for both the actor and the critic was the same as in the original DeepMind Atari paper (Mnih et al., 2015).\nThe architecture and hyperparameters used for the AG-SAC results that are relevant to SAC are the same as for SAC, and the rest of the hyperparameters are given by Table 4." }, { "heading": "D SAC, AG-SAC AND RAINBOW ATARI RESULTS", "text": "Below we provide all SAC, AG-SAC and Rainbow results after running 100,000 iterations for 5 random seeds. We see that AG-SAC improves over SAC in 19 out of 20 games (median improvement of 96%, maximum improvement of 3,756%) and that AG-SAC improves over Rainbow in 17 out of 20 games (median improvement 62%, maximum improvement 13,140%).\nNote that the scores for Pong are negative and so to calculate the proportional improvement for this game we first convert the scores to their increment over the minimum possible score. In Pong the minimum score is -21.0 and so we first add 21 to all scores before calculating relative performance.\nAlso note that for Pong both AG-SAC and SAC perform worse than random. The improvement of AG-SAC over SAC for Pong therefore could be considered a somewhat spurious result and potentially should be ignored. Note that there are no other games where AG-SAC performs worse than random though and so this issue is contained to the game Pong." }, { "heading": "E MACRO-ACTIONS AND ABANDON SHIP", "text": "" }, { "heading": "F ADDITIONAL EXPERIMENTS", "text": "The Towers of Hanoi experiments depicted in the figure below are run with SMDP-Q-Learning. Let r_τm = ∑_{i=1}^{τm} γ^{i−1} r_{t+i} denote the accumulated and discounted reward for executing a macro. Tabular value estimates can then be updated using SMDP-Q-Learning (Bradtke & Duff, 1995; Parr, 1998) in a model-free bootstrapping-based manner:\nQ(s,m)_{k+1} = (1 − α) Q(s,m)_k + α ( r_τm + γ^{τm} max_{m′∈M} Q(s′,m′)_k )   (1)\nWe do not make use of HAR or “Abandon Ship” in these experiments and use the following hyperparameters:\nTabular Action Grammar SMDP-Q-Learning Hyperparameters:\nHyperparameter Value\nLearning rate α 0.8\nDiscount factor γ 0.95\nEligibility Trace λ 0\nExploration 0.1\nThe TD(λ) baseline shares all the hyperparameters apart from the eligibility trace λ, which is set to 0.1. We train the agents for 300,000 (5 disks) and 7,000,000 (6 disks) steps." } ]
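As a concrete reading of Eq. (1) in Appendix F, here is a minimal tabular SMDP-Q-Learning step in Python, using the learning rate and discount factor from the table above. The dictionary-backed Q-table and the function name are illustrative assumptions, not code from the paper.

```python
from collections import defaultdict

def smdp_q_update(Q, s, m, rewards, s_next, action_set, alpha=0.8, gamma=0.95):
    """One tabular SMDP-Q-Learning step, Eq. (1) of Appendix F.

    Q          : defaultdict mapping (state, macro-action) -> value estimate
    rewards    : per-step rewards r_{t+1}, ..., r_{t+tau} collected while m ran
    action_set : the current (macro-)action set M
    """
    tau = len(rewards)
    # r_tau = sum_{i=1}^{tau} gamma^{i-1} r_{t+i}
    r_tau = sum(gamma ** i * r for i, r in enumerate(rewards))
    bootstrap = max(Q[(s_next, m2)] for m2 in action_set)
    Q[(s, m)] = (1 - alpha) * Q[(s, m)] + alpha * (r_tau + gamma ** tau * bootstrap)
    return Q[(s, m)]

# Illustrative call, with hypothetical states s0, s1 and action set `actions`:
# Q = defaultdict(float); smdp_q_update(Q, s0, "bafbcd", [0, 0, 1], s1, actions)
```

Discounting by gamma ** tau at the bootstrap term is what lets a length-tau macro-action be treated as a single temporally-extended decision.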
2019
null
SP:7227922e5ec088fabf0fa9c0584ee4f5c1f3887a
[ "This paper proposed a new sampling method to train GCN in the mini-batch manner. In particular, unlike existing methods which samples the mini-batch in the node-wise way, GraphSAINT proposed to sample a mini-batch in the graph-wise way. As a result, GraphSAINT uses the same graph across different GCN layers, while most existing methods use different graphs across different GCN layers. In addition, the authors show that this sampling method is unbiased. Extensive experimental results have shown improvement over existing methods. Overall, this idea is interesting and well presented. ", "Scaling GCNs to large graphs is important for real applications. Instead of sampling the nodes or edges across GCN layers, this paper proposes to sample the training graph to improve training efficiency and accuracy. It is a smart idea to construct a complete GCN from the sampled subgraphs. Convincing experiments can verify the effectiveness of the proposed method. It is a good work." ]
Graph Convolutional Networks (GCNs) are powerful models for learning representations of attributed graphs. To scale GCNs to large graphs, state-of-the-art methods use various layer sampling techniques to alleviate the “neighbor explosion” problem during minibatch training. We propose GraphSAINT, a graph sampling based inductive learning method that improves training efficiency and accuracy in a fundamentally different way. By changing perspective, GraphSAINT constructs minibatches by sampling the training graph, rather than the nodes or edges across GCN layers. Each iteration, a complete GCN is built from the properly sampled subgraph. Thus, we ensure a fixed number of well-connected nodes in all layers. We further propose a normalization technique to eliminate bias, and sampling algorithms for variance reduction. Importantly, we can decouple the sampling from the forward and backward propagation, and extend GraphSAINT with many architecture variants (e.g., graph attention, jumping connection). GraphSAINT demonstrates superior performance in both accuracy and training time on five large graphs, and achieves new state-of-the-art F1 scores for PPI (0.995) and Reddit (0.970).
[ { "affiliations": [], "name": "Hanqing Zeng" }, { "affiliations": [], "name": "Hongkuan Zhou" } ]
[ { "authors": [ "Sami Abu-El-Haija", "Bryan Perozzi", "Amol Kapoor", "Hrayr Harutyunyan", "Nazanin Alipourfard", "Kristina Lerman", "Greg Ver Steeg", "Aram Galstyan" ], "title": "Mixhop: Higher-order graph convolution architectures via sparsified neighborhood mixing", "venue": null, "year": 1905 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral networks and locally connected networks on", "venue": "graphs. CoRR,", "year": 2013 }, { "authors": [ "HongYun Cai", "Vincent W. Zheng", "Kevin Chen-Chuan Chang" ], "title": "A comprehensive survey of graph embedding: Problems, techniques and applications", "venue": "CoRR, abs/1709.07604,", "year": 2017 }, { "authors": [ "Jianfei Chen", "Jun Zhu", "Le Song" ], "title": "Stochastic training of graph convolutional networks with variance reduction", "venue": "In ICML, pp", "year": 2018 }, { "authors": [ "Jie Chen", "Tengfei Ma", "Cao Xiao" ], "title": "Fastgcn: Fast learning with graph convolutional networks via importance sampling", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Wei-Lin Chiang", "Xuanqing Liu", "Si Si", "Yang Li", "Samy Bengio", "Cho-Jui Hsieh" ], "title": "Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks", "venue": "URL http://arxiv.org/abs/1905.07953", "year": 1905 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Matthias Fey" ], "title": "Just jump: Dynamic neighborhood aggregation in graph neural networks", "venue": "CoRR, abs/1904.04849,", "year": 2019 }, { "authors": [ "Hongyang Gao", "Zhengyang Wang", "Shuiwang Ji" ], "title": "Large-scale learnable graph convolutional networks", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD", "year": 2018 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "CoRR, abs/1512.03385,", "year": 2015 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Pili Hu", "Wing Cheong Lau" ], "title": "A survey and taxonomy of graph sampling", "venue": "arXiv preprint arXiv:1308.5865,", "year": 2013 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Wenbing Huang", "Tong Zhang", "Yu Rong", "Junzhou Huang" ], "title": "Adaptive sampling towards fast graph representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Thomas N. 
Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "CoRR, abs/1609.02907,", "year": 2016 }, { "authors": [ "Johannes Klicpera", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Personalized embedding propagation: Combining neural networks on graphs with personalized pagerank", "venue": "CoRR, abs/1810.05997,", "year": 2018 }, { "authors": [ "John Boaz Lee", "Ryan A. Rossi", "Xiangnan Kong", "Sungchul Kim", "Eunyee Koh", "Anup Rao" ], "title": "Higherorder graph convolutional networks", "venue": "CoRR, abs/1809.07697,", "year": 2018 }, { "authors": [ "Jure Leskovec", "Christos Faloutsos" ], "title": "Sampling from large graphs", "venue": "In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2006 }, { "authors": [ "R. Li", "J.X. Yu", "L. Qin", "R. Mao", "T. Jin" ], "title": "On random walk based graph sampling", "venue": "In 2015 IEEE 31st International Conference on Data Engineering,", "year": 2015 }, { "authors": [ "Haonan Lu", "Seth H. Huang", "Tian Ye", "Xiuyan Guo" ], "title": "Graph star net for generalized multi-task learning", "venue": "CoRR, abs/1906.12330,", "year": 2019 }, { "authors": [ "Bruno Ribeiro", "Don Towsley" ], "title": "Estimating and sampling graphs with multidimensional random walks", "venue": "In Proceedings of the 10th ACM SIGCOMM conference on Internet measurement,", "year": 2010 }, { "authors": [ "Zonghan Wu", "Shirui Pan", "Fengwen Chen", "Guodong Long", "Chengqi Zhang", "Philip S. Yu" ], "title": "A comprehensive survey on graph neural networks", "venue": "CoRR, abs/1901.00596,", "year": 2019 }, { "authors": [ "Keyulu Xu", "Chengtao Li", "Yonglong Tian", "Tomohiro Sonobe", "Ken-ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "Representation learning on graphs with jumping knowledge networks", "venue": "arXiv preprint arXiv:1806.03536,", "year": 2018 }, { "authors": [ "Rex Ying", "Ruining He", "Kaifeng Chen", "Pong Eksombatchai", "William L. Hamilton", "Jure Leskovec" ], "title": "Graph convolutional neural networks for web-scale recommender systems", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’18,", "year": 2018 }, { "authors": [ "Rex Ying", "Jiaxuan You", "Christopher Morris", "Xiang Ren", "William L. Hamilton", "Jure Leskovec" ], "title": "Hierarchical graph representation learning with differentiable pooling", "venue": "In Proceedings of the 32Nd International Conference on Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Hanqing Zeng", "Viktor Prasanna" ], "title": "GraphACT: Accelerating GCN training on CPU-FPGA heterogeneous platforms", "venue": "arXiv preprint arXiv:2001.02498,", "year": 2019 }, { "authors": [ "Hanqing Zeng", "Hongkuan Zhou", "Ajitesh Srivastava", "Rajgopal Kannan", "Viktor K. 
Prasanna" ], "title": "Accurate, efficient and scalable graph embedding", "venue": "CoRR, abs/1810.11899,", "year": 2018 }, { "authors": [ "Jiani Zhang", "Xingjian Shi", "Junyuan Xie", "Hao Ma", "Irwin King", "Dit-Yan Yeung" ], "title": "Gaan: Gated attention networks for learning on large and spatiotemporal graphs", "venue": "arXiv preprint arXiv:1803.07294,", "year": 2018 }, { "authors": [ "Zhenpeng Zhou" ], "title": "Graph convolutional networks for molecules", "venue": "CoRR, abs/1706.09916,", "year": 2017 }, { "authors": [ "Huang" ], "title": "Note that the α calculation is slightly different from the original equation in Veličković et al. (2017). Namely, GAT-SAINT does not normalize α by softmax across all neighbors of v. We make such modification since under the minibatch setting, node v does not see all its neighbors in the training graph. The removal of softmax is also seen in the attention design", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recently, representation learning on graphs has attracted much attention, since it greatly facilitates tasks such as classification and clustering (Wu et al., 2019; Cai et al., 2017). Current works on Graph Convolutional Networks (GCNs) (Hamilton et al., 2017; Chen et al., 2018b; Gao et al., 2018; Huang et al., 2018; Chen et al., 2018a) mostly focus on shallow models (2 layers) on relatively small graphs. Scaling GCNs to larger datasets and deeper layers still requires fast alternate training methods.\nIn a GCN, data to be gathered for one output node comes from its neighbors in the previous layer. Each of these neighbors in turn, gathers its output from the previous layer, and so on. Clearly, the deeper we back track, the more multi-hop neighbors to support the computation of the root. The number of support nodes (and thus the training time) potentially grows exponentially with the GCN depth. To mitigate such “neighbor explosion”, state-of-the-art methods use various layer sampling techniques. The works by Hamilton et al. (2017); Ying et al. (2018a); Chen et al. (2018a) ensure that only a small number of neighbors (typically from 2 to 50) are selected by one node in the next layer. Chen et al. (2018b) and Huang et al. (2018) further propose samplers to restrict the neighbor expansion factor to 1, by ensuring a fixed sample size in all layers. While these methods significantly speed up training, they face challenges in scalability, accuracy or computation complexity.\n∗Equal contribution\nPresent work We present GraphSAINT (Graph SAmpling based INductive learning meThod) to efficiently train deep GCNs. GraphSAINT is developed from a fundamentally different way of minibatch construction. Instead of building a GCN on the full training graph and then sampling across the layers, we sample the training graph first and then build a full GCN on the subgraph. Our method is thus graph sampling based. Naturally, GraphSAINT resolves “neighbor explosion”, since every GCN of the minibatches is a small yet complete one. On the other hand, graph sampling based method also brings new challenges in training. Intuitively, nodes of higher influence on each other should have higher probability to form a subgraph. This enables the sampled nodes to “support” each other without going outside the minibatch. Unfortunately, such strategy results in non-identical node sampling probability, and introduces bias in the minibatch estimator. To address this issue, we develop normalization techniques so that the feature learning does not give preference to nodes more frequently sampled. To further improve training quality, we perform variance reduction analysis, and design light-weight sampling algorithms by quantifying “influence” of neighbors. Experiments on GraphSAINT using five large datasets show significant performance gain in both training accuracy and time. We also demonstrate the flexibility of GraphSAINT by integrating our minibatch method with popular GCN architectures such as JK-net (Xu et al., 2018) and GAT (Veličković et al., 2017). The resulting deep models achieve new state-of-the-art F1 scores on PPI (0.995) and Reddit (0.970)." }, { "heading": "2 RELATED WORK", "text": "A neural network model that extends convolution operation to the graph domain is first proposed by Bruna et al. (2013). Further, Kipf & Welling (2016); Defferrard et al. (2016) speed up graph convolution computation with localized filters based on Chebyshev expansion. 
They target relatively small datasets and thus the training proceeds in full batch. In order to scale GCNs to large graphs, layer sampling techniques (Hamilton et al., 2017; Chen et al., 2018b; Ying et al., 2018a; Chen et al., 2018a; Gao et al., 2018; Huang et al., 2018) have been proposed for efficient minibatch training. All of them follow the three meta steps: 1. Construct a complete GCN on the full training graph. 2. Sample nodes or edges of each layer to form minibatches. 3. Propagate forward and backward among the sampled GCN. Steps (2) and (3) proceed iteratively to update the weights via stochastic gradient descent. The layer sampling algorithm of GraphSAGE (Hamilton et al., 2017) performs uniform node sampling on the previous layer neighbors. It enforces a pre-defined budget on the sample size, so as to bound the minibatch computation complexity. Ying et al. (2018a) enhances the layer sampler of Hamilton et al. (2017) by introducing an importance score to each neighbor. The algorithm presumably leads to less information loss due to weighted aggregation. S-GCN (Chen et al., 2018a) further restricts neighborhood size by requiring only two support nodes in the previous layer. The idea is to use the historical activations in the previous layer to avoid redundant re-evaluation. FastGCN (Chen et al., 2018b) performs sampling from another perspective. Instead of tracking down the inter-layer connections, node sampling is performed independently for each layer. It applies importance sampling to reduce variance, and results in constant sample size in all layers. However, the minibatches potentially become too sparse to achieve high accuracy. Huang et al. (2018) improves FastGCN by an additional sampling neural network. It ensures high accuracy, since sampling is conditioned on the selected nodes in the next layer. Significant overhead may be incurred due to the expensive sampling algorithm and the extra sampler parameters to be learned.\nInstead of sampling layers, the works of Zeng et al. (2018) and Chiang et al. (2019) build minibatches from subgraphs. Zeng et al. (2018) proposes a specific graph sampling algorithm to ensure connectivity among minibatch nodes. They further present techniques to scale such training on shared-memory multi-core platforms. More recently, ClusterGCN (Chiang et al., 2019) proposes graph clustering based minibatch training. During pre-processing, the training graph is partitioned into densely connected clusters. During training, clusters are randomly selected to form minibatches, and intra-cluster edge connections remain unchanged. Similar to GraphSAINT, the works of Zeng et al. (2018) and Chiang et al. (2019) do not sample the layers and thus “neighbor explosion” is avoided. Unlike GraphSAINT, both works are heuristic based, and do not account for bias due to the unequal probability of each node / edge appearing in a minibatch.\nAnother line of research focuses on improving model capacity. Applying attention on graphs, the architectures of Veličković et al. (2017); Zhang et al. (2018); Lu et al. (2019) better capture neighbor features by dynamically adjusting edge weights. Klicpera et al. (2018) combines PageRank with GCNs to enable efficient information propagation from many hops away. To develop deeper models,\n“skip-connection” is borrowed from CNNs (He et al., 2015; Huang et al., 2017) into the GCN context. In particular, JK-net Xu et al. (2018) demonstrates significant accuracy improvement on GCNs with more than two layers. 
Note, however, that JK-net (Xu et al., 2018) follows the same sampling strategy as GraphSAGE (Hamilton et al., 2017). Thus, its training cost is high due to neighbor explosion. In addition, high order graph convolutional layers (Zhou, 2017; Lee et al., 2018; Abu-El-Haija et al., 2019) also help propagate long-distance features. With the numerous architectural variants developed, the question of how to train them efficiently via minibatches still remains to be answered.

3 PROPOSED METHOD: GraphSAINT

The graph sampling based method is motivated by the challenges in scalability (in terms of model depth and graph size). We analyze the bias (Section 3.2) and variance (Section 3.3) introduced by graph sampling, and thus propose feasible sampling algorithms (Section 3.4). We show the applicability of GraphSAINT to other architectures, both conceptually (Section 4) and experimentally (Section 5.2).

In the following, we define the problem of interest and the corresponding notation. A GCN learns a representation of an un-directed, attributed graph $\mathcal{G}(\mathcal{V}, \mathcal{E})$, where each node $v \in \mathcal{V}$ has a length-$f$ attribute $x_v$. Let $A$ be the adjacency matrix and $\tilde{A}$ the normalized one (i.e., $\tilde{A} = D^{-1}A$, where $D$ is the diagonal degree matrix). Let the dimension of the layer-$\ell$ input activation be $f^{(\ell)}$. The activation of node $v$ is $x_v^{(\ell)} \in \mathbb{R}^{f^{(\ell)}}$, and the weight matrix is $W^{(\ell)} \in \mathbb{R}^{f^{(\ell)} \times f^{(\ell+1)}}$. Note that $x_v = x_v^{(1)}$. The propagation rule of a layer is defined as follows:

$$x_v^{(\ell+1)} = \sigma\Big(\sum_{u \in \mathcal{V}} \tilde{A}_{v,u} \big(W^{(\ell)}\big)^\top x_u^{(\ell)}\Big) \quad (1)$$

where $\tilde{A}_{v,u}$ is a scalar, taking an element of $\tilde{A}$, and $\sigma(\cdot)$ is the activation function (e.g., ReLU). We use subscript "s" to denote parameters of the sampled graph (e.g., $\mathcal{G}_s, \mathcal{V}_s, \mathcal{E}_s$). GCNs can be applied under inductive and transductive settings. While GraphSAINT is applicable to both, in this paper we focus on inductive learning. It has been shown that inductive learning is especially challenging (Hamilton et al., 2017) — during training, neither attributes nor connections of the test nodes are present. Thus, an inductive model has to generalize to completely unseen graphs.

3.1 MINIBATCH BY GRAPH SAMPLING

GraphSAINT follows the design philosophy of directly sampling the training graph $\mathcal{G}$, rather than the corresponding GCN. Our goals are to 1. extract appropriately connected subgraphs so that little information is lost when propagating within the subgraphs, and 2. combine information of many subgraphs together so that the training process overall learns a good representation of the full graph.

Figure 1 and Algorithm 1 illustrate the training algorithm. Before training starts, we perform light-weight pre-processing on $\mathcal{G}$ with the given sampler SAMPLE. The pre-processing estimates the probability of a node $v \in \mathcal{V}$ and an edge $e \in \mathcal{E}$ being sampled by SAMPLE. Such probability is later used to normalize the subgraph neighbor aggregation and the minibatch loss (Section 3.2).

Algorithm 1 GraphSAINT training algorithm
Input: Training graph $\mathcal{G}(\mathcal{V}, \mathcal{E}, X)$; labels $\bar{Y}$; sampler SAMPLE
Output: GCN model with trained weights
1: Pre-processing: Set up SAMPLE parameters; compute normalization coefficients $\alpha, \lambda$.
2: for each minibatch do
3:   $\mathcal{G}_s(\mathcal{V}_s, \mathcal{E}_s) \leftarrow$ sampled sub-graph of $\mathcal{G}$ according to SAMPLE
4:   GCN construction on $\mathcal{G}_s$
5:   $\{y_v \mid v \in \mathcal{V}_s\} \leftarrow$ forward propagation of $\{x_v \mid v \in \mathcal{V}_s\}$, normalized by $\alpha$
6:   Backward propagation from the $\lambda$-normalized loss $L(y_v, \bar{y}_v)$; update weights
7: end for

Afterwards, training proceeds by iterative weight updates via SGD. Each iteration starts with an independently sampled $\mathcal{G}_s$ (where $|\mathcal{V}_s| \ll |\mathcal{V}|$). We then build a full GCN on $\mathcal{G}_s$ to generate embeddings and calculate the loss for every $v \in \mathcal{V}_s$. In Algorithm 1, node representation is learned by performing node classification in the supervised setting, and each training node $v$ comes with a ground truth label $\bar{y}_v$.
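To make the training loop concrete, the following is a minimal PyTorch-style sketch of Algorithm 1. The names gcn, optimizer, and sampler are hypothetical handles (not from the released code), where sampler() is assumed to return the sampled node ids and the alpha-normalized sub-adjacency of $\mathcal{G}_s$, and lam holds the loss normalization coefficients $\lambda_v$ estimated in pre-processing.

import torch

def train_graphsaint(gcn, optimizer, sampler, features, labels, lam, num_batches):
    # sampler() -> (node_ids, adj_s): node ids as a LongTensor and the
    # alpha-normalized adjacency of the sampled subgraph G_s (assumption).
    criterion = torch.nn.CrossEntropyLoss(reduction="none")
    for _ in range(num_batches):
        nodes, adj_s = sampler()              # independently sampled G_s
        preds = gcn(features[nodes], adj_s)   # full GCN built on G_s
        # lambda-normalized minibatch loss: sum over v of L_v / lambda_v
        loss = (criterion(preds, labels[nodes]) / lam[nodes]).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()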
Intuitively, there are two requirements for SAMPLE: 1. nodes having high influence on each other should be sampled in the same subgraph, and 2. each edge should have non-negligible probability of being sampled. For requirement (1), an ideal SAMPLE would consider the joint information from node connections as well as attributes. However, the resulting algorithm may have high complexity, as it would need to infer the relationships between features. For simplicity, we define "influence" from the graph connectivity perspective and design topology based samplers. Requirement (2) leads to better generalization, since it enables the neural net to explore the full feature and label space." }, { "heading": "3.2 NORMALIZATION", "text": "A sampler that preserves the connectivity characteristic of $\mathcal{G}$ will almost inevitably introduce bias into minibatch estimation. In the following, we present normalization techniques to eliminate biases.

Analysis of the complete multi-layer GCN is difficult due to non-linear activations. Thus, we analyze the embedding of each layer independently. This is similar to the layer-wise treatment in prior work (Chen et al., 2018b; Huang et al., 2018). Consider a layer-$(\ell+1)$ node $v$ and a layer-$\ell$ node $u$. If $v$ is sampled (i.e., $v \in \mathcal{V}_s$), we can compute the aggregated feature of $v$ as:

$$\zeta_v^{(\ell+1)} = \sum_{u \in \mathcal{V}} \frac{\tilde{A}_{v,u}}{\alpha_{u,v}} \big(W^{(\ell)}\big)^\top x_u^{(\ell)} \mathbb{1}_{u|v} = \sum_{u \in \mathcal{V}} \frac{\tilde{A}_{v,u}}{\alpha_{u,v}} \tilde{x}_u^{(\ell)} \mathbb{1}_{u|v}, \quad (2)$$

where $\tilde{x}_u^{(\ell)} = \big(W^{(\ell)}\big)^\top x_u^{(\ell)}$, and $\mathbb{1}_{u|v} \in \{0, 1\}$ is the indicator function given $v$ is in the subgraph (i.e., $\mathbb{1}_{u|v} = 0$ if $v \in \mathcal{V}_s \wedge (u,v) \notin \mathcal{E}_s$; $\mathbb{1}_{u|v} = 1$ if $(u,v) \in \mathcal{E}_s$; $\mathbb{1}_{u|v}$ is not defined if $v \notin \mathcal{V}_s$). We refer to the constant $\alpha_{u,v}$ as the aggregator normalization. Define $p_{u,v} = p_{v,u}$ as the probability of an edge $(u,v) \in \mathcal{E}$ being sampled in a subgraph, and $p_v$ as the probability of a node $v \in \mathcal{V}$ being sampled.

Proposition 3.1. $\zeta_v^{(\ell+1)}$ is an unbiased estimator of the aggregation of $v$ in the full $(\ell+1)$-th GCN layer if $\alpha_{u,v} = \frac{p_{u,v}}{p_v}$, i.e., $\mathbb{E}\big(\zeta_v^{(\ell+1)}\big) = \sum_{u \in \mathcal{V}} \tilde{A}_{v,u} \tilde{x}_u^{(\ell)}$.

Assuming that each layer independently learns an embedding, we use Proposition 3.1 to normalize the feature propagation of each layer of the GCN built by GraphSAINT. Further, let $L_v$ be the loss on $v$ in the output layer. The minibatch loss is calculated as $L_{\text{batch}} = \sum_{v \in \mathcal{G}_s} L_v / \lambda_v$, where $\lambda_v$ is a constant that we term the loss normalization. We set $\lambda_v = |\mathcal{V}| \cdot p_v$ so that:

$$\mathbb{E}(L_{\text{batch}}) = \frac{1}{|\mathbb{G}|} \sum_{\mathcal{G}_s \in \mathbb{G}} \sum_{v \in \mathcal{V}_s} \frac{L_v}{\lambda_v} = \frac{1}{|\mathcal{V}|} \sum_{v \in \mathcal{V}} L_v. \quad (3)$$

Feature propagation within subgraphs thus requires the normalization factors $\alpha$ and $\lambda$, which are computed from the edge and node probabilities $p_{u,v}, p_v$. In the case of random node or random edge samplers, $p_{u,v}$ and $p_v$ can be derived analytically. For other samplers in general, a closed form expression is hard to obtain. Thus, we perform pre-processing for estimation. Before training starts, we run the sampler repeatedly to obtain a set of $N$ subgraphs $\mathbb{G}$. We set up counters $C_v$ and $C_{u,v}$ for each $v \in \mathcal{V}$ and $(u,v) \in \mathcal{E}$, to count the number of times the node or edge appears in the subgraphs of $\mathbb{G}$. Then we set $\alpha_{u,v} = \frac{C_{u,v}}{C_v} = \frac{C_{v,u}}{C_v}$ and $\lambda_v = \frac{C_v}{N}$. The subgraphs $\mathcal{G}_s \in \mathbb{G}$ can all be reused as minibatches during training. Thus, the overhead of pre-processing is small (see Appendix D.2).
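A minimal numpy sketch of this pre-processing is given below. Here sample_fn is a hypothetical handle that runs SAMPLE once and returns the sampled node ids and edge tuples (assumed to be oriented as in the input edge list), and the max(., 1.0) guard for nodes never sampled in the N trial subgraphs is our own addition.

import numpy as np

def estimate_normalization(sample_fn, num_nodes, edges, N):
    # Run the sampler N times and count node / edge occurrences.
    C_v = np.zeros(num_nodes)
    C_e = {e: 0 for e in edges}          # edges as (u, v) tuples
    subgraphs = []
    for _ in range(N):
        nodes, sub_edges = sample_fn()
        subgraphs.append((nodes, sub_edges))
        for v in nodes:
            C_v[v] += 1
        for e in sub_edges:
            C_e[e] += 1
    # alpha_{u,v} = C_{u,v} / C_v  and  lambda_v = C_v / N
    alpha = {(u, v): C_e[(u, v)] / max(C_v[v], 1.0) for (u, v) in edges}
    lam = C_v / N
    return alpha, lam, subgraphs         # subgraphs are reused as minibatches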
" }, { "heading": "3.3 VARIANCE", "text": "We derive samplers for variance reduction. Let $e$ be the edge connecting $u$ and $v$, and $b_e^{(\ell)} = \tilde{A}_{v,u} \tilde{x}_u^{(\ell-1)} + \tilde{A}_{u,v} \tilde{x}_v^{(\ell-1)}$. It is desirable that the variance of all estimators $\zeta_v^{(\ell)}$ is small. With this objective, we define:

$$\zeta = \sum_\ell \sum_{v \in \mathcal{G}_s} \frac{\zeta_v^{(\ell)}}{p_v} = \sum_\ell \sum_{v,u} \frac{\tilde{A}_{v,u}}{p_v \alpha_{u,v}} \tilde{x}_u^{(\ell)} \mathbb{1}_v \mathbb{1}_{u|v} = \sum_\ell \sum_e \frac{b_e^{(\ell)}}{p_e} \mathbb{1}_e^{(\ell)}, \quad (4)$$

where $\mathbb{1}_e = 1$ if $e \in \mathcal{E}_s$ and $\mathbb{1}_e = 0$ if $e \notin \mathcal{E}_s$; likewise, $\mathbb{1}_v = 1$ if $v \in \mathcal{V}_s$ and $\mathbb{1}_v = 0$ if $v \notin \mathcal{V}_s$. The factor $p_v$ in the first equality is present so that $\zeta$ is an unbiased estimator of the sum of all node aggregations at all layers: $\mathbb{E}(\zeta) = \sum_\ell \sum_{v \in \mathcal{V}} \mathbb{E}\big(\zeta_v^{(\ell)}\big)$. Note that $\mathbb{1}_e^{(\ell)} = \mathbb{1}_e, \forall \ell$, since once an edge is present in the sampled graph, it is present in all layers of our GCN.

We define the optimal edge sampler to minimize the variance for every dimension of $\zeta$. We restrict ourselves to independent edge sampling: for each $e \in \mathcal{E}$, we make an independent decision on whether it should be in $\mathcal{G}_s$ or not. The probability of including $e$ is $p_e$. We further constrain $\sum_e p_e = m$, so that the expected number of sampled edges equals $m$. The budget $m$ is a given sampling parameter.

Theorem 3.2. Under independent edge sampling with budget $m$, the optimal edge probabilities minimizing the sum of variances over $\zeta$'s dimensions are given by: $p_e = \frac{m}{\sum_{e'} \big\|\sum_\ell b_{e'}^{(\ell)}\big\|} \big\|\sum_\ell b_e^{(\ell)}\big\|$.

To prove Theorem 3.2, we make use of the independence among graph edges, and the dependence among layer edges, to obtain the covariance of $\mathbb{1}_e^{(\ell)}$. Then, using the fact that the sum of $p_e$ is a constant, we apply the Cauchy-Schwarz inequality to derive the optimal $p_e$. Details are in Appendix A.

Note that calculating $b_e^{(\ell)}$ requires computing $\tilde{x}_v^{(\ell-1)}$, which increases the complexity of sampling. As a reasonable simplification, we ignore $\tilde{x}_v^{(\ell)}$ to make the edge probability dependent on the graph topology only. Therefore, we choose $p_e \propto \tilde{A}_{v,u} + \tilde{A}_{u,v} = \frac{1}{\deg(u)} + \frac{1}{\deg(v)}$.

The derived optimal edge sampler agrees with the intuition in Section 3.1. If two nodes $u, v$ are connected and they have few neighbors, then $u$ and $v$ are likely to be influential to each other. In this case, the edge probability $p_{u,v} = p_{v,u}$ should be high. The above analysis on edge samplers also inspires us to design other samplers, which are presented in Section 3.4.

Remark We can also apply the above edge sampler to perform layer sampling. Under the independent layer sampling assumption of Chen et al. (2018b), one would sample a connection $(u^{(\ell)}, v^{(\ell+1)})$ with probability $p_{u,v}^{(\ell)} \propto \frac{1}{\deg(u)} + \frac{1}{\deg(v)}$. For simplicity, assume a uniform degree graph (of degree $d$). Then $p_e^{(\ell)} = p$. For an already sampled $u^{(\ell)}$ to connect to layer $\ell+1$, at least one of its edges has to be selected by the layer-$(\ell+1)$ sampler. Clearly, the probability of an input layer node to "survive" the $L$ independent sampling processes is $\big(1 - (1-p)^d\big)^{L-1}$. Such a layer sampler potentially returns an overly sparse minibatch for $L > 1$. On the other hand, connectivity within a minibatch of GraphSAINT never drops with GCN depth: if an edge is present in layer $\ell$, it is present in all layers.
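A minimal sketch of the resulting independent edge sampler follows, assuming edges is a list of (u, v) pairs and degrees an array of node degrees; the clipping of probabilities at 1 is an implementation assumption on our part.

import numpy as np

def edge_sampling_probs(edges, degrees, m):
    # Topology-only simplification of Theorem 3.2:
    # p_e proportional to 1/deg(u) + 1/deg(v), scaled so sum(p_e) ~= m.
    w = np.array([1.0 / degrees[u] + 1.0 / degrees[v] for (u, v) in edges])
    return np.minimum(m * w / w.sum(), 1.0)

def sample_edges(edges, p, rng=None):
    # Include each edge independently with its probability p_e.
    rng = rng or np.random.default_rng()
    keep = rng.random(len(p)) < p
    return [e for e, k in zip(edges, keep) if k]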
" }, { "heading": "3.4 SAMPLERS", "text": "Based on the above variance analysis, we present several light-weight and efficient samplers that GraphSAINT has integrated. Detailed sampling algorithms are listed in Appendix B.

Random node sampler We sample $|\mathcal{V}_s|$ nodes from $\mathcal{V}$ randomly, according to the node probability distribution $P(u) \propto \big\|\tilde{A}_{:,u}\big\|^2$. This sampler is inspired by the layer sampler of Chen et al. (2018b).

Random edge sampler We perform edge sampling as described in Section 3.3.

Random walk based samplers Another way to analyze graph sampling based multi-layer GCNs is to ignore activations. In such a case, $L$ layers can be represented as a single layer with edge weights given by $B = \tilde{A}^L$. Following a similar approach as Section 3.3, if it were possible to pick pairs of nodes (whether or not they are directly connected in the original $\tilde{A}$) independently, then we would set $p_{u,v} \propto B_{u,v} + B_{v,u}$, where $B_{u,v}$ can be interpreted as the probability of a random walk starting at $u$ to end at $v$ in $L$ hops (and $B_{v,u}$ vice-versa). Even though it is not possible to sample a subgraph where such pairs of nodes are independently selected, we still consider a random walk sampler with walk length $L$ as a good candidate for $L$-layer GCNs. There are numerous random walk based samplers proposed in the literature (Ribeiro & Towsley, 2010; Leskovec & Faloutsos, 2006; Hu & Lau, 2013; Li et al., 2015). In the experiments, we implement a regular random walk sampler (with $r$ root nodes selected uniformly at random, where each walker goes $h$ hops), and also the multi-dimensional random walk sampler defined in Ribeiro & Towsley (2010).

For all the above samplers, we return the subgraph induced from the sampled nodes. The induction step adds more connections into the subgraph, and empirically helps improve convergence." }, { "heading": "4 DISCUSSION", "text": "Extensions GraphSAINT admits two orthogonal extensions. First, GraphSAINT can seamlessly integrate other graph samplers. Second, the idea of training by graph sampling is applicable to many GCN architecture variants: 1. Jumping knowledge (Xu et al., 2018): since our GCNs constructed during training are complete, applying skip connections to GraphSAINT is straightforward. On the other hand, for some layer sampling methods (Chen et al., 2018b; Huang et al., 2018), extra modification of their samplers is required, since the jumping knowledge architecture requires layer-$\ell$ samples to be a subset of layer-$(\ell-1)$ samples∗. 2. Attention (Veličković et al., 2017; Fey, 2019; Zhang et al., 2018): while explicit variance reduction is hard due to the dynamically updated attention values, it is reasonable to apply attention within the subgraphs, which are considered representatives of the full graph. Our loss and aggregator normalizations are also applicable†. 3. Others: to support high order layers (Zhou, 2017; Lee et al., 2018; Abu-El-Haija et al., 2019) or even more complicated networks for the task of graph classification (Ying et al., 2018b), we replace the full adjacency matrix $A$ with the (normalized) one of the subgraph, $A_s$, to perform layer propagation.

Comparison GraphSAINT enjoys: 1. high scalability and efficiency, 2. high accuracy, and 3. low training complexity. Point (1) is due to the significantly reduced neighborhood size compared with Hamilton et al. (2017); Ying et al. (2018a); Chen et al. (2018a). Point (2) is due to the better inter-layer connectivity compared with Chen et al. (2018b), and the unbiased minibatch estimator compared with Chiang et al. (2019). Point (3) is due to the simple and trivially parallelizable pre-processing compared with the sampling of Huang et al. (2018) and the clustering of Chiang et al. (2019)." }, { "heading": "5 EXPERIMENTS", "text": "Setup Experiments are under the inductive, supervised learning setting. We evaluate GraphSAINT on the following tasks: 1. classifying protein functions based on the interactions of human tissue proteins (PPI), 2.
categorizing types of images based on the descriptions and common properties of online images (Flickr), 3. predicting communities of online posts based on user comments (Reddit), 4. categorizing types of businesses based on customer reviewers and friendship (Yelp), and 5. classifying product categories based on buyer reviewers and interactions (Amazon). For PPI, we use the small version for the two layer convergence comparison (Table 2 and Figure 2), since Hamilton et al. (2017) and Chen et al. (2018a) report accuracy for this version in their original papers. We use the large version for additional comparison with Chiang et al. (2019) to be consistent with its reported accuracy. All datasets follow “fixed-partition” splits. Appendix C.2 includes further details. ∗The skip-connection design proposed by Huang et al. (2018) does not have such “subset” requirement, and thus is compatible with both graph sampling and layer sampling based methods. †When applying GraphSAINT to GAT (Veličković et al., 2017), we remove the softmax step which normalizes attention values within the same neighborhood, as suggested by Huang et al. (2018). See Appendix C.3.\nWe open source GraphSAINT‡. We compare with six baselines: 1. vanilla GCN (Kipf & Welling, 2016), 2. GraphSAGE (Hamilton et al., 2017), 3. FastGCN (Chen et al., 2018b), 4. S-GCN (Chen et al., 2018a), 5. AS-GCN (Huang et al., 2018), and 6. ClusterGCN (Chiang et al., 2019). All baselines are executed with their officially released code (see Appendix C.3 for downloadable URLs and commit numbers). Baselines and GraphSAINT are all implemented in Tensorflow with Python3. We run experiments on a NVIDIA Tesla P100 GPU (see Appendix C.1 for hardware specification)." }, { "heading": "5.1 COMPARISON WITH STATE-OF-THE-ART", "text": "Table 2 and Figure 2 show the accuracy and convergence comparison of various methods. All results correspond to two-layer GCN models (for GraphSAGE, we use its mean aggregator). For a given dataset, we keep hidden dimension the same across all methods. We describe the detailed architecture and hyperparameter search procedure in Appendix C.3. The mean and confidence interval of the accuracy values in Table 2 are measured by three runs under the same hyperparameters. The training time of Figure 2 excludes the time for data loading, pre-processing, validation set evaluation and model saving. Our pre-processing incurs little overhead in training time. See Appendix D.2 for cost of graph sampling. For GraphSAINT, we implement the graph samplers described in Section 3.4. In Table 2, “Node” stands for random node sampler; “Edge” stands for random edge sampler; “RW” stands for random walk sampler; “MRW” stands for multi-dimensional random walk sampler.\nClearly, with appropriate graph samplers, GraphSAINT achieves significantly higher accuracy on all datasets. For GraphSAINT-Node, we use the same node probability as FastGCN. Thus, the accuracy improvement is mainly due to the switching from layer sampling to graph sampling (see “Remark” in Section 3.3). Compared with AS-GCN, GraphSAINT is significantly faster. The sampler of AS-GCN is expensive to execute, making its overall training time even longer than vanilla GCN. We provide detailed computation complexity analysis on the sampler in Appendix D.2. For S-GCN on Reddit, it achieves similar accuracy as GraphSAINT, at the cost of over 9× longer training time. 
The released code of FastGCN only supports CPU execution, so its convergence curve is dashed.

Table 3 presents additional comparison with ClusterGCN. We use $L \times f$ to specify the architecture, where $L$ and $f$ denote the GCN depth and hidden dimension, respectively. The four architectures are the ones used in the original paper (Chiang et al., 2019). Again, GraphSAINT achieves significant accuracy improvement. Training models with $L > 2$ often requires additional architectural tweaks: ClusterGCN uses its diagonal enhancement technique for the 5-layer PPI and 4-layer Reddit models, while GraphSAINT uses the jumping knowledge connection (Xu et al., 2018) for 4-layer Reddit.

Evaluation on graph samplers From Table 2, random edge and random walk based samplers achieve higher accuracy than the random node sampler. Figure 3 presents a sensitivity analysis on the parameters of "RW". We use the same hyperparameters (except the sampling parameters) and network architecture as those of the "RW" entries in Table 2. We fix the length of each walker to 2 (i.e., the GCN depth), and vary the number of roots $r$ from 250 to 2250. For PPI, increasing $r$ from 250 to 750 significantly improves accuracy. Overall, for all datasets, accuracy stabilizes beyond $r = 750$.

5.2 GraphSAINT ON ARCHITECTURE VARIANTS AND DEEP MODELS

In Figure 4, we train a 2-layer and a 4-layer model of GAT (Veličković et al., 2017) and JK-net (Xu et al., 2018), by using minibatches of GraphSAGE and GraphSAINT respectively. On the two 4-layer architectures, GraphSAINT achieves two orders of magnitude speedup over GraphSAGE, indicating much better scalability on deep models. From the accuracy perspective, 4-layer GAT-SAGE and JK-SAGE do not outperform the corresponding 2-layer versions, potentially due to the smoothening effect caused by the massive neighborhood size. On the other hand, with minibatches returned by our edge sampler, increasing the model depth of JK-SAINT leads to noticeable accuracy improvement (from 0.966 of 2-layer to 0.970 of 4-layer). Appendix D.1 contains additional scalability results.

Figure 3: Sensitivity analysis (test F1-micro versus number of walkers, for PPI, Flickr, Reddit, Yelp and Amazon)

Figure 4: GraphSAINT with JK-net and GAT (Reddit): validation F1-micro versus training time, for 2-layer and 4-layer GraphSAINT and GraphSAGE" }, { "heading": "6 CONCLUSION", "text": "We have presented GraphSAINT, a graph sampling based training method for deep GCNs on large graphs. We have analyzed the bias and variance of the minibatches defined on subgraphs, and proposed normalization techniques and sampling algorithms to improve training quality. We have conducted extensive experiments to demonstrate the advantage of GraphSAINT in accuracy and training time.

An interesting future direction is to develop distributed training algorithms using graph sampling based minibatches. After partitioning the training graph in distributed memory, sampling can be performed independently on each processor. Afterwards, training on the self-supportive subgraphs can significantly reduce the system-level communication cost. To ensure the overall convergence quality, a data shuffling strategy for the graph nodes and edges can be developed together with each specific graph sampler.
Another direction is to perform algorithm-system co-optimization to accelerate the training of GraphSAINT on heterogeneous computing platforms (Zeng et al., 2018; Zeng & Prasanna, 2019). The resolution of "neighbor explosion" by GraphSAINT not only reduces the training computation complexity, but also improves hardware utilization via significantly less data traffic to the slow memory. In addition, task-level parallelization is easy, since the light-weight graph sampling is completely decoupled from the GCN layer propagation." }, { "heading": "ACKNOWLEDGEMENT", "text": "This material is based on work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract Number FA8750-17-C-0086 and National Science Foundation (NSF) under Contract Numbers CCF-1919289 and OAC-1911229. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA or NSF." }, { "heading": "A PROOFS", "text": "Proof of Proposition 3.1. Under the condition that $v$ is sampled in a subgraph:

$$\mathbb{E}\big(\zeta_v^{(\ell+1)}\big) = \mathbb{E}\Big(\sum_{u \in \mathcal{V}} \frac{\tilde{A}_{v,u}}{\alpha_{u,v}} \tilde{x}_u^{(\ell)} \mathbb{1}_{u|v}\Big) = \sum_{u \in \mathcal{V}} \frac{\tilde{A}_{v,u}}{\alpha_{u,v}} \tilde{x}_u^{(\ell)} \mathbb{E}\big(\mathbb{1}_{u|v}\big) = \sum_{u \in \mathcal{V}} \frac{\tilde{A}_{v,u}}{\alpha_{u,v}} \tilde{x}_u^{(\ell)} P\big((u,v) \text{ sampled} \mid v \text{ sampled}\big) = \sum_{u \in \mathcal{V}} \frac{\tilde{A}_{v,u}}{\alpha_{u,v}} \tilde{x}_u^{(\ell)} \frac{P\big((u,v) \text{ sampled}\big)}{P\big(v \text{ sampled}\big)} = \sum_{u \in \mathcal{V}} \frac{\tilde{A}_{v,u}}{\alpha_{u,v}} \tilde{x}_u^{(\ell)} \frac{p_{u,v}}{p_v}, \quad (5)$$

where the second equality is due to linearity of expectation, and the third equality (conditional edge probability) is due to the initial condition that $v$ is sampled in a subgraph.

It directly follows that, when $\alpha_{u,v} = \frac{p_{u,v}}{p_v}$,

$$\mathbb{E}\big(\zeta_v^{(\ell+1)}\big) = \sum_{u \in \mathcal{V}} \tilde{A}_{v,u} \tilde{x}_u^{(\ell)}$$

Proof of Theorem 3.2. Below, we use $\mathrm{Cov}(\cdot)$ to denote covariance and $\mathrm{Var}(\cdot)$ to denote variance. For independent edge sampling as defined in Section 3.3, $\mathrm{Cov}\big(\mathbb{1}_{e_1}^{(\ell_1)}, \mathbb{1}_{e_2}^{(\ell_2)}\big) = 0, \forall e_1 \ne e_2$. And for a full GCN on the subgraph, $\mathrm{Cov}\big(\mathbb{1}_e^{(\ell_1)}, \mathbb{1}_e^{(\ell_2)}\big) = p_e - p_e^2$. To start the proof, we first assume that $b_e^{(\ell)}$ is one dimensional (i.e., a scalar). Now,

$$\mathrm{Var}(\zeta) = \sum_{e,\ell} \Big(\frac{b_e^{(\ell)}}{p_e}\Big)^2 \mathrm{Var}\big(\mathbb{1}_e^{(\ell)}\big) + 2 \sum_{e,\,\ell_1 < \ell_2} \frac{b_e^{(\ell_1)} b_e^{(\ell_2)}}{p_e^2} \mathrm{Cov}\big(\mathbb{1}_e^{(\ell_1)}, \mathbb{1}_e^{(\ell_2)}\big) = \sum_{e,\ell} \frac{\big(b_e^{(\ell)}\big)^2}{p_e} - \sum_{e,\ell} \big(b_e^{(\ell)}\big)^2 + 2 \sum_{e,\,\ell_1 < \ell_2} \frac{b_e^{(\ell_1)} b_e^{(\ell_2)}}{p_e^2} \big(p_e - p_e^2\big) = \sum_e \frac{\big(\sum_\ell b_e^{(\ell)}\big)^2}{p_e} - \sum_e \Big(\sum_\ell b_e^{(\ell)}\Big)^2 \quad (6)$$

Let a given constant $m = \sum_e p_e$ be the expected number of sampled edges. By the Cauchy-Schwarz inequality: $\sum_e \frac{(\sum_\ell b_e^{(\ell)})^2}{p_e} \cdot m = \sum_e \Big(\frac{\sum_\ell b_e^{(\ell)}}{\sqrt{p_e}}\Big)^2 \sum_e \big(\sqrt{p_e}\big)^2 \ge \Big(\sum_{e,\ell} b_e^{(\ell)}\Big)^2$. The equality is achieved when $\Big|\frac{\sum_\ell b_e^{(\ell)}}{\sqrt{p_e}}\Big| \propto \sqrt{p_e}$, i.e., the variance is minimized when $p_e \propto \big|\sum_\ell b_e^{(\ell)}\big|$.

It directly follows that:

$$p_e = \frac{m}{\sum_{e'} \big|\sum_\ell b_{e'}^{(\ell)}\big|} \Big|\sum_\ell b_e^{(\ell)}\Big|$$

For the multi-dimensional case of $b_e^{(\ell)}$, following similar steps as above, it is easy to show that the optimal edge probability to minimize $\sum_i \mathrm{Var}(\zeta_i)$ (where $i$ is the index over $\zeta$'s dimensions) is:

$$p_e = \frac{m}{\sum_{e'} \big\|\sum_\ell b_{e'}^{(\ell)}\big\|} \Big\|\sum_\ell b_e^{(\ell)}\Big\|$$" }, { "heading": "B SAMPLING ALGORITHM", "text": "Algorithm 2 lists the four graph samplers we have integrated into GraphSAINT. The naming of the samplers follows that of Table 2. Note that the sampling parameters $n$ and $m$ specify a budget rather than the actual number of nodes and edges in the subgraph $\mathcal{G}_s$. Since certain nodes or edges in the training graph $\mathcal{G}$ may be repeatedly sampled under a single invocation of the sampler, we often have $|\mathcal{V}_s| < n$ for the node and MRW samplers, $|\mathcal{V}_s| < 2m$ for the edge sampler, and $|\mathcal{V}_s| < r \cdot h$ for the RW sampler. Also note that the edge sampler presented in Algorithm 2 is an approximate version of the independent edge sampler defined in Section 3.4.
The complexity (excluding the subgraph induction step) of the original version in Section 3.4 is $O(|\mathcal{E}|)$, while the complexity of the approximate one is $O(m)$. When $m \ll |\mathcal{E}|$, the approximate version leads to identical accuracy as the original one, for a given $m$." }, { "heading": "C DETAILED EXPERIMENTAL SETUP", "text": "C.1 HARDWARE SPECIFICATION AND ENVIRONMENT

We run our experiments on a single machine with dual Intel Xeon CPUs (E5-2698 v4 @ 2.2GHz), one NVIDIA Tesla P100 GPU (16GB of HBM2 memory) and 512GB DDR4 memory. The code is written in Python 3.6.8 (where the sampling part is written with Cython 0.29.2). We use Tensorflow 1.12.0 on CUDA 9.2 with CUDNN 7.2.1 to train the model on GPU. Since the subgraphs are sampled independently, we run the sampler in parallel on 40 CPU cores.

Algorithm 2 Graph sampling algorithms by GraphSAINT
Input: Training graph $\mathcal{G}(\mathcal{V}, \mathcal{E})$; sampling parameters: node budget $n$; edge budget $m$; number of roots $r$; random walk length $h$
Output: Sampled graph $\mathcal{G}_s(\mathcal{V}_s, \mathcal{E}_s)$
1: function NODE($\mathcal{G}$, $n$)  [Node sampler]
2:   $P(v) := \|\tilde{A}_{:,v}\|^2 \,/\, \sum_{v' \in \mathcal{V}} \|\tilde{A}_{:,v'}\|^2$
3:   $\mathcal{V}_s \leftarrow$ $n$ nodes randomly sampled (with replacement) from $\mathcal{V}$ according to $P$
4:   $\mathcal{G}_s \leftarrow$ node induced subgraph of $\mathcal{G}$ from $\mathcal{V}_s$
5: end function
6: function EDGE($\mathcal{G}$, $m$)  [Edge sampler (approximate version)]
7:   $P((u,v)) := \big(\frac{1}{\deg(u)} + \frac{1}{\deg(v)}\big) \,/\, \sum_{(u',v') \in \mathcal{E}} \big(\frac{1}{\deg(u')} + \frac{1}{\deg(v')}\big)$
8:   $\mathcal{E}_s \leftarrow$ $m$ edges randomly sampled (with replacement) from $\mathcal{E}$ according to $P$
9:   $\mathcal{V}_s \leftarrow$ set of nodes that are end-points of edges in $\mathcal{E}_s$
10:  $\mathcal{G}_s \leftarrow$ node induced subgraph of $\mathcal{G}$ from $\mathcal{V}_s$
11: end function
12: function RW($\mathcal{G}$, $r$, $h$)  [Random walk sampler]
13:   $\mathcal{V}_{\text{root}} \leftarrow$ $r$ root nodes sampled uniformly at random (with replacement) from $\mathcal{V}$
14:   $\mathcal{V}_s \leftarrow \mathcal{V}_{\text{root}}$
15:   for $v \in \mathcal{V}_{\text{root}}$ do
16:     $u \leftarrow v$
17:     for $d = 1$ to $h$ do
18:       $u \leftarrow$ node sampled uniformly at random from $u$'s neighbors
19:       $\mathcal{V}_s \leftarrow \mathcal{V}_s \cup \{u\}$
20:     end for
21:   end for
22:   $\mathcal{G}_s \leftarrow$ node induced subgraph of $\mathcal{G}$ from $\mathcal{V}_s$
23: end function
24: function MRW($\mathcal{G}$, $n$, $r$)  [Multi-dimensional random walk sampler]
25:   $\mathcal{V}_{\text{FS}} \leftarrow$ $r$ root nodes sampled uniformly at random (with replacement) from $\mathcal{V}$
26:   $\mathcal{V}_s \leftarrow \mathcal{V}_{\text{FS}}$
27:   for $i = r+1$ to $n$ do
28:     Select $u \in \mathcal{V}_{\text{FS}}$ with probability $\deg(u) / \sum_{v \in \mathcal{V}_{\text{FS}}} \deg(v)$
29:     $u' \leftarrow$ node randomly sampled from $u$'s neighbors
30:     $\mathcal{V}_{\text{FS}} \leftarrow (\mathcal{V}_{\text{FS}} \setminus \{u\}) \cup \{u'\}$
31:     $\mathcal{V}_s \leftarrow \mathcal{V}_s \cup \{u\}$
32:   end for
33:   $\mathcal{G}_s \leftarrow$ node induced subgraph of $\mathcal{G}$ from $\mathcal{V}_s$
34: end function

C.2 ADDITIONAL DATASET DETAILS

Here we present the detailed procedures to prepare the Flickr, Yelp and Amazon datasets.

The Flickr dataset originates from NUS-wide§. The SNAP website¶ collected Flickr data from four different sources including NUS-wide, and generated an un-directed graph. One node in the graph represents one image uploaded to Flickr. If two images share some common properties (e.g., same geographic location, same gallery, comments by the same user, etc.), there is an edge between the nodes of these two images. We use as the node features the 500-dimensional bag-of-word representation of the images provided by NUS-wide. For labels, we scan over the 81 tags of each image and manually merge them into 7 classes. Each image belongs to one of the 7 classes.

The Yelp dataset is prepared from the raw json data of businesses, users and reviews provided in the open challenge website‖. For nodes and edges, we scan the friend list of each user in the raw json file of users. If two users are friends, we create an edge between them. We then filter out all the reviews by each user and separate the reviews into words.
Each review word is converted to a 300-dimensional vector using the Word2Vec model pre-trained on GoogleNews∗∗. The word vectors of each node are added and normalized to serve as the node feature (i.e., $x_v$). As for the node labels, we scan the raw json file of businesses, and use the categories of the businesses reviewed by a user $v$ as the multi-class label of $v$.

For the Amazon dataset, a node is a product on the Amazon website and an edge $(u,v)$ is created if products $u$ and $v$ are bought by the same customer. Each product contains text reviews (converted to 4-grams) from the buyer. We use SVD to reduce the dimensionality of the 4-gram representation to 200, and use the obtained vectors as the node feature. The labels represent the product categories (e.g., books, movies, shoes).

Figure 5 shows the degree distribution of the five graphs. A point $(k, p)$ in the plot means the probability of a node having degree at least $k$ is $p$.

C.3 ADDITIONAL DETAILS IN EXPERIMENTAL CONFIGURATION

Table 4 summarizes the URLs to download the baseline codes.

The optimizer for GraphSAINT and all baselines is Adam (Kingma & Ba, 2014). For all baselines and datasets, we perform grid search on the hyperparameter space defined by:

• Hidden dimension: {128, 256, 512}

§http://lms.comp.nus.edu.sg/research/NUS-WIDE.htm
¶https://snap.stanford.edu/data/web-flickr.html
‖https://www.yelp.com/dataset
∗∗https://code.google.com/archive/p/word2vec/

The hidden dimensions used for Table 2, Figure 2, Figure 3 and Figure 4 are: 512 for PPI, 256 for Flickr, 128 for Reddit, 512 for Yelp and 512 for Amazon.

All methods terminate after a fixed number of epochs based on convergence. We save the model producing the highest validation set F1-micro score, and reload it to evaluate the test set accuracy.

For vanilla GCN and AS-GCN, we set the batch size to their default value 512. For GraphSAGE, we use the mean aggregator with the default batch size 512. For S-GCN, we set the flags -cv -cvd (which stand for "control variate" and "control variate dropout") with pre-computation of the first layer aggregation. According to the paper (Chen et al., 2018a), such pre-computation significantly reduces training time without affecting accuracy. For S-GCN, we use the default batch size 1000, and for FastGCN, we use the default value 400. For ClusterGCN, its batch size is determined by two parameters: the cluster size and the number of clusters per batch. We sweep the cluster size from 500 to 10000 with step 500, and the number of clusters per batch over {1, 10, 20, 40}, to determine the optimal configuration for each dataset / architecture. Considering that for ClusterGCN the cluster structure may be sensitive to the cluster size, and for FastGCN the minibatch connectivity may increase with the sample size, we present additional experimental results revealing the relation between accuracy and batch size in Appendix D.3.

The configuration of GraphSAINT to reproduce the Table 2 results is shown in Table 5, and the configuration to reproduce the Table 3 results is shown in Table 6.

Below we describe the configuration for Figure 4.

The major difference between a normal GCN and a JK-net (Xu et al., 2018) is that JK-net has an additional final layer that aggregates all the output hidden features of graph convolutional layers 1 to $L$. Mathematically, the additional aggregation layer outputs the final embedding $x_{JK}$ as follows:

$$x_{JK} = \sigma\Big(W_{JK}^\top \cdot \bigoplus_{\ell=1}^{L} x_v^{(\ell)}\Big) \quad (7)$$

where, based on Xu et al. (2018), $\bigoplus$ is the vector aggregation operator: max-pooling, concatenation, or LSTM (Hochreiter & Schmidhuber, 1997) based aggregation.
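As an illustration, a minimal PyTorch sketch of this final aggregation (covering only the max-pooling and concatenation options of Eq. 7, without the LSTM variant or the final linear map) might look as follows; jk_aggregate is a hypothetical helper of ours, not part of the released code.

import torch

def jk_aggregate(layer_feats, mode="concat"):
    # layer_feats: list of per-layer node features x^(1), ..., x^(L),
    # each of shape (num_nodes, hidden_dim).
    if mode == "concat":
        return torch.cat(layer_feats, dim=-1)
    if mode == "max":
        return torch.stack(layer_feats, dim=0).max(dim=0).values
    raise ValueError(f"unknown aggregation mode: {mode}")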
The graph attention of GAT (Veličković et al., 2017) calculates the edge weights for neighbor aggregation by an additional neural network. With multi-head ($K$) attention, the layer-$(\ell-1)$ features propagate to layer $\ell$ as follows:

$$x_v^{(\ell)} = \Big\Vert_{k=1}^{K} \sigma\Big(\sum_{u \in \text{neighbor}(v)} \alpha_{u,v}^k W^k x_u^{(\ell-1)}\Big) \quad (8)$$

where $\Vert$ is the vector concatenation operation, and the coefficient $\alpha$ is calculated with the attention weights $a^k$ by:

$$\alpha_{u,v}^k = \text{LeakyReLU}\Big(\big(a^k\big)^\top \big[W^k x_u \,\Vert\, W^k x_v\big]\Big) \quad (9)$$

Note that the $\alpha$ calculation is slightly different from the original equation in Veličković et al. (2017). Namely, GAT-SAINT does not normalize $\alpha$ by softmax across all neighbors of $v$. We make such a modification since, under the minibatch setting, node $v$ does not see all its neighbors in the training graph. The removal of softmax is also seen in the attention design of Huang et al. (2018). Note that during minibatch training, GAT-SAINT further applies another edge coefficient on top of attention for aggregator normalization.

Table 7 shows the configuration of the GAT-SAINT and JK-SAINT curves in Figure 4.

Figure 7: Fraction of training time on sampling (for PPI, Flickr, Reddit, Yelp and Amazon under the Node, Edge, RW and MRW samplers)" }, { "heading": "D ADDITIONAL EXPERIMENTS", "text": "D.1 TRAINING EFFICIENCY ON DEEP MODELS

We evaluate the training efficiency of deeper GCNs. We only compare with S-GCN, since implementations of the other layer sampling based methods do not yet support arbitrary model depth. The batch size and hidden dimension are the same as in Table 2. On the two large graphs (Reddit and Yelp), we increase the number of layers and measure the average time per minibatch execution. In Figure 6, the training cost of GraphSAINT is approximately linear with the GCN depth. The training cost of S-GCN grows dramatically when increasing the depth. This reflects the "neighbor explosion" phenomenon (even though the expansion factor of S-GCN is just 2). On Yelp, S-GCN gives an "out-of-memory" error for models beyond 5 layers.

D.2 COST OF SAMPLING AND PRE-PROCESSING

Cost of graph samplers of GraphSAINT Graph sampling introduces little training overhead. Let $t_s$ be the average time to sample one subgraph on a multi-core machine, and $t_t$ the average time to perform the forward and backward propagation on one minibatch on GPU. Figure 7 shows the ratio $t_s / t_t$ for various datasets. The parameters of the samplers are the same as in Table 2. For the Node, Edge and RW samplers, we observe that the time to sample one subgraph is in most cases less than 25% of the training time. The MRW sampler is more expensive to execute. Regarding the complete pre-processing procedure, we repeatedly run the sampler $N = 50 \cdot |\mathcal{V}| / |\mathcal{V}_s|$ times before training, to estimate the node and edge probabilities as discussed in Section 3.2 (where $|\mathcal{V}_s|$ is the average subgraph size). These sampled subgraphs are reused as training minibatches. Thus, if training runs for more than $N$ iterations, the pre-processing is nearly zero-cost. Under the setting of Table 2, pre-processing on PPI, Yelp and Amazon does not incur any overhead in training time. Pre-processing on Flickr and Reddit (with the RW sampler) takes less than 40% and 15% of their corresponding total training time, respectively.

Cost of the layer sampler of AS-GCN AS-GCN uses an additional neural network to estimate the conditional sampling probability for the previous layer.
For a node v already sampled in layer `, features of layer-(`− 1) corresponding to all v’s neighbors need to be fed to the sampling neural network to obtain the node probability. For sake of analysis, assume the sampling network is a single layer MLP, whose weight WMLP has the same shape as the GCN weights W (`). Then we can show, for a L-layer GCN on a degree-d graph, per epoch training complexity of AS-GCN is approximately γ = (d · L) / ∑L−1 `=0 d\n` times that of vanilla GCN. For L = 2, we have γ ≈ 2. This explains the observation that AS-GCN is slower than vanilla GCN in Figure 2. Additional, Table 8 shows the training time breakdown for AS-GCN. Clearly, its sampler is much more expensive than the graph sampler of GraphSAINT.\nCost of clustering of ClusterGCN ClusterGCN uses the highly optimized METIS software†† to perform clustering. Table 9 summarizes the time to obtain the clusters for the five graphs. On the large and dense Amazon graph, the cost of clustering increase dramatically. The pre-processing time of ClusterGCN on Amazon is more than 4× of the total training time. On the other hand, the sampling cost of GraphSAINT does not increase significantly for large graphs (see Figure 7).\nTaking into account the pre-processing time, sampling time and training time altogether, we summarize the total convergence time of GraphSAINT and ClusterGCN in Table 10 (corresponding to Table 2 configuration). On graphs that are large and dense (e.g., Amazon), GraphSAINT achieves significantly faster convergence. Note that both the sampling of GraphSAINT and clustering of ClusterGCN can be performed offline.\nD.3 EFFECT OF BATCH SIZE\nTable 11 shows the change of test set accuracy with batch sizes. For each row of Table 11, we fix the batch size, tune the other hyperparameters according to Appendix C.3, and report the highest test set accuracy achieved. For GraphSAGE, S-GCN and AS-GCN, their default batch sizes (512,1000 and 512, respectively) lead to the highest accuracy on all datasets. For FastGCN, increasing the default batch size (from 400 to 4000) leads to noticeable accuracy improvement. For ClusterGCN, different datasets correspond to different optimal batch sizes. Note that the accuracy in Section 5.1 is already tuned by identifying the optimal batch size on a per graph basis.\nFor FastGCN, intuitively, increasing batch size may help with accuracy improvement since the minibatches may become better connected. Such intuition is verified by the rows of 400 and 2000. However, increasing the batch size from 2000 to 4000 does not further improve accuracy significantly. For ClusterGCN, the optimal batch size depends on the cluster structure of the training graph. For PPI, small batches are better, while for Amazon, batch size does not have significant impact on accuracy. For GraphSAGE, overly large batches may have negative impact on accuracy due to neighbor explosion. Approximately, GraphSAGE expand 10× more neighbors per layer. For a 2-layer GCN, a size 2 × 103 minibatch would then require the support of 2 × 105 nodes from the ††http://glaros.dtc.umn.edu/gkhome/metis/metis/download ∗Default batch size ¶The training does not converge. ‡The codes throw runtime error on the large datasets (Yelp or Amazon).\ninput layer. Note that the full training graph size of Reddit is just around 1.5× 105. Thus, no matter which nodes are sampled in the output layer, GraphSAGE would almost always propagate features within the full training graph for initial layers. 
We suspect this would lead to difficulties in learning. For S-GCN, with batch size of 500, it fails to learn properly on Reddit and Yelp. The accuracy fluctuates in a region of very low value, even after appropriate hyperparameter tuning. For AS-GCN, its accuracy is not sensitive to the batch size, since AS-GCN addresses neighbor explosion and also ensures good inter-layer connectivity within the minibatch." } ]
2020
GraphSAINT: GRAPH SAMPLING BASED INDUCTIVE LEARNING METHOD
SP:37e258666bfb1bbd89749be3543e3511bf3a81f7
[ "This paper studies data augmentation in the regime where labels for the augmented datapoints are known. Special emphasis is put on the study of overparametrised linear models with minimum Euclidean norm of the regression weights (a.k.a. ridgeless regression). The results of this study are then used to motivate their “X-regularization” method, a semi-supervised learning algorithm which they test on the task of improving accuracy of adversarially trained models. The authors report improvement in accuracy of adversarially trained classifiers on CIFAR-10 when X-regularization is applied.", "This paper provides some theory into the question of whether data augmentation can hurt test-set performance. To paraphrase the theory, data augmentation can hurt when it causes the model to learn a spurious local details instead of global structure, even if the augmented data comes from the same (predictive) distribution, and even if the true model lies in the hypothesis class of the learned model. Ironically these effects may be diminished in the large-sample regime, where data augmentation is less important in practice. Motivated by these issues, the authors propose \"X-regularization\" which requires that models trained on standard and augmented data produce similar predictions on unlabeled data. The paper includes a few experiments on a toy staircase regression problem as well as some ResNet experiments on CIFAR-10." ]
We study covariate-shifted data augmentation where the augmented targets are drawn from the true predictive distribution but the inputs are shifted. Empirically, some forms of data augmentation such as adversarial training improve robustness, but increase test error. We provide precise conditions under which data augmentation can increase test error for minimum norm interpolation estimators in linear regression. As a fix, we propose X-regularization which uses unlabeled data to regularize the parameters towards the non-augmented estimate. We prove that augmentation with X-regularization never increases test error in linear regression. Empirically, X-regularization consistently improves both robustness and standard test error across different adversarial training algorithms and perturbations on CIFAR-10.
[]
[ { "authors": [ "P.L. Bartlett", "P.M. Long", "G. Lugosi", "A. Tsigler" ], "title": "Benign overfitting in linear regression. arXiv, 2019", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "M. Belkin", "D. Hsu", "J. Xu" ], "title": "Two models of double descent for weak features", "venue": null, "year": 2019 }, { "authors": [ "B. Biggio", "I. Corona", "D. Maiorca", "B. Nelson", "N. Šrndić", "P. Laskov", "G. Giacinto", "F. Roli" ], "title": "Evasion attacks against machine learning at test time", "venue": "In Joint European conference on machine learning and knowledge discovery in databases,", "year": 2013 }, { "authors": [ "Y. Carmon", "A. Raghunathan", "L. Schmidt", "P. Liang", "J.C. Duchi" ], "title": "Unlabeled data improves adversarial robustness", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "E.D. Cubuk", "B. Zoph", "S.S. Schoenholz", "Q.V. Le" ], "title": "Intriguing properties of adversarial examples", "venue": "arXiv preprint arXiv:1711.02846,", "year": 2017 }, { "authors": [ "S. Diamond", "S. Boyd" ], "title": "CVXPY: A Python-embedded modeling language for convex optimization", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2016 }, { "authors": [ "L. Engstrom", "B. Tran", "D. Tsipras", "L. Schmidt", "A. Madry" ], "title": "Exploring the landscape of spatial robustness", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "A. Fawzi", "O. Fawzi", "P. Frossard" ], "title": "Analysis of classifiers robustness to adversarial perturbations", "venue": "Machine Learning,", "year": 2018 }, { "authors": [ "J. Friedman", "T. Hastie", "R. Tibshirani" ], "title": "The elements of statistical learning, volume 1. Springer series in statistics New York, NY, USA: Springer series in statistics", "venue": null, "year": 2001 }, { "authors": [ "I.J. Goodfellow", "J. Shlens", "C. Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "T. Hastie", "A. Montanari", "S. Rosset", "R.J. Tibshirani" ], "title": "Surprises in high-dimensional ridgeless least squares interpolation", "venue": "arXiv preprint arXiv:1903.08560,", "year": 2019 }, { "authors": [ "P. Kovanic" ], "title": "On the pseudoinverse of a sum of symmetric matrices with applications to estimation", "venue": "Kybernetika,", "year": 1979 }, { "authors": [ "A. Krizhevsky", "I. Sutskever", "G.E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2012 }, { "authors": [ "S. Laine", "T. Aila" ], "title": "Temporal ensembling for semi-supervised learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "A. Lamb", "V. Verma", "J. Kannala", "Y. Bengio" ], "title": "Interpolated adversarial training: Achieving robust neural networks without sacrificing too much accuracy", "venue": null, "year": 2019 }, { "authors": [ "T. Liang", "A. Rakhlin" ], "title": "Just interpolate: Kernel” ridgeless” regression can generalize", "venue": "arXiv preprint arXiv:1808.00387,", "year": 2018 }, { "authors": [ "S. Ma", "R. Bassily", "M. 
Belkin" ], "title": "The power of interpolation: Understanding the effectiveness of SGD in modern over-parametrized learning", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "A. Madry", "A. Makelov", "L. Schmidt", "D. Tsipras", "A. Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "T. Miyato", "S. Maeda", "S. Ishii", "M. Koyama" ], "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "A. Najafi", "S. Maeda", "M. Koyama", "T. Miyato" ], "title": "Robustness to adversarial perturbations in learning from incomplete data", "venue": null, "year": 1905 }, { "authors": [ "P. Nakkiran" ], "title": "Adversarial robustness may be at odds with simplicity", "venue": "arXiv preprint arXiv:1901.00532,", "year": 2019 }, { "authors": [ "A. Oliver", "A. Odena", "C.A. Raffel", "E.D. Cubuk", "I. Goodfellow" ], "title": "Realistic evaluation of deep semi-supervised learning algorithms", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "M. Sajjadi", "M. Javanmardi", "T. Tasdizen" ], "title": "Regularization with stochastic transformations and perturbations for deep semi-supervised learning", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "H. Scudder" ], "title": "Probability of error of some adaptive pattern-recognition machines", "venue": "IEEE Transactions on Information Theory,", "year": 1965 }, { "authors": [ "C. Szegedy", "W. Zaremba", "I. Sutskever", "J. Bruna", "D. Erhan", "I. Goodfellow", "R. Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "D. Tsipras", "S. Santurkar", "L. Engstrom", "A. Turner", "A. Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "J. Uesato", "J. Alayrac", "P. Huang", "R. Stanforth", "A. Fawzi", "P. Kohli" ], "title": "Are labels required for improving adversarial robustness", "venue": null, "year": 1905 }, { "authors": [ "Q. Xie", "Z. Dai", "E. Hovy", "M. Luong", "Q.V. Le" ], "title": "Unsupervised data augmentation", "venue": "arXiv preprint arXiv:1904.12848,", "year": 2019 }, { "authors": [ "L. Yaeger", "R. Lyon", "B. Webb" ], "title": "Effective training of a neural network character classifier for word recognition", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 1996 }, { "authors": [ "F. Yang", "Z. Wang" ], "title": "Heinze-Deml. Invariance-inducing regularization using worst-case transformations suffices to boost accuracy and spatial robustness", "venue": null, "year": 1906 }, { "authors": [ "S. Zagoruyko", "N. Komodakis" ], "title": "Wide residual networks", "venue": "In British Machine Vision Conference,", "year": 2016 }, { "authors": [ "H. Zhang", "Y. Yu", "J. Jiao", "E.P. Xing", "L.E. Ghaoui", "M.I. Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Adversarial training improves the robustness of neural networks to perturbations, commonly referred to as adversarial examples (Goodfellow et al., 2015; Szegedy et al., 2014; Biggio et al., 2013). However, adversarial training also causes an undesirable increase in the error on the unperturbed images (test error). How do we obtain both robust and accurate networks? At the core, adversarial training is a form of data augmentation where we augment the training set with worst-case perturbations of each training image within a ball defined by the attack model. In this work, we study the general question of how to train classifiers on augmented training data without causing an increase in test error, while simultaneously preserving the benefits of augmentation such as robustness.\nFirst, we analyze why some forms of data augmentation such as with adversarial `∞ perturbations increase error. Previous works (Tsipras et al., 2019; Zhang et al., 2019; Fawzi et al., 2018; Nakkiran, 2019) provide simple constructions to explain the increase in test error with adversarial perturbations, but rely on assumptions, such as incorrect labeling of the adversarial perturbations or insufficient complexity of the hypothesis class, that we do not expect to hold in practice. We seek a simple theoretical setup that can shed light on why augmentation, even with label-preserving perturbations in a well-specified setting, causes an increase in test error. On the surface, it seems like we have only added information about the label distribution, so why does the test error increase?\nIn this work, we theoretically study minimum norm interpolation in well-specified linear regression, and show that data augmentation with label-preserving perturbations can increase the test error in some regimes, even when the targets are noiseless. For example, Figure 1(a) illustrates a function interpolation problem via cubic splines which exemplifies this phenomenon. Without data augmentation, the estimated predictor (dashed blue) is a line that captures the global structure and obtains low error. Data augmentation with local perturbations (crosses) encourages the predictor to fit the local structure of the high density points but compromises the global structure on the tail (solid orange) (Figure 1(b)). We show that this tension between local and global fit stems from the estimator having the wrong inductive bias. In particular, the minimum norm estimator minimizes a generic parameter norm while the test error is measured by a possibly different norm on the parameter error vector which depends on the data distribution (Section 4.1). Further, one might expect augmentation to be most helpful in low data settings. We show that in linear regression, this is also exactly the regime where augmentation can be most harmful. On real datasets, we similarly observe that data augmentation can be more detrimental with a smaller original training set (Section 5).\nMotivated by our analysis of interpolation in linear regression, we propose a new estimator for data augmentation based on X-regularization (Section 6). X-regularization encourages the data augmented\ninterpolant to stay close to the original interpolant while fitting the extra augmented points. We prove that X-regularization eliminates the increase in test error upon data augmentation in the case of noiseless linear regression. 
See Figure 1(c) for its effect on the spline interpolation problem.

X-regularization naturally extends to more general losses and complex models, and is closely related to self-training (Scudder, 1965), the classical semi-supervised learning algorithm. For the particular setting of robustness, X-regularization applied to adversarial training takes the form of robust self-training (RST) that was recently proposed in (Carmon et al., 2019; Najafi et al., 2019; Uesato et al., 2019). The previous works view RST as a way to beat the sample complexity barrier of robustness. By showing that RST is an instantiation of X-regularization, we motivate RST as an appropriate data dependent regularizer that improves both standard accuracy and robustness. We evaluate the effect of RST on standard and robust accuracy with different adversarial training losses and perturbations on CIFAR-10 in Section 6.3. With $\ell_\infty$ perturbations, we find that RST improves standard accuracy by 4−6% while maintaining or even improving the robustness achieved by the vanilla adversarial training counterpart. With random and adversarial rotations, RST improves standard accuracy by ∼1% and robust accuracy by 1−3%. Our experiments suggest that RST, and more broadly using unlabeled data with X-regularization, is a promising approach to mitigate the undesirable drop in standard accuracy when training robust networks." }, { "heading": "2 RELATED WORK", "text": "The detrimental effect of data augmentation has been previously studied in the context of a "tradeoff" between accuracy and robustness.

Understanding the tradeoff. In an attempt to explain the tradeoff between robustness and accuracy, Tsipras et al. (2019); Zhang et al. (2019); Fawzi et al. (2018); Nakkiran (2019) provide simple constructions that showcase an inherent tension between these objectives even in the limit of infinite data. These constructions rely on either non-label-preserving perturbations or insufficient model complexity to express a robust and accurate classifier. However, in practice, we typically augment with "imperceptible" perturbations that do not change the label, and we assume the large neural networks used in practice to be expressive enough to contain a robust and accurate classifier. We address both these insufficiencies by studying covariate-shifted data augmentation, where the extra data is labeled according to the same predictive distribution as the original data, and well-specified linear regression.

Mitigating the tradeoff. Existing proposals to mitigate the observed increase in standard error caused by adversarial augmentations are based on finding better architectures via Neural Architecture Search (Cubuk et al., 2017) or changing the neural net training algorithm (Lamb et al., 2019). While these methods have shown some success, they are restricted to neural networks. We complement this line of work by studying this tradeoff more generally, and also provide some theoretical justification." }, { "heading": "3 SETUP", "text": "" }, { "heading": "3.1 COVARIATE-SHIFTED DATA AUGMENTATION.", "text": "Let $P_{xy}$ denote the underlying distribution of $(x, y)$ pairs, $P_x$ its marginal on $\mathbb{R}^d$, and $P_y(\cdot \mid x)$ the conditional distribution of the targets given inputs.

We refer to the training data from the underlying distribution $P_{xy}$ as the standard training data. Formally, we have $n$ pairs $(x_i, y_i) \sim P_{xy}$ forming the measurement matrix $X_{\text{std}} = [x_1, x_2, \ldots, x_n]^\top \in \mathbb{R}^{n \times d}$ and target vector $y_{\text{std}} = [y_1, y_2, \ldots, y_n]^\top \in \mathbb{R}^n$.
Analogously , we consider augmenting the training set with “extra” training points denoted by Xext = [x̃1,x̃2,...x̃m]\n> ∈Rm×d with associated targets yext = [ỹ1,ỹ2,...ỹm]> ∈Rm. We focus on covariate-shifted data augmentations which includes most forms of data augmentations used in practice . Here, the targets yext are drawn from the same underlying predictive distribution Py(· | x) as the standard data, while the extra inputsXext could be arbitrary.\nExamples. Typically , the “extra” training points are constructed as follows: x̃i = T (xi), ỹi = yi, where T is some label-preserving transformation thereby enforcing that the extra targets are as if sampled from Py(· |x). Example transformations include translations, horizontal flips, small rotations, small `∞ perturbations in vision, and replacing words with their synonyms in NLP. However, our treatment of data augmentation in this work is more general where Xext is not necessarily obtained via transformations of Xstd. Empirically, our main focus is on mitigating the increase in test error upon adversarial training. Most popular forms of adversarial training are instances of covariate-shifted data augmentation . Consider projected gradient adversarial training (PG-AT) (Madry et al., 2018) which confers robustness to adversarial examples (Szegedy et al., 2014) but also causes an increase in standard test error. PG-AT can be viewed as a form of iterative data augmentation that falls in our framework above. Formally, letB(x) be the set of perturbations of x that we would like to be robust against. We assume that the label is constant overB(x). The transformation at step t of the training for data point xi is Tt(xi) = argmax\nx∈B(xi) `(θ̂t,xi,yi), where θ̂t are the model parameters at time step t.\nThe worst-case (max loss) perturbation is approximated via the projected gradient method." }, { "heading": "3.2 MINIMUM NORM INTERPOLATION IN WELL-SPECIFIED LINEAR REGRESSION", "text": "Consider a regression task where the targets y ∈ R are drawn from the conditional distribution Py(· |x)=N (x>θ?, σ), for some vector θ?∈Rd. Our goal is to learn a linear predictor fθ(x)=x>θ . In this work, we focus on interpolating estimators, which draws motivation from modern machine learning models that achieve near zero training loss (on both standard and extra augmented points). Interpolating estimators for linear models are analyzed in many recent works (Ma et al., 2018; Belkin et al., 2018; Hastie et al., 2019; Liang & Rakhlin, 2018; Bartlett et al., 2019). We present our results for interpolating linear regression estimators with minimum Euclidean norm, but our analysis directly applies to more general Mahalanobis norms via suitable rotations. See Appendix A. Given inputs X∈Rn×d and corresponding targets Y ∈Rn as training data, we define the following minimum norm interpolation estimator as\nθ̂=argmin θ\n{ ‖θ‖2 :Xθ=Y } . (1)\nIn particular , we compare the following estimators : (i) the standard estimator θ̂std which has [Xstd,ystd] as training data and (ii) the data augmented estimator θ̂aug with X = [Xstd;Xext],Y = [ystd;yext] as training data:\nθ̂std =argmin θ\n{ ‖θ‖2 :Xstdθ=ystd } and θ̂aug =argmin\nθ\n{ ‖θ‖2 :Xstdθ=ystd,Xextθ=yext } . (2)" }, { "heading": "4 BIAS AND VARIANCE OF MINIMUM NORM INTERPOLANTS", "text": "We evaluate the two estimators described in Equation 2 using the error on a random sample xtest drawn from Px. Let Σ be the covariance of Px. 
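To make the two estimators in Equation 2 concrete, the following is a minimal NumPy sketch (with hypothetical synthetic data and names); it relies on the fact that np.linalg.lstsq returns the minimum Euclidean norm solution of an underdetermined system.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m = 50, 10, 5                          # overparameterized regime: d >> n

theta_star = rng.normal(size=d)              # true parameters
Sigma = np.diag(np.linspace(1.0, 0.01, d))   # population covariance of P_x

X_std = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
y_std = X_std @ theta_star                   # noiseless targets
X_ext = X_std[:m] + 0.1 * rng.normal(size=(m, d))   # perturbed "extra" inputs
y_ext = X_ext @ theta_star                   # targets from the same predictive distribution

# Minimum norm interpolants of Equation 2; lstsq returns the
# minimum Euclidean norm solution when the system is underdetermined.
theta_std = np.linalg.lstsq(X_std, y_std, rcond=None)[0]
theta_aug = np.linalg.lstsq(np.vstack([X_std, X_ext]),
                            np.concatenate([y_std, y_ext]), rcond=None)[0]

def risk(theta):
    # (theta - theta*)^T Sigma (theta - theta*): the error on x_test ~ P_x
    e = theta - theta_star
    return e @ Sigma @ e

print(risk(theta_std), risk(theta_aug))      # augmentation may increase or decrease this
```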
We focus on the error conditioned on Xstd, Xext, which decomposes into a bias and a variance term as follows:

R(θ̂) = E[(x>test(θ̂−θ?))²] = E[(θ̂−θ?)>Σ(θ̂−θ?)] = (E[θ̂]−θ?)>Σ(E[θ̂]−θ?) + tr(Cov(θ̂)Σ), (3)

where we call the first term the bias B(θ̂) and the second term the variance V(θ̂), and the expectation is taken over the randomness in xtest and the targets ystd, yext conditioned on Xstd, Xext. In this section, we treat Xstd, Xext as fixed quantities. However, since Xstd consists of samples from Px, the number of samples dictates some structure on Xstd. We touch upon this aspect in Section 5, where we study how the effect of augmentation varies with the number of samples in the original training set, both theoretically and empirically." }, { "heading": "4.1 BIAS OF MINIMUM NORM INTERPOLANTS", "text": "For the minimum norm interpolant with X, Y as training data, the bias term B(θ̂) can be expressed as follows. In expectation (over targets Y), any interpolating estimator recovers θ? in the column space of X, but is unconstrained on Null(X>X). The minimum norm interpolant sets the component in Null(X>X) to zero. Formally,

E[θ̂] = (X>X)†X>Xθ? =⇒ B(θ̂) = θ?>Π⊥X ΣΠ⊥X θ?,

where Π⊥X = I−(X>X)†(X>X) is the projection matrix onto Null(X>X).

We now compare the bias of the standard and data augmented estimators (defined in Equation 2). It is convenient to define Σstd = X>stdXstd and Σaug = X>stdXstd + X>extXext. Let Π⊥std = I−Σ†stdΣstd and Π⊥aug = I−Σ†augΣaug be the projection matrices onto Null(Σstd) and Null(Σaug) respectively. Then,

B(θ̂std) = θ?>Π⊥stdΣΠ⊥stdθ? and B(θ̂aug) = θ?>Π⊥augΣΠ⊥augθ?. (4)

Remark 1. Note that the bias in parameter error is ‖E[θ̂]−θ?‖²₂ = ‖Π⊥Xθ?‖²₂. Since Null(Σaug) ⊆ Null(Σstd), we have ‖Π⊥augθ?‖²₂ ≤ ‖Π⊥stdθ?‖²₂, so data augmentation always reduces the bias in parameter error. However, the bias in test error upon augmentation B(θ̂aug) could be larger or smaller than B(θ̂std)." }, { "heading": "4.1.1 SIMPLE LINEAR PROBLEM IN R3 WHERE ADDING DATA INCREASES BIAS", "text": "The following example illustrates how the interaction between the column spaces of the standard inputs Xstd, the extra inputs Xext and the underlying true parameter θ? can cause data augmentation to increase bias.

For simplicity, we choose Σ = diag([λ1, λ2, λ3]) with λ2 ≫ λ1, Xstd = e3 and additional data Xext = e1+e2, where e1, e2, e3 denote the standard basis vectors in R3. For brevity in notation, we denote E[θ̂std] by θ̂std and E[θ̂aug] by θ̂aug.

Recall that by virtue of being minimum norm interpolants, (θ̂std−θ?) ∈ Null(Σstd) = span{e1, e2} and (θ̂aug−θ?) ∈ Null(Σaug) = {ρ(e1−e2) : ρ ∈ R}. Figure 2 depicts these parameter errors for different choices of θ?.

Bias under different settings of θ?. Plugging these terms into the bias expression in Equation (4) yields

B(θ̂std) = (θ?1)²λ1 + (θ?2)²λ2 and B(θ̂aug) = (1/4)(θ?1−θ?2)²λ1 + (1/4)(θ?1−θ?2)²λ2.

Since in our construction of Σ we have λ2 ≫ λ1, the bias expression is dominated by the coefficient on λ2, which is the projection of the parameter error on e2 (red lines in Figure 2). Depending on θ?, B(θ̂aug) could be larger or smaller than B(θ̂std). In particular,

(i) when θ?1 ≫ θ?2 as in Fig. 2 (a), augmenting with Xext can increase bias, B(θ̂aug) ≫ B(θ̂std). Even though the augmented estimator has lower parameter error overall (‖θ̂aug−θ?‖2 ≤ ‖θ̂std−θ?‖2), the increase in parameter error along e2 dominates the effect on the bias because λ2 ≫ λ1. (ii) when θ?2 ≫ θ?1 as in Fig. 2 (b), the same Xext causes B(θ̂aug) to be smaller than B(θ̂std).
Here the augmented estimator has smaller parameter error along e2 and hence decreasing bias despite an increase along e1.\nIn summary, the minimum norm interpolant treats all unobserved dimensions in the null space “equally”. In contrast, the bias is dominated by the components of the parameter error along the top eigenvectors of Σ. This mismatch could lead to settings where decreasing the null space of the training points via augmentation increases the error along top eigenvectors of Σ. We formalize this intuition and present a general characterization below." }, { "heading": "4.1.2 GENERAL CHARACTERIZATIONS", "text": "We now study the biases of the standard and augmented estimators in general. Recall that Π⊥std and Π ⊥ aug are the projection matrices onto Null(Σstd) and Null(Σaug) respectively, where Σstd =X>stdXstd and Σaug =X > stdXstd+X > extXext. Since Null(Σaug)⊆Null(Σstd), we can decompose Π⊥stdθ? into orthogonal components v = Π⊥augθ ? and w= Π⊥stdΠaugθ\n?. Substituting this decomposition into Equation 4, we get the following exact characterization for when data augmentation increases bias.\nTheorem 1. The augmented estimator θ̂aug has larger bias i.e.,B(θ̂aug)>B(θ̂std) if and only if\nv>Σv<−2w>Σv, (5)\nwhere v=Π⊥stdΠaugθ ? andw=Π⊥augθ ?.\nThe proof of Theorem 1 is in Appendix B.1. We see that condition (5) depends on θ? which is typically unknown. Hence, we cannot determine apriori if a particular form of augmentation would be “safe” and not increase error (like random translations in CIFAR-10) or harmful (like random rotations in CIFAR-10). However, we can make the following statements about when data augmentation is safe (for any θ?) in some restricted settings.\n1. When Σ = I , the condition (5) is always met (since w ⊥ v) and hence data augmentation is always safe and never increases bias for any θ?. This suggests that data augmentation increases bias when there is a mismatch between the norm being minimized during interpolation and the norm of the parameter error that determines test error.\n2. WhenXext spans the entire nullspace of Σstd such that Σaug is invertible,w=0 for all θ? and data augmentation never increases bias.\n3. In the simple case where Xext is rank-one, data augmentation is safe for all θ? if and only if Π⊥stdXext is an eigenvector of Σ. See Appendix B.4 for a proof.\nFinally, we illustrate the safe augmentation directionsXext in the nullspace of Σstd for the simple 3-D problem discussed above for two different choices of Σ and a fixed θ? (Figure 2 (c), (d)). The safe augmentations lie in cones around the eigenvectors of Σ while the width and alignment of the cones depends on the alignment between θ? and the eigenvectors of Σ. As the eigenvalues of Σ become more skewed,\nthe space of safe augmentations shrinks. We present a dual perspective on Theorem 1 in Appendix B which characterizes the effect of augmentation on the bias in terms of properties of the true parameter θ?.\nLocal vs. global structure. Finally, we tie our analysis back to the spline staircase problem from Figure 1. The inputs can be appropriately rotated so that the cubic spline interpolant is the minimum Euclidean norm interpolant (as in Equations 2). Under this rotation, the different eigenvectors of Null(Σstd) measure either the fit in the “local” high frequency components or “global” low frequency components (See Figure 3). 
Any augmentation that encourages fitting local components in Null(Σstd) could lead to a worse fit of the global structure, leading to increased test error (See Figure 3). This is suggestive of a similar trade-off phenomenon in practice where adversarial training (with say `∞ perturbations) encourages neural networks to fit the high-frequency components of the signal while compromising on the overall global structure causing an increase in test error." }, { "heading": "4.2 VARIANCE OF MINIMUM NORM INTERPOLANTS", "text": "The main focus of this work is on the effect of data augmentation on the bias of minimum norm interpolants. For completeness, we present some conditions under which the data augmentation increases or decreases variance. For a more complete treatment, please refer to Appendix C. Let Xstd,ystd be the standard training data, with extra points Xext,yext and let Π⊥std denote the projection matrix onto Null(X>stdXstd).\nTheorem 2. For the minimum norm interpolants defined in Equation 2, the following hold.\n1. When Π⊥stdXext = 0 such that the extra points lie in the column space of original standard training data, V (θ̂aug)≤V (θ̂std).\n2. When Xext ⊥ Xstd, such that the extra points lie entirely in Null(X>stdXstd), we have V (θ̂aug)≥V (θ̂std).\nIn general, when Xext ∈ Null(X>stdXstd), we see that both the bias and variance of minimum norm interpolants could increase upon data augmentation, shedding some light on why data augmentation sometimes increases standard error in practice." }, { "heading": "5 EFFECT OF SIZE OF THE ORIGINAL TRAINING SET", "text": "In this section, we study how the effect of data augmentation varies as we vary the size of the original standard training set. We first briefly study this in the setting of minimum norm interpolation in linear regression. We then empirically evaluate the effect of adversarial training as we vary the number of original training samples in CIFAR-10 and observe that the empirical trends mirror the trends dictated by our analysis in linear regression." }, { "heading": "5.1 MINIMUM NORM INTERPOLATION—SMALL AND LARGE DATA REGIMES.", "text": "Without loss of generality, we assume that Σ, the population covariance of Px is invertible since the test error only depends on columnspace of Σ.\nLarge data regime. In the large data regime where the number of standard training points n→∞, the empirical covariance of the original training points Σstd =X>stdXstd≈nΣ is invertible with Π⊥std = 0. Both θ̂std and θ̂aug are unbiased (from Equation 4). From Theorem 2, variance of θ̂aug is never larger than\nthat of θ̂std. Putting together, the test error never increases upon augmentation in the large sample regime matching the common intuition that more data from the correct target distribution should never hurt.\nSmall data regime. In the small data regime, where n is much smaller than d, the empirical covariance Σstd could be far from invertible. As we increase the number of samples, the null space Null(Σstd) shrinks. Our analysis in Section 4 shows that for a fixed θ?, the magnitude of possible increase in both bias and variance decreases as Π⊥stdXext decreases (See Appendix E) for details). This suggests that as the size ofXstd increases, the increase in test error due to augmentation decreases. We run simulations of the spline staircase example from Figure 1 and find that this trend holds (Figure 4(a); see Appendix D)." 
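The trend above can be reproduced with a short simulation in the linear regression setting (a sketch under the assumptions of Section 3.2, not the exact spline setup of Figure 4(a); all constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100
sqrt_Sigma = np.diag(np.linspace(2.0, 0.1, d))   # Sigma = sqrt_Sigma @ sqrt_Sigma
theta_star = rng.normal(size=d)

def min_norm(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def risk(theta):
    e = sqrt_Sigma @ (theta - theta_star)
    return e @ e

for n in [5, 10, 20, 40, 80]:                    # size of the standard training set
    gaps = []
    for _ in range(50):
        X_std = rng.normal(size=(n, d)) @ sqrt_Sigma      # rows x ~ N(0, Sigma)
        y_std = X_std @ theta_star                        # noiseless targets
        X_ext = X_std + 0.05 * rng.normal(size=(n, d))    # small label-preserving perturbations
        y_ext = X_ext @ theta_star
        theta_std = min_norm(X_std, y_std)
        theta_aug = min_norm(np.vstack([X_std, X_ext]),
                             np.concatenate([y_std, y_ext]))
        gaps.append(risk(theta_aug) - risk(theta_std))
    # the harm from augmentation typically shrinks as n grows
    print(n, np.mean(gaps))
```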
}, { "heading": "5.2 EMPIRICAL OBSERVATIONS ON THE EFFECT OF SAMPLE SIZE.", "text": "Do the trends in linear regression also hold for classification with more complex models and real world datasets? For our empirical study, we focus on adversarial training (Madry et al., 2018), iterative data augmentations with adversarial `∞ perturbations of different magnitudes . We train a WideResNet-40-2 (Zagoruyko & Komodakis, 2016) on CIFAR-10 training set subsampled by varying amounts. We plot the difference in the test errors of the augmented and standard estimators as a function of training set size in Figure 4. We find that augmentation is less detrimental to test error with increase in sample size (Figure 4(b)), matching the trends predicted by our analysis of linear regression. Extrapolating the plot, we see that there should be no tradeoff between robustness and accuracy in the infinite data limit—contradicting the toy examples studied in (Tsipras et al., 2019; Zhang et al., 2019; Nakkiran, 2019) which we discussed in Section 2." }, { "heading": "6 MITIGATING THE INCREASE IN BIAS UPON AUGMENTATION", "text": "To this point, the paper focuses on understanding why data augmentation could increase test error by analysing the setting of minimum norm interpolation in linear regression. However, data augmentation (e.g., via adversarial perturbations) often comes with desirable benefits such as robustness of estimators. In this section, we leverage our understanding from linear regression to design a new estimator for interpolating augmented data that mitigates the increase in test error while preserving the benefits. To this end, we introduce X-regularization, and prove that X-regularization eliminates the increase in bias upon augmentation for noiseless linear regression (Section 6.1. In Section 6.2, we show that this estimator naturally generalizes to arbitrary losses and complex models and is closely connected to the classical semi-supervised self-training algorithm (Scudder, 1965). We empirically evaluate the performance of this general X-regularization on adversarial training in Section 6.3. In a nutshell, X-regularization causes a smaller increase in standard error while maintaining or simultaneously improving the robustness of neural networks trained on CIFAR-10." }, { "heading": "6.1 X-REGULARIZATION FOR LINEAR REGRESSION", "text": "Our development considers a stylized setting of interpolation in linear regression with noiseless observations from a linear model, y=x>θ?, where the dimension is (much) larger than the number of observations. Let θint-std interpolate the initial data, satisfying Xstdθint-std = ystd. We use θint-std to construct a new data augmented estimator θ̂x-aug that interpolates both the standard data and augmented extra data while satisfyingR(θ̂x-aug)≤R(θint-std). Given Σ and an initial interpolant θint-std, we propose the X-regularized data augmentation estimator\nθ̂x-aug∈argmin θ\n{ (θ−θint-std)>Σ(θ−θint-std) :Xstdθ=ystd, Xextθ=yext } . (6)\nThe X-regularized estimator θ̂x-aug optimizes for small error on the labeled data (Xstd, ystd) and (Xext,yext) while keeping the predictions of θ̂x-aug close to those of θint-std over unlabeled inputs drawn from Px. To motivate our estimator, recall our discussion on the effect of data augmentation on the bias (test error in noiseless case) of minimum norm interpolants in Section 4.1. 
By fitting extra dimensions of Xext, the data augmented estimator could have a larger parameter error than θint-std in important directions of Σ, and consequently higher test error (Figure 2). A natural strategy to mitigate this increase, then, is to fit Xext while staying close to θint-std as measured by Σ, which leads to the estimator defined in Equation 6. This intuition can be formalized to prove that the X-regularized interpolant θ̂x-aug never has higher test error than θint-std.

Theorem 3. Assume the noiseless linear model y=x>θ?. Let θint-std be an arbitrary interpolant of the standard data, i.e. Xstdθint-std = ystd. Let θ̂x-aug be the X-regularized interpolant (6). Then

R(θ̂x-aug) ≤ R(θint-std).

See Appendix F.1 for the proof. To provide some graphical intuition for the result, consider the spline interpolant θ̂std illustrated in Fig. 1, where Xext consists of local perturbations. The X-regularized estimator matches the standard interpolant θ̂std on points outside the training set, thereby capturing global structure, while simultaneously fitting local structure on the training set via Xext.

Note that our discussion on the effect of sample sizes in Section 5 suggests that a larger labeled standard training set would mitigate the drop in standard error due to augmentation. However, our development of X-regularization suggests that we only need unlabeled data, which is much cheaper to obtain in practice. Unlabeled data is often used to improve standard test error in the semi-supervised learning paradigm. Here, we motivate the use of unlabeled data to mitigate the possible increase in test error from data augmentation." }, { "heading": "6.2 ROBUST SELF-TRAINING AS X-REGULARIZATION FOR ROBUSTNESS", "text": "To motivate using X-regularization for general models and losses, note that the objective function of the X-regularized estimator in (6) can be rewritten in a more general form:

(θ−θint-std)>Σ(θ−θint-std) = EPx[(x>θ − x>θint-std)²] = EPx[`sq(fθ(x), fθint-std(x))],

where `sq is the squared loss between the predictions fθ(x) of the model and the predictions (pseudo-labels) fθint-std(x) of a given interpolant. A generalized version of X-regularization replaces `sq with a general loss ` that includes classification losses such as the logistic loss. Written this way, X-regularization regularizes the predictions of an augmented estimator towards the predictions of the standard interpolant, similarly to the classical semi-supervised self-training algorithm (Scudder, 1965).

The main motivation of our work is to fix the drop in standard test performance caused by augmentation strategies that seek to enforce robustness by adding label-preserving transformations of existing inputs. When augmentations take the form of a label-preserving transformation T, it is natural to consider transformations of both the labeled and unlabeled data as the set of “extra” points that constitute Xext. Generalizing the linear regression estimator of Equation 6 to arbitrary losses and transformation-based augmentations, we get

θ̂x-aug := argmin_θ { (1/N) ∑_{(x,y)∈[Xstd,ystd]} [`(fθ(x),y) + β ˜̀(fθ(T(x)),y)] + (λ/m) ∑_{i=1}^m [`(fθ(x̃i), fθ̂std(x̃i)) + β ˜̀(fθ(T(x̃i)), fθ̂std(x̃i))] }. (7)

We note that the general X-regularized estimator above is the Robust Self-Training (RST) algorithm proposed and studied recently by Carmon et al. (2019), when applied to arbitrary transformations. Variants of RST were also studied in (Najafi et al., 2019; Uesato et al., 2019).
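As a schematic illustration of how Equation 7 is instantiated with a neural network, the following is a PyTorch-style sketch of the objective for one batch. The model, the frozen standard model, the transformation T, and the use of hard pseudo-labels are illustrative choices rather than the authors' released implementation; the default λ=9, β=5 follow the heuristic reported in Appendix G.

```python
import torch
import torch.nn.functional as F

def x_reg_loss(model, std_model, x, y, x_unl, T, beta=5.0, lam=9.0):
    """One-batch sketch of the X-regularized / RST objective of Equation 7.

    model:     the estimator being trained
    std_model: frozen standard (non-augmented) estimator providing pseudo-labels
    T:         a label-preserving transformation, e.g. an adversarial perturbation
    """
    # labeled terms: fit the standard data and its augmentations
    labeled = (F.cross_entropy(model(x), y)
               + beta * F.cross_entropy(model(T(x)), y))

    # pseudo-labels from the standard estimator on unlabeled inputs
    with torch.no_grad():
        pseudo = std_model(x_unl).argmax(dim=1)

    # unlabeled terms: stay close to the standard estimator, also under T
    unlabeled = (F.cross_entropy(model(x_unl), pseudo)
                 + beta * F.cross_entropy(model(T(x_unl)), pseudo))

    return labeled + lam * unlabeled
```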
By deriving RST via X-regularization, we provide some theoretical justification for why RST would improve standard accuracy." }, { "heading": "6.3 EMPIRICAL EVALUATION OF X-REGULARIZATION AS RST", "text": "For our empirical investigation, we evaluate the effect of X-regularization on the commonly observed tradeoff between robustness and standard accuracy when augmenting the training set with transformations. We recall that X-regularization, when applied to transformation-based augmentations, leads to Robust Self-Training. Therefore, we refer to X-regularization and RST synonymously throughout this section. We instantiate the robust self-training estimator defined in Equation 7 on both PG-AT and TRADES; exact loss functions appear in Appendix F.2. All of our experiments are on CIFAR-10, and RST estimators use the 500K unlabeled images sourced from Tiny Images as in (Carmon et al., 2019) in addition to the labeled training set.

In our first experiment, we use the same settings as our experiments on the effect of sample size in CIFAR-10 in Section 5. We compare the error of RST+PG-AT and standard training in Figure 4. We find that RST+PG-AT has lower standard test error than standard training, while simultaneously fitting the training data robustly and achieving higher robustness (see Appendix G.2.1). We also see that the maximum gains from RST+PG-AT are in the small data regime, where adversarial training has the largest effect on standard error.

In our next experiment, we compare with other methods in the literature. We train a larger WRN-28-10 model with the entire labeled CIFAR-10 training set and 500K unlabeled images. Table 1(left) summarizes the results. While there is still a drop in accuracy compared to standard training, RST has higher standard accuracy than the vanilla counterparts without sacrificing robustness, for both PG-AT and TRADES. The gains are comparable to gains from other measures to improve the tradeoff between robustness and accuracy such as Interpolated Adversarial Training and Neural Architecture Search. RST could be integrated with the above training algorithms to see further gains; we leave this to future work.

Finally, we test the effect of RST on a different family of perturbations. We consider adversarial (worst of 10) and random augmentations using simultaneous rotations and translations of the input image. Table 1(right) presents the results. While the augmented estimators marginally improve standard accuracy for these perturbations, applying RST increases both robust and standard accuracies beyond those of the augmented estimator in both cases. This shows that RST is beneficial even when data augmentation does not increase standard test error." }, { "heading": "7 DISCUSSION", "text": "Semi-supervised setting. General X-regularization leverages unlabeled data to mitigate the possible harmful effect of data augmentation. The traditional setup of semi-supervised learning involves only one objective: improve the standard accuracy on the underlying population. We revisit the semi-supervised setting with a different focus. For several applications, standard supervised deep learning provides highly accurate classifiers, but they are surprisingly brittle. Attempts to improve robustness typically lower accuracy. Training robust and accurate classifiers remains an open challenge, and semi-supervised learning is emerging as a promising approach.
Recent work (Carmon et al., 2019; Najafi et al., 2019; Uesato et al., 2019) has studied the benefits of semi-supervised learning in improving robustness. In our work, we bolster this line of work by demonstrating that semi-supervised learning can simultaneously improve the accuracy, while maintaining robustness.\nSelf-training. In this work, we study X-regularization which is closely related to self-training, perhaps the oldest semi-supervised learning algorithm (Scudder, 1965). In the traditional setting of improving standard accuracy, self-training has shown some success, but other approaches perform significantly better; see survey (Oliver et al., 2018). However, in the regime where we care about both robustness and accuracy, we see that self-training based approaches such as X-regularization offer significant benefits. We provide a detailed comparison of X-regularization to other semi-supervised learning algorithms in Appendix H. Variants of self-training are also gaining prominence in the related but different setting of unsupervised domain adaptation. Here, the unlabeled data is from the “target” distribution, while the labeled data is from a different “source” distribution and the goal is to perform well on the target distribution.\nInterpolation in linear regression. Analysis of the interpolation regime has recently gained prominence with the observation that neural networks obtain zero training error. Previous works (Hastie et al., 2019; Bartlett et al., 2019; Belkin et al., 2019) study the performance of minimum norm interpolants in overparameterized linear regression in an attempt to explain the generalization properties of neural networks that are not explained by the classical perspective on interpolation and overfitting. In our work, we show that the same overparameterized setting also sheds light on the empirical observation that data augmentation sometimes helps and sometimes harms test performance. In contrast, in the classical underparameterized regime, data augmentation never harms test performance as common statistical intuition would suggest. Studying interpolation in the overparameterized regime even in simple settings such as linear regression thus seems to be valuable in understanding the properties of neural networks in practice.\nConclusion. We studied adversarial training through the lens of data augmentation with the goal of training robust and accurate classifiers. We analyzed general data augmentation in a stylized setting and proved that unlabeled data can eliminate possible increase in test error. This motivated a general estimator based on self-training combined with adversarial training that shows promise in improving both the accuracy and robustness of neural networks in practice. While using unlabeled data via simple self-training has shown to improve both accuracy and robustness, how to best utilize unlabeled data in this context is an open question. Further, can we obtain highly robust and accurate networks by simply using a large amount of unlabeled data, or do we need further innovations in neural network architectures and training?" }, { "heading": "A TRANSFORMATIONS TO HANDLE ARBITRARY MATRIX NORMS", "text": "Consider a more general minimum norm estimator of the following form. Given inputs X and corresponding targets Y as training data, we study the interpolation estimator,\nθ̂=argmin θ\n{ θ>Mθ :Xθ=Y } , (8)\nwhere M is a positive definite (PD) matrix that incorporates prior knowledge about the true model. 
For simplicitly, we present our results in terms of the `2 norm (ridgeless regression) as defined in Equation 8. However, all our results hold for arbitrary M–norms via appropriate rotations. Given an arbitrary PD matrix M , the rotated covariates x←M−1/2x and rotated parameters θ←M1/2θ maintain Y =Xθ+σN (0,I) and theM -norm of parameters simplifies to ‖θ‖2." }, { "heading": "B BIAS OF MINIMUM NORM INTERPOLANTS", "text": "" }, { "heading": "B.1 PROOF OF THEOREM 1", "text": "Inequality (5) follows from\nB(θ̂aug)−B(θ̂std)=(θ?−θ̂aug)>Σ(θ?−θ̂aug)−(θ?−θ̂std)>Σ(θ?−θ̂std) =(Π⊥augθ ?)>ΣΠ⊥augθ ?−(Π⊥stdθ?)>ΣΠ⊥stdθ?\n=w>Σw−(w+v)>Σ(w+v) =−2w>Σv−v>Σv (9)\nby decomposition of Π⊥stdθ ?=v+w where v=Π⊥stdΠaugθ ? andw=Π⊥stdΠ ⊥ augθ ?. We also want to note that the bias difference does scale with ‖θ?‖2." }, { "heading": "B.2 BIAS INCREASE REQUIRES COMPLEX TRUE ESTIMATORS", "text": "A dual perspective on Theorem 1 leads the following proposition that characterizes the properties of the true function θ? that leads to harmful augmentations.\nProposition 1. For a givenXstd,Xext,Σ, a bias increase ofB(θ̂aug)−B(θ̂std)=c>0 via augmentation withXext is possible only if θ? is sufficiently more complex than θ̂std in the `2 norm, i.e.\n‖θ?‖22−‖θ̂std‖22>γc (10) for some scalar γ>0 that depends onXstd,Xext,Σ." }, { "heading": "B.2.1 PROOF OF INEQUALITY (10)", "text": "The proof of inequality (10) is based on the following two lemmas that are also useful for characterization purposes in Corollary 1. Lemma 1. If a PSD matrix Σ has non-equal eigenvalues, one can find two unit vectorsw,v for which the following holds\nw>v=0 and w>Σv 6=0 (11) Hence, there exists a combination of original and augmentation dataset Xstd, Xext such that condition (11) holds for two directions v∈Col(Π⊥stdΠaug) andw∈Col(Π⊥stdΠ⊥aug)=Col(Π⊥aug).\nNote that neither w nor v can be eigenvectors of Σ in order for both conditions in equation (11) to hold. Given a population covariance, fixed original and augmentation data for which condition (11) holds, we can now explicitly construct θ? for which augmentation hurts bias. Lemma 2. Assume Σ, Xstd, Xext are fixed. Then condition (11) holds for two directions v ∈ Col(Π⊥stdΠaug) andw∈Col(Π⊥stdΠ⊥aug) iff there exists a θ? such thatB(θ̂aug)−B(θ̂std)≥c for some c>0. Furthermore, the `2 norm of θ? needs to satisfy the following lower bounds with c1 :=‖θ̂aug‖2−‖θ̂std‖2\n‖θ?‖2−‖θ̂aug‖2≥β1c1+β2 c2\nc1\n‖θ?‖2−‖θ̂std‖2≥(β1+1)c1+β2 c2\nc1 (12)\nwhere βi are constants that depend onXstd,Xext,Σ.\nInequality (10) follows directly from the second statement of Lemma 2 by minimizing the bound (12) with respect to c1 which is a free parameter to be chosen during construction of θ? (see proof of Lemma (2). The minimum is attained for c1 =2 √ (β1+1)(β2c2). We hence conclude that θ? needs to be sufficiently more complex than a good standard solution, i.e. ‖θ?‖22−‖θ̂std‖22>γcwhere γ>0 is a constant that depends on theXstd,Xext." }, { "heading": "B.3 PROOF OF TECHNICAL LEMMAS", "text": "In this section we prove the technical lemmas that are used to prove Theorem 1." }, { "heading": "B.3.1 PROOF OF LEMMA 2", "text": "Any vector Π⊥stdθ ∈ Null(Σstd) can be decomposed into orthogonal components Π⊥stdθ = Π⊥stdΠ ⊥ augθ + Π ⊥ stdΠaugθ. Using the minimum-norm property, we can then always decompose the (rotated) augmented estimator θ̂aug∈Col(Π⊥aug)=Col(Π⊥stdΠ⊥aug) and true parameter θ? 
by\nθ̂aug = θ̂std+ ∑ vi∈ext ζivi\nθ?= θ̂aug+ ∑ wj∈rest ξjwj ,\nwhere we define “ext” as the set of basis vectors which span Col(Π⊥stdΠaug) and respectively “rest” for Null(Σaug). Requiring the bias increase to be some constant c>0 can be rewritten using identity (9) as follows\nB(θ̂aug)−B(θ̂std)=c ⇐⇒ ( ∑ vi∈ext ζivi) >Σ( ∑ vi∈ext ζivi)+c=−2( ∑ wj∈rest ξjwj)Σ( ∑ vi∈ext ζivi)\n⇐⇒ ( ∑ vi∈ext ζivi) >Σ( ∑ vi∈ext ζivi)+c=−2 ∑ wj∈rest,vi∈ext ξjζiw > j Σvi (13)\nThe left hand side of equation (13) is always positive, hence it is necessary for this equality to hold with any c>0, that there exists at least one pair i,j such that w>j Σvi 6= 0 and one direction of the iff statement is proved.\nFor the other direction, we show that if there exist v ∈ Col(Π⊥stdΠaug) and w ∈ Col(Π⊥stdΠ⊥aug) for which condition (11) holds (wlog we assume that thew>Σv<0) we can construct a θ? for which the inequality (5) in Theorem 1 holds as follows:\nIt is then necessary by our assumption that ξjζiw>j Σvi>0 for at least some i,j. We can then set ζi>0 such that ‖θ̂aug−θ̂std‖2 =‖ζ‖2 =c1>0, i.e. that the augmented estimator is not equal to the standard estimator (else obviously there can be no difference in bias and equality (13) cannot be satisfied for any desired bias increase c>0).\nThe choice of ξ minimizing ‖θ? − θ̂aug‖2 = ∑ j ξ 2 j that also satisfies equation (13) is an appropriately scaled vector in the direction of x = W>ΣV ζ where we define W := [w1, ... ,w|rest|] and V :=[v1,...,v|ext|]. Defining c0 =ζ>V >ΣV ζ for convenience and then setting\nξ=− c0+c 2‖x‖22 x (14)\nwhich is well-defined since x 6= 0, yields a θ? such that augmentation hurts. It is thus necessary for B(θ̂aug)−B(θ̂std)=c that∑\nj\nξ2j = (c0+c)\n2 4‖W>ΣV ζ‖2 = (ζ>V >ΣV ζ+c)2 4ζ>V >ΣWW>ΣV ζ\n≥ (ζ >V >ΣV ζ)2\n4ζ>V >ΣWW>ΣV ζ +\nc2\n4ζ>V >ΣWW>ΣV ζ\n≥ c1 4\nλ2min(V >ΣV )\nλ2max(W >ΣV )\n+ c2\n4c1λ2max(W >ΣV )\n.\nBy assuming existence of i,j such that ξjζiw>j Σvi 6=0, we are guaranteed that λ2max(W>ΣV )>0.\nNote due to construction we have ‖θ?‖22 = ‖θ̂std‖22 + ∑ iζ 2 i + ∑ jξ 2 j and plugging in the choice of ξj in equation (14) we have\n‖θ?‖22−‖θ̂std‖22≥c1 [ 1+ λ2min(V >ΣV )\n4λ2max(W >ΣV )\n] +\nc2\n4λ2max(W >ΣV )\n1 c1 .\nSetting β1 = [ 1+ λ2min(V >ΣV )\n4λ2max(W >ΣV ) ] , β2 = 14λ2max(W>ΣV ) yields the result." }, { "heading": "B.3.2 PROOF OF LEMMA 1", "text": "Let λ1,...,λm be the m non-zero eigenvalues of Σ and ui be the corresponding eigenvectors. Then choose v to be any combination of the eigenvectors v=UβwhereU=[u1,...,um] where at leastβi,βj 6=\n0 for λi 6=λj . We next constructw=Uα by choosingα as follows such that the inequality in (11) holds:\nαi= βj\nβ2i +β 2 j\nαj= −βi\nβ2i +β 2 j\nand αk=0 for k 6= i,j. Then we have that α>β=0 and hencew>v=0. Simultaneously w>Σv=λiβiαi+λjβjαj\n=(λi−λj) βiβj β2i +β 2 j 6=0\nwhich concludes the proof of the first statement.\nWe now prove the second statement by constructing Σstd = X>stdXstd,Σext = X > extXext using w,v. We can then obtain Xstd,Xext using any standard decomposition method to obtain Xstd,Xext. We construct Σstd,Σext using w, v. Without loss of generality, we can make them simultaneously diagonalizable. We construct a set of eigenvectors that is the same for both matrices paired with different eigenvalues. Let the shared eigenvectors include w,v. Then if we set the corresponding eigenvalues λw(Σext) = 0,λv(Σext)> 0 and λw(Σstd) = 0,λv(Σstd) = 0, then λw(Σaug) = 0 such that w∈Col(Π⊥stdΠ⊥aug) and v∈Col(Π⊥stdΠaug). This shows the second statement. With this, we can design a θ? 
for which augmentation hurts as in Lemma 2." }, { "heading": "B.4 CHARACTERIZATION COROLLARY 1", "text": "A simpler case to analyze is when we only augment with one extra data point. The following corollary characterizes which single augmentation directions lead to higher prediction error for the augmented estimator. Corollary 1. The following characterizations hold for augmentation directions that do not cause the bias of the augmented estimator to be higher than the original estimator.\n(a) (in terms of ratios of inner products) For a given θ?, data augmentation does not increase the bias of the augmented estimator for a single augmentation direction xext if\nx>extΠ ⊥ stdΣΠ ⊥ stdxext\nx>extΠ ⊥ stdxext\n−2(Π ⊥ stdxext) >ΣΠ⊥stdθ ?\nx>extΠ ⊥ stdθ\n? ≤0 (15)\n(b) (in terms of eigenvectors) Data augmentation does not increase bias for any θ? if Π⊥stdxext is an eigenvector of Σ. However if one augments in the direction of a mixture of eigenvectors of Σ with different eigenvalues, there exists θ? such that augmentation hurts.\n(c) (depending on well-conditioning of Σ) If λmax(Σ)λmin(Σ) ≤2 and Π ⊥ stdθ ? is an eigenvector of Σ, then no augmentations xext increase bias.\nThe form in Equation (15) compares ratios of inner products of Π⊥stdxext and Π ⊥ stdθ ? in two spaces: the one in the numerator is weighted by Σ whereas the denominator is the standard inner product. Thus, if Σ scales and rotates rather inhomogeneously, then augmenting with xext may hurt bias. Here again, if Σ=γI for γ>0, then the condition must hold." }, { "heading": "B.4.1 PROOF OF COROLLARY 1 (A)", "text": "Note that for a single augmentation point Xext = x>ext, the orthogonal decomposition of Π ⊥ stdθ ? into Col(Π⊥aug) and Col(Π ⊥ stdΠaug) is defined by v = Π⊥stdxext > θ?\n‖Π⊥stdxext‖2 Π⊥stdxext and w = Π ⊥ stdθ ?− v respectively. Plugging back into into identity (9) then yields the following condition for safe augmentations:\n2(v−Π⊥stdθ?)>Σv−v>Σv≤0 (16) v>Σv−2(Π⊥stdθ?)>Σv≤0\n⇐⇒Π⊥stdxext > ΣΠ⊥stdxext≤2(Π⊥stdθ?)>ΣΠ⊥stdxext · ‖Π⊥stdxext‖2\nΠ⊥stdxext > θ?\nRearranging the terms yields inequality (15).\nSafe augmentation directions for specific choices of θ? and Σ are illustrated in Figure 2." }, { "heading": "B.4.2 PROOF OF COROLLARY 1 (B)", "text": "Assume that Π⊥stdxext is an eigevector of Σ with eigenvalue λ>0. We have\nx>extΠ ⊥ stdΣΠ ⊥ stdxext\nx>extΠ ⊥ stdxext\n−2(Π ⊥ stdxext) >ΣΠ⊥stdθ ?\nx>extΠ ⊥ stdθ\n? =−λ<0\nfor any θ?. Hence by Corollary 1 (a), the bias doesn’t increase by augmenting with eigenvectors of Σ for any θ?.\nWhen the single augmentation direction v is not an eigenvector of Σ, by Lemma 1 one can findw such that w>Σv 6= 0. The proof in Lemma 1 gives an explicit construction for w such that condition (11) holds and the result then follows directly by Lemma 2." }, { "heading": "B.4.3 PROOF OF COROLLARY 1 (C)", "text": "Suppose ΣΠ⊥stdθ ?=λΠ⊥stdθ ? for some λmin(Σ)≤λ≤λmax(Σ). Then starting with the expression (15), x>extΠ ⊥ stdΣΠ ⊥ stdxext\nx>extΠ ⊥ stdxext\n−2(Π ⊥ stdxext) >ΣΠ⊥stdθ ?\nx>extΠ ⊥ stdθ\n? = x>extΠ ⊥ stdΣΠ ⊥ stdxext\nx>extΠ ⊥ stdxext\n−2λ\n≤λmax(Σ)−2λ<0 by applying λmax(Σ)λmin(Σ) ≤2. Thus when Π ⊥ stdθ\n? is an eigenvector of Σ, there are no augmentations xext that increase the bias.\nC VARIANCE OF MINIMUM NORM INTERPOLANTS\nIn this section, we consider the case where the noise is non-zero, and compute the variances of the two estimators of interest: the standard estimator θ̂std and data augmented estimator θ̂aug. 
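The variance expressions that appear in the proof below, V(θ̂) = σ²tr((X>X)†Σ) for a single minimum norm interpolant, can be sanity-checked by Monte-Carlo simulation; a minimal sketch with hypothetical constants:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma = 30, 10, 0.5
Sigma = np.diag(np.linspace(1.0, 0.01, d))   # population covariance
theta_star = rng.normal(size=d)
X = rng.normal(size=(n, d))                  # fixed training inputs

# Monte-Carlo estimate of the variance over the noise in the targets
samples = []
for _ in range(2000):
    y = X @ theta_star + sigma * rng.normal(size=n)
    samples.append(np.linalg.lstsq(X, y, rcond=None)[0])
cov_hat = np.cov(np.array(samples).T)
mc_variance = np.trace(cov_hat @ Sigma)

# closed form V = sigma^2 tr((X^T X)^dagger Sigma), as used in the proof below
closed_form = sigma**2 * np.trace(np.linalg.pinv(X.T @ X) @ Sigma)
print(mc_variance, closed_form)              # should agree up to Monte-Carlo error
```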
The following theorem provides a general characterization of the relation between variance of the standard estimator and variance of the augmented estimator. Theorem 2 is a corollary of this general result that we present first. Theorem 4 (Variance). The difference in the variances of a standard and augmented estimator can be expressed as follows:\n1\nσ2 (V (θ̂aug)−V (θ̂std))=tr\n( ΣX†ext(X † ext) >)︸ ︷︷ ︸\nT1: Variance increase\n−tr ( ΣΣ†stdX > ext(I+XextΣ † stdX > ext) −1XextΣ † std )︸ ︷︷ ︸ T2: Variance reduction , (17)\nwhereXext def = Π⊥stdXext, is the component ofXext in the null space of Σstd.\nProof. Recall from (3) that the V (θ̂)=tr(Cov(θ̂ |XstdXext)Σ). For the minimum norm interpolation estimators θ̂std and θ̂aug in Equation 2, we have the following expressions for the variances of the estimators.\nV (θ̂std)=σ 2tr ( Σ†stdΣ ) ,\nV (θ̂aug)=σ 2tr ( Σ†augΣ ) ,\nwhere Σstd =X>stdXstd and Σaug =X > stdXstd+X > extXext. Note that since Σstd,Σaug are unnormalized, the quantities Σ†std and Σ † aug decay with nwith the variances V (θ̂std),V (θ̂aug) also decay with n as expected. In order to compare V (θ̂std) and V (θ̂aug), we need to compare Σ † std and Σ † aug. Iorder to do this, we leverage the result from Kovanic (1979) on the pseudo-inverse of the sum of two symmetric matrices:\n(Σstd+X > extXext) †=Σ†std−Σ†stdX>ext(I+XextΣ†stdX>ext)−1XextΣ†std+X†ext(X†ext)>, where recall that Xext is the component of Xext in the null space of Σstd. Multiplying each term by Σ and using linearity of trace, we get the required expression.\nProof of Theorem 2. Theorem 2 follows directly from the general result above (Theorem 4). Note that the terms T1 and T2 are traces of PSD matrices and hence non-negative, and capture the magnitude of variance increase and variance reduction respectively. From Theorem 4, we see that (i) ifXext is entirely in the span of Σstd makingXext =0, T1 =0 making V (θ̂aug)≤V (θ̂std) (ii) On the other extreme, ifXext is entirely in the null space with Σ † stdXext =0, T2 =0 and hence V (θ̂aug)≥V (θ̂std)." }, { "heading": "D DETAILS FOR SPLINE STAIRCASE", "text": "We describe the data distribution, augmentations, and model details for the spline experiment in Figure 4 and toy scenario in Figure 1. Finally, we show that we can construct a simplified family of spline problems where the ratio between test errors of the augmented and standard estimators increases unboundedly as the number of stairs." }, { "heading": "D.1 TRUE MODEL", "text": "We consider a finite input domain\nT ={0, ,1,1+ ,...,s−1,s−1+ } (18) for some integer s corresponding to the total number of “stairs” in the staircase problem. Let Tline⊂T ={0,1,...,s−1}. We define the underlying function f? :R 7→R as f?(t)=btc. This function takes a staircase shape, and is linear when restricted to Tline. Sampling training dataXstd We describe the data distribution in terms of the one-dimensional input t, and by the one-to-one correspondence with spline basis features x=X(t), this also defines the distribution of spline features x∈X . Letw∈∆s define a distribution over Tline where ∆s is the probability simplex of dimension s. We define the data distribution with the following generative process for one sample t. First, sample a point i from Tline according to the categorical distribution described by w, such that i∼Categorical(w). Second, sample t by perturbing iwith probability δ such that\nt= { i w.p. 1−δ i+ w.p. δ.\nThe sampled t is inTline with probability 1−δ andT cline with probability δ, where we choose δ to be small. 
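The generative process above is straightforward to write out; a minimal sketch (assuming a uniform w over Tline for concreteness, with ε and δ as hypothetical values):

```python
import numpy as np

rng = np.random.default_rng(0)
s, eps, delta = 10, 0.1, 0.05
w = np.ones(s) / s                       # distribution over T_line = {0, ..., s-1}

def f_star(t):
    return np.floor(t)                   # the staircase ground truth f*(t)

def sample_train(n):
    i = rng.choice(s, size=n, p=w)       # first step: i ~ Categorical(w)
    perturbed = rng.random(n) < delta    # second step: perturb with probability delta
    t = i + eps * perturbed
    return t, f_star(t)

t_train, y_train = sample_train(100)
```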
Sampling augmented points Xext For each element ti in the training set, we augment with T̃i=[u\nu.a.r∼ B(ti)], an input chosen uniformly at random fromB(ti)={btic,btic+ }. Recall that in our work, we consider data augmentation where the targets associated with the augmented points are from the ground truth oracle. Notice that by definition, f?(t̃i) = f?(ti) for all t̃∈B(ti), and thus we can set the augmented targets to be ỹi=yi. This is similar to random data augmentation in images (Yaeger et al., 1996; Krizhevsky et al., 2012), where inputs are perturbed in a way that preserves the label." }, { "heading": "D.2 SPLINE MODEL", "text": "We parameterize the spline predictors as fθ(t) = θ>X(t) where X : R→Rd is the cubic B-spline feature mapping (Friedman et al., 2001) and the norm of fθ(t) can be expressed as θ>Mθ for a matrix M that penalizes a large second derivative norm where [M ]ij = ∫ X ′′ i (u)X ′′\nj (u)u. . Notice that the splines problem is a linear regression problem from Rd to R in the feature domain X(t), allowing direct application of Theorem 1. As a linear regression problem, we define the finite domain as X ={X(t) : t∈T } containing 2s elements in Rd. There is a one-to-one correspondence between t and X(t), such thatX−1 is well-defined. We define the features that correspond to inputs in Tline asXline = {x :X−1(x)∈Tline}. Using this feature mapping, there exists a θ? such that fθ?(t)=f?(t) for t∈T . Our hypothesis class is the family of cubic B-splines as defined in (Friedman et al., 2001). Cubic B-splines are piecewise cubic functions, where the endpoints of each cubic function are called the knots. In our example, we fix the knots to be [0, ,1,...,s−1,s−1+ ], which places a knot on every point in T . This ensures that the function class contains an interpolating function on all t∈T , i.e. for some θ?,\nfθ?(t)=θ ?>X(t)=f?(t)=btc.\nWe solve the minimum norm problem\nθ̂std =argmin θ {θ>Mθ :Xstdθ=ystd} (19)\nfor the standard estimator and the corresponding augmented problem to obtain the augmented estimator." }, { "heading": "D.3 EVALUATING COROLLARY 1 (A) FOR SPLINES", "text": "We now illustrate the characterization for the effect of augmentation with different single points in Theorem 1 (a) on the splines problem. We assume the domain to T as defined in equation 18 with s=10 and our training data to beXstd ={X(t) : t∈{0,1,2,3,4}}. Let local perturbations be spline features for t̃ /∈Tline where t̃= t+ is away from some t∈ {0,1,2,3,4} from the training set. We examine all possible single augmentation points in Figure 5 (a) and plot the calculated predictive test error difference as defined in equation (16). Figure 5 shows that augmenting with an additional point from {X(t) : t∈Tline} does not affect the bias, but adding any perturbation point in {X(t̃) : t̃∈{2.5,3.5,4.5}} where t̃ /∈ Tline increases the error significantly by changing the direction in which the estimator extrapolates. Particularly, local augmentations near the boundary of the original dataset hurt the most while other augmentations do not significantly affect the bias of the augmented estimator." 
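The quantity underlying Figure 5, the bias difference of Equation (4) for a candidate single augmentation point, takes only a few lines of linear algebra to compute; a sketch with hypothetical inputs:

```python
import numpy as np

def null_projector(X):
    """Projection onto Null(X^T X) for a matrix of inputs X."""
    G = X.T @ X
    return np.eye(X.shape[1]) - np.linalg.pinv(G) @ G

def bias(theta_star, X, Sigma):
    v = null_projector(X) @ theta_star
    return v @ Sigma @ v                 # theta*^T P Sigma P theta*, as in Equation (4)

def bias_increase(theta_star, X_std, x_ext, Sigma):
    """B(theta_aug) - B(theta_std) for a single augmentation direction x_ext."""
    X_aug = np.vstack([X_std, x_ext[None, :]])
    return bias(theta_star, X_aug, Sigma) - bias(theta_star, X_std, Sigma)
```

Sweeping bias_increase over candidate augmentation points is exactly the computation plotted in Figure 5.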
}, { "heading": "D.3.1 LOCAL AND GLOBAL STRUCTURE IN THE SPLINE STAIRCASE", "text": "In the spline staircase, the local perturbations can be thought of as fitting high frequency noise in the function space, where fitting them causes a global change in the function.\nTo see this, we transform the problem to minimum `2 norm linear interpolation using features XM (t) =X(t)M\n−1/2 so that the results from Section 4.1.2 apply directly. Let Σ be the population covariance of XM for a uniform distribution over the discrete domain consisting of s stairs and their perturbations (Figure 1). Let Q= [qi]2si=1 be the eigenvectors of Σ in decreasing order of their corresponding eigenvalues. The visualization in Figure 3 shows that qi are wave functions in the original input space; the “frequency” of the wave increases as i increases.\nSuppose the original training set consists of two points,Xstd =[XM (0),XM (1)]>. We study the effect of augmenting pointxext in terms of qi above. First, we find that the first two eigenvectors corresponding to linear functions satisfy Π⊥stdq1 =Π ⊥ stdq2 =0. Intuitively, this is because the standard estimator is linear. For ease of visualization, we consider the 2D space in Null(Σ) spanned by Π⊥stdq3 (global direction, low frequency) and Π⊥stdq2s (local direction, high frequency). The matrix Πlg = [Π ⊥ stdq3, Π ⊥ stdq2s] > projects onto this space. Note that the same results hold when projecting onto all Π⊥stdqi in Null(Σ).\nIn terms of the simple 3-D example in Section 4.1.1, the global direction corresponds to the costly direction with large eigenvalue, as changes in global structure heavily affect the predictive test error. Figure 6 plots the projections Πlgθ? and ΠlgXext for different Xext. When θ? has high frequency variations and is complex, Πlgθ?=(θ?−θ̂std) is aligned with the local dimension. For xext immediately local to training points, the projection Πlgxext (orange vector in Figure 6) has both local and global components. Augmenting these local perturbations introduces error in the global component. For other xext farther from training points, Πlgxext (blue vector in Figure 6) is almost entirely global and perpendicular to θ?− θ̂std, leaving bias unchanged. Thus, augmenting data close to original data cause estimators to fit local components at the cost of the costly global component which changes overall structure of the predictor like in Figure 1(middle). The choice of inductive bias in theM–norm being minimized results in eigenvectors of Σ that correspond to local and global components, dictating this tradeoff." }, { "heading": "D.4 DATA AUGMENTATION CAN BE QUITE PAINFUL FOR SPLINES", "text": "We construct a family of spline problems such that as the number the augmented estimator has much higher error than the standard estimator. We assume that our predictors are from the full family of cubic splines." }, { "heading": "Sampling distribution", "text": "We define a modified domain with continuous intervals T =∪s−1t=0 [t,t+ ]. Considering only swhich is a multiple of 2, we sample the original data set as described in Section D.1 with the following probability massw:\nw(t)= { 1−γ s/2 t<s/2,t∈Tline γ s/2 t≥s/2,t∈Tline.\n(20)\nfor γ ∈ [0,1). We define a probability distribution PT on T for a random variable T by setting T =Z+S(Z) whereZ∼Categorical(w) and theZ-dependent perturbation S(z) is defined as\nS(z)∼ {\nUniform([z,z+ ]) w.p. δ z, w.p. 1−δ. (21)\nWe obtain the training datasetXstd ={X(t1),...,X(tn)} by sampling ti∼PT ." 
}, { "heading": "Augmenting with an interval", "text": "Consider a modified augmented estimator for the splines problem, where for each point ti we augment with the entire interval [btic,btic+ ] with ∈ [0,1/2) and the estimator is enforced to output fθ̂(x)=yi=btic for all x in the interval [btic,btic+ ]. Additionally, suppose that the ratio s/n=O(1) between the number of stairs s and the number of samples n is constant.\nIn this simplified setting, we can show that the test error of the augmented estimator grows while the test error of the standard estimator decays to 0.\nTheorem 5. Let the setting be defined as above. Then with the choice of δ= log(s 7)−log(s7−1)\ns and γ=c/s for a constant c∈ [0,1), the ratio between test errors is lower bounded as\nR(θ̂aug) R(θ̂std) =Ω(s2) (22)\nwhich goes to infinity as s→∞. Furthermore,R(θ̂std)→0 as s→∞.\nProof. We first lower bound the test error of the augmented estimator. DefineE1 as the event that only the lower half of the stairs is sampled, i.e. {t : t<s/2}, which occurs with probability (1−γ)n. Let t?=maxibtic be the largest “stair” value seen in the training set. Note that the min-norm augmented estimator will extrapolate with zero derivative for t ≥ maxibtic. This is because on the interval [t?,t?+ ], the augmented estimator is forced to have zero derivative, and the solution minimizing the second derivative of the prediction continues with zero derivative for all t≥ t?. In the event E1, t?≤ s/2−1, where t∗= s/2−1 achieves the lowest error in this event. As a result, on the points in the second half of the staircase, i.e. t={t∈T : t> s2−1}, the augmented estimator incurs large error:\nR(θ̂aug |E1)≥ s∑\nt=s/2\n(t−(s/2−1))2 · γ s/2\n= s/2∑ t=1 t2 · γ s/2 = γ 6 (s2+2s+1).\nTherefore the expected risk of the augmented estimator is bounded by\nR(θ̂aug)≥R(θ̂aug |E1)P (E1)= γ\n6 (s2+2s+1)(1−γ)n\n≥ 1 6 γ(1−γn)(s2+2s+1) =Ω( c−c2 s (s2+2s+1))=Ω(s)\nwhere in the first line, we note that the error on each interval is the same and the probability of each interval is (1−δ) γs/2 + δ · γ s/2 = γ s/2 .\nNext we upper bound the test error of the standard estimator. DefineE2 to be the event where all points are sampled from Tline, which occurs with probability (1−δ)n. In this case, the standard estimator is linear and fits the points on Tline with zero error, while incurring error for all points not in Tline. Note that the probability density of sampling a point not in Tline is either δ · 1−γ s/2 or δ · γ s/2 , which we upper bound as δ · 1s/2 .\nR(θ̂std |E2)= s−1∑ t=1 δ · 1 s/2 ∫ 0 u2du= δ · 1 s/2 O(s 3)\n=O(δ)\nTherefore for eventE2, the expected error is bounded as\nR(θ̂std |E2)P (E2)=O(δ)(1−δ)n\n=O(δ)e−δn\n=O(δ · s 7−1 s7 ) =O(δ)=O( log(s7)−log(s7−1)\ns )=O(1/s)\nsince log(s7)−log(s7−1)≤1 for s≥2. For the complementary eventEc2, note that cubic spline predictors can grow only asO(t3), with errorO(t6). Therefore the expected error for caseEc2 is bounded as\nR(θ̂std |Ec2)P (Ec2)≤O(t6)(1−e−δn)\n=O(t6)O( 1\ns7 )=O(1/s)\nPutting the parts together yields\nR(θ̂std)=R(θ̂std |E2)P (E2)+R(θ̂std |Ec2)P (Ec2) ≤O(1/s)+O(1/s)=O(1/s).\nThus overall,R(θ̂std)=O(1/s) and combining the bounds yields the result." }, { "heading": "E EFFECT OF SAMPLE SIZE ON ERROR INCREASE VIA AUGMENTATION", "text": "In this section, we discuss what our analysis of the effect of augmentation on the error of mininum interpolants says about the trends with respect to varying sample sizes of the standard training set (Xstd,ystd).\nTrends in variance. 
We refer the reader to the precise expression for the difference in variance provided in Theorem 4. Let us consider a case where data augmentation causes an increase in variance. For simplicity,Xext⊥Xstd, across all sample sizes in the small sample regime. For a fixedXext, we see that the magnitude of variance increase is governed by Π⊥stdXext which decreases as we get more standard training points.\nTrends in bias. Recall the expressions for B(θ̂std) and B(θ̂aug) from Equation 4. Using the same notation as that of Theorem 5, we have the following expression for the amount of bias increase. Let v=Π⊥stdΠaugθ ? and v=Π⊥augθ ?. We have,\nB(θ̂aug)−B(θ̂std)=−v>Σv−2w>Σv, (23)\nwhere w>Σv is a negative quantity when data augmentation causes an increase in bias. Recall that we are in the small sample regime where Σstd is not invertible for a range of sample sizes. Suppose we augment withXext such thatXext∈Null(Σstd) in this regime of interest. In this case, we can write v = Π⊥stdΠ ⊥ augθ\n? = u>θ?. For a fixed problem setting θ?, we see that v is fixed. Let us now look at w=Π⊥augθ ?. Recall that Π⊥aug is the projection matrix onto Σstd+X > extXext. For a fixedXext, as the null space of Σstd shrinks with more training points,w decreases. Note that the magnitude of increase in bias decreases as w decreases (for a fixed v). This suggests that the effect of augmentation on bias should decrease as we get more samples, in the small data regime.\nOur heuristic calculations in some settings ofXext in minimum norm interpolation in linear regression suggest that the overall increase in test error should decrease as we increase the sample size of the original training set. Empirically, we find similar trends when performing adversarial data augmentations with varying training set sizes.\nF X-REGULARIZATION\nF.1 X-REGULARIZATION FOR LINEAR REGRESSION\nIn this section, we prove Theorem 3, which we reproduce here.\nTheorem 3. Assume the noiseless linear model y=x>θ?. Let θint-std be an arbitrary interpolant of the standard data, i.e.Xstdθint-std =ystd. Let θ̂x-aug be the X-regularized interpolant (6). Then\nR ( θ̂x-aug)≤R(θint-std).\nProof. Let {ui} be an orthonormal basis of the kernel Null(Σstd + X>extXext) and {vi} be an orthonormal basis for Null(Σstd) \\ span({ui}). Let U and V be the linear operators defined by Uw= ∑ iuiwi and V w= ∑ iviwi, respectively, noting thatU\n>V =0. Defining Π⊥std :=(I−Σ†stdΣstd) to be the projection onto the null space ofXstd, we see that there are unique vectors ρ,α such that\nθ?=(I−Π⊥std)θ?+Uρ+V α. (24a) As θint-std interpolates the standard data, we also have\nθint-std =(I−Π⊥std)θ?+Uw+V z, (24b)\nasXstdUw=XstdV z=0, and finally, θ̂x-aug =(I−Π⊥std)θ?+Uρ+V λ (24c) where we note the common ρ between Eqs. (24a) and (24c).\nUsing the representations (24) we may provide an alternative formulation for the augmented estimator (6), using this to prove the theorem. Indeed, writing θint-std−θ̂x-aug =U(w−ρ)+V (z−λ), we immediately have that the estimator has the form (24c), with the choice\nλ=argmin λ\n{ (U(w−ρ)+V (z−λ))>Σ(U(w−ρ)+V (z−λ)) } .\nThe optimality conditions for this quadratic imply that V >ΣV (λ−z)=V >ΣU(w−ρ). (25)\nNow, recall that the predictive test error of a vector θ isR(θ)=(θ−θ?)>Σ(θ−θ?)=‖θ−θ?‖2Σ, using Mahalanobis norm notation. 
In particular, a few quadratic expansions yield\nR(θint-std)−R(θ̂x-aug) =‖U(w−ρ)+V (z−α)‖2Σ−‖V (λ−α)‖ 2 Σ =‖U(w−ρ)+V z‖2Σ+‖V α‖ 2 Σ−2(U(w−ρ)+V z)>ΣV α−‖V λ‖ 2 Σ−‖V α‖ 2 Σ+2(V λ) >ΣV α\n(i) = ‖U(w−ρ)+V z‖2Σ−2(V λ)>ΣV α−‖V λ‖ 2 Σ+2(V λ) >V α =‖U(w−ρ)+V z‖2Σ−‖V λ‖ 2 Σ, (26) where step (i) used that (U(w−ρ))>ΣV =(V (λ−z))>ΣV from the optimality conditions (25). Finally, we consider the rightmost term in equality (26). Again using the optimality conditions (25), we have ‖V λ‖2Σ =λ>V >Σ1/2Σ1/2(U(w−ρ)+V z)≤‖V λ‖Σ‖U(w−ρ)+V z‖Σ by Cauchy-Schwarz. Revisiting equality (26), we obtain\nR(θint-std)−R(θ̂x-aug)=‖U(w−ρ)+V z‖2Σ− ‖V λ‖4Σ ‖V λ‖2Σ\n≥‖U(w−ρ)+V z‖2Σ− ‖V λ‖2Σ‖U(w−ρ)+V z‖ 2 Σ\n‖V λ‖2Σ =0,\nas desired.\nF.2 X-REGULARIZATION FOR DATA AUGMENTATIONS THAT PROMOTE ROBUSTNESS\nThe main motivation of our work is to provide a method to perform data augmentation such that the benefits such as robustness are preserved, without seeing the undesirable drop in standard accuracy. The general X-regularized estimator (Equation 7) holds for any form of augmentation. We now write out the exact loss functions when we apply X-regularization to two forms of adversarial training: Projected Gradient Adversarial Training of (Madry et al., 2018) and TRADES (Zhang et al., 2019). Throughout, we assume the same notation as that used in the definition of the general estimator. Xstd,ystd denote the standard training set and we have access tom unlabeled points x̃i,i=1,...m." }, { "heading": "F.2.1 PROJECTED GRADIENT ADVERSARIAL TRAINING", "text": "Note that the unlabeled data can be perturbed to obtain more extra data, because of the special structure of the extra points added: every training point generates a perturbed extra training point. This leads to the following natural generalization, where we obtain adversarial perturbations from the unlabeled data, and label them with the pseudo-label generated from the standard trained model. As the distance measure, we use the same loss that is used for classification. Put together, we have\nθ̂x-aug :=argmin θ\n{ N−1 ∑ (x,y)∈[Xstd,ystd] `(fθ(x),y)+β `(fθ(xadv),y)\n+λm−1 m∑ i=1 `(fθ(x̃i),fθ̂std(x̃i))+β `(fθ(x̃advi),fθ̂std(x̃i)) } , (27)\nIn practice, xadv is found by performing a few steps of projected gradient method on `(fθ(x),y), and similarly x̃adv by performing a few steps of projected gradient method on `(fθ(x̃),fθ̂std(x̃))." }, { "heading": "F.2.2 TRADES", "text": "TRADES was a modification of the projected gradient adversarial training algorithm of (Madry et al., 2018). Here, the loss function is modified in the following way, instead of operating on the label directly, the robustness term operates on the normalized logits, which can be thought of as probabilities of different labels. Using their modified loss and applying X-regularization leads to the following.\nθ̂x-aug :=argmin θ\n{ N−1 ∑ (x,y)∈[Xstd,ystd] `(fθ(x),y)+β KL(pθ(xadv)||pθ(x))\n+λm−1 m∑ i=1 `(fθ(x̃i),fθ̂std(x̃i))+β KL(pθ(x̃advi)||pθ̂std(x̃i)) } , (28)\nwhere KL(pθ(x),pθ(xadv) is the KL divergence between the probability over class labels assigned to x and xadv." }, { "heading": "G EXPERIMENTAL DETAILS", "text": "" }, { "heading": "G.1 SPLINE SIMULATIONS", "text": "For spline simulations in Figure 1 and Figure 4, we implement the optimization of the standard and robust objectives using the basis described in (Friedman et al., 2001). The penalty matrixM computes second-order finite differences of the parameters θ. We solve the min-norm objective directly using CVXPY (Diamond & Boyd, 2016). 
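Concretely, the min-norm objective of Equation (19) can be posed as a small quadratic program; a sketch assuming a precomputed spline design matrix X and second-difference penalty matrix M (both hypothetical inputs here):

```python
import cvxpy as cp

def min_norm_spline(X, y, M):
    """Solve  argmin_theta theta^T M theta  s.t.  X theta = y  (Equation 19).

    X: spline basis design matrix evaluated at the training inputs
    M: PSD penalty matrix encoding the second-derivative norm
    """
    theta = cp.Variable(X.shape[1])
    problem = cp.Problem(cp.Minimize(cp.quad_form(theta, M)),
                         [X @ theta == y])
    problem.solve()
    return theta.value
```

The augmented estimator solves the same program with the additional interpolation constraints on Xext, yext appended to the constraint list.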
Each point in Figure 4(a) represents the average test error over 25 trials of randomly sampled training datasets between 22 and 1000 samples. Shaded regions represent 1 standard deviation.\nG.2 X-REGULARIZATION AS ROBUST SELF-TRAINING\nIn the adversarial training setting, where augmentations are generated through transformations of existing data, it is natural to instantiate X-regularization as robust self-training (RST), as discussed in Section F. We evaluate the performance of X-regularization applied to $\ell_\infty$ adversarial perturbations, adversarial rotations, and random rotations." }, { "heading": "G.2.1 SUBSAMPLING CIFAR-10", "text": "We augment with $\ell_\infty$ adversarial perturbations of various sizes. In each epoch, we find the augmented examples via projected gradient ascent on the multiclass logistic loss (cross-entropy loss) of the incorrect class. Training the augmented estimator in this setup uses essentially the adversarial training procedure of (Madry et al., 2018), with equal weight on both the "clean" and adversarial examples during training.\nWe instantiate the general X-regularization estimator as robust self-training, defined in (27). We compare the test error of the augmented estimator with an estimator trained using RST. We apply RST to adversarial training algorithms on CIFAR-10 using 500k unlabeled examples sourced from Tiny Images, as in (Carmon et al., 2019).\nWe use Wide ResNet 40-2 models (Zagoruyko & Komodakis, 2016) while varying the number of samples in CIFAR-10. We sub-sample CIFAR-10 by factors of {1, 2, 5, 8, 10, 20, 40} in Figure 4(a) and {1, 2, 5, 8, 10} in Figure 4(b). For sub-sample factors 1 to 20, we report results averaged over 2 trials for each model. For sub-sample factors greater than 20, we average over 5 trials. All models are trained for 200 epochs with respect to the size of the labeled training dataset, and all achieve almost 100% standard and robust training accuracy.\nWe evaluate the robustness of models against the strong PGD attack with 40 steps and 5 restarts. In Figure 4(b), we used a simple heuristic to set the regularization strength $\lambda$ in Equation (27) to $\lambda = \min(0.9, \gamma) / (1 - \min(0.9, \gamma))$, where $\gamma \in [0, 1]$ is the fraction of the original CIFAR-10 dataset sampled. Intuitively, we give more weight to the unlabeled data when the original dataset is larger, meaning that the standard estimator produces more accurate pseudo-labels. We fix $\beta = 5$.\nFigure 7 shows that the robust accuracy of the RST model stays within 2% of the robust model (trained using PGD adversarial training) for all subsamples, and even improves upon the robust model on the full dataset (Tables 2, 3).\nNote that we cannot directly compare the empirical performance of RST + adversarial training on CIFAR-10 with other methods that obtain robust models as modifications of vanilla adversarial training. We use a smaller model due to the computational constraints imposed by adversarial training. Since the model is small, we could only fit adversarially augmented examples with a small $\epsilon = 2/255$, while existing baselines use $\epsilon = 8/255$. Note that even for $\epsilon = 2/255$, adversarial data augmentation leads to an increase in error. We show that RST can fix this. While ensuring models are robust is an important goal in itself, in this work we view adversarial training through the lens of covariate-shifted data augmentation and study how to use augmented data without increasing test error.
We show that X-regularization based methods like RST preserve the other benefits of some kinds of data augmentation, such as increased robustness to adversarial examples." }, { "heading": "G.2.2 $\ell_\infty$ ADVERSARIAL PERTURBATIONS", "text": "In Table 1, we evaluate X-regularization as robust self-training applied to PGD and TRADES adversarial training. The models are trained on the full CIFAR-10 dataset, and models which use unlabeled data (self-training and X-reg) also use 500k unlabeled examples from Tiny Images. All models except the Interpolated AT and Neural Architecture Search models use the same base model, WideResNet 28-10. To evaluate robust accuracy, we use a strong PGD attack with 40 steps and 5 restarts against $\ell_\infty$ perturbations of size 8/255. For X-regularization models, we set $\lambda = 9$ and $\beta = 5$ in Equation (27) and Equation (28), following the heuristic $\lambda = \min(0.9, \gamma) / (1 - \min(0.9, \gamma))$ for $\gamma = 1$. We train for 200 epochs, such that 100% standard training accuracy is attained." }, { "heading": "G.2.3 ADVERSARIAL AND RANDOM ROTATION/TRANSLATIONS", "text": "In Table 1 (right), we instantiate X-regularization as robust self-training for adversarial and random rotations/translations, using these transformations as $x_{\mathrm{adv}}$ in Equation (27). The attack model is a grid of rotations of up to 30 degrees and translations of up to ∼10% of the image size. The grid consists of 31 linearly spaced rotations and 5 linearly spaced translations in both dimensions. The Worst-of-10 model samples 10 uniformly random transformations of each input and augments with the one on which the model performs the worst (one that causes an incorrect prediction, if it exists). The Random model samples 1 random transformation as the augmented input. All models (besides cited models) use the WRN-40-2 architecture and are trained for 200 epochs. We use the same hyperparameters $\lambda, \beta$ as in G.2.2 for Equation (27)." }, { "heading": "H COMPARISON TO STANDARD SELF-TRAINING ALGORITHMS", "text": "The main objective of X-regularization is to allow performing data augmentation without sacrificing standard accuracy. This is done by smoothing an augmented estimator towards labels close to those of a standard non-augmented estimator on the unlabeled data. This is closely related to, but different from, two broad kinds of semi-supervised learning.\n1. Self-training (pseudo-labeling): Classical self-training does not deal with data augmentation or robustness. We view X-regularization as a generalization of self-training in the context of data augmentations. Here the pseudo-labels are generated by a standard non-augmented estimator that is not trained on the labeled augmented points. In contrast, standard self-training would just use all labeled data to generate pseudo-labels. However, some augmentations cause a drop in standard accuracy, and hence this would generate worse pseudo-labels than the X-regularized version.\n2. Consistency based regularization: Another popular semi-supervised learning strategy is based on enforcing consistency in a model's predictions across various perturbations of the unlabeled data (Miyato et al., 2018; Xie et al., 2019; Sajjadi et al., 2016; Laine & Aila, 2017). X-regularization is similar in spirit, but has an additional crucial component: we generate pseudo-labels first by performing standard training, and rather than simply enforcing consistency across perturbations, we enforce that the unlabeled data and their perturbations are matched with the generated pseudo-labels." } ]
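As a reference point for the discussion above, here is a schematic PyTorch-style sketch of one minibatch of the robust self-training objective (27). The helper `attack` (returning PGD perturbations), the use of hard `argmax` pseudo-labels in place of the standard model's full outputs, and folding the $N^{-1}$ and $m^{-1}$ normalizations into minibatch means are all simplifying assumptions:

```python
import torch
import torch.nn.functional as F

def rst_loss(model, std_model, attack, x, y, x_unl, beta=5.0, lam=9.0):
    """One minibatch of the X-regularized / RST objective, Eq. (27)."""
    with torch.no_grad():
        pseudo = std_model(x_unl).argmax(dim=1)   # pseudo-labels from the standard estimator
    x_adv = attack(model, x, y)                   # assumed PGD helper
    labeled = F.cross_entropy(model(x), y) \
            + beta * F.cross_entropy(model(x_adv), y)
    x_unl_adv = attack(model, x_unl, pseudo)
    unlabeled = F.cross_entropy(model(x_unl), pseudo) \
              + beta * F.cross_entropy(model(x_unl_adv), pseudo)
    return labeled + lam * unlabeled
```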
2019
null
SP:720aa05838e9926dafd1161847b197b8f2f8a64a
[ "This paper investigates the problem of learning new branching heuristics in SAT solvers. The idea is very simple: take MiniSat, remove the usual VSIDS heuristic, and replace it with a variable selection policy that has been trained from a deep reinforcement learning algorithm. The architecture advocated in the present study is based on GNNs coupled with usual DQN techniques. The resulting GQSAT heuristic is endowed with attractive properties: on random SAT instances, it outperforms VSIDS and generalizes relatively well to other SAT distributions. ", "The paper proposes learning a branching heuristic to be used inside the SAT solver MiniSat using reinforcement learning. The state is represented as a graph representation of the Boolean formula as in previous works, and the policy is parameterized as a graph neural network. At each step of an episode the policy selects a variable to branch on and assigns a value to it. The episode terminates once the solver finds a satisfying assignment or proves unsatisfiability. The reward function encourages the policy to reach terminal state in as few steps as possible. The policy is trained using DQN. Results on randomly generated SAT instances show that the learned policy is able to solve problems with fewer steps than VSIDS, the branching heuristic commonly used by state-of-the-art solvers." ]
We present GQSAT, a branching heuristic in a Boolean SAT solver trained with value-based reinforcement learning (RL) using Graph Neural Networks for function approximation. Solvers using GQSAT are complete SAT solvers that either provide a satisfying assignment or a proof of unsatisfiability, which is required for many SAT applications. The branching heuristic commonly used in SAT solvers today suffers from bad decisions during their warm-up period, whereas GQSAT has been trained to examine the structure of the particular problem instance to make better decisions at the beginning of the search. Training GQSAT is data efficient and does not require elaborate dataset preparation or feature engineering to train. We train GQSAT on small SAT problems using RL interfacing with an existing SAT solver. We show that GQSAT is able to reduce the number of iterations required to solve SAT problems by 2-3X, and it generalizes to unsatisfiable SAT instances, as well as to problems with 5X more variables than it was trained on. We also show that, to a lesser extent, it generalizes to SAT problems from different domains by evaluating it on graph coloring. Our experiments show that augmenting SAT solvers with agents trained with RL and graph neural networks can improve performance on the SAT search problem.
[]
[ { "authors": [ "Akshat Agarwal", "Sumit Kumar", "Katia Sycara" ], "title": "Learning transferable cooperative behavior in multi-agent teams, 2019", "venue": null, "year": 2019 }, { "authors": [ "Saeed Amizadeh", "Sergiy Matusevych", "Markus Weimer" ], "title": "Learning to solve circuit-sat: An unsupervised differentiable approach", "venue": null, "year": 2018 }, { "authors": [ "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Carl Doersch", "Kimberly L. Stachenfeld", "Pushmeet Kohli", "Peter W. Battaglia", "Jessica B. Hamrick" ], "title": "Structured agents for physical construction, 2019", "venue": null, "year": 2019 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Roberto J Bayardo Jr.", "Robert Schrag" ], "title": "Using csp look-back techniques to solve real-world sat instances", "venue": "In Aaai/iaai,", "year": 1997 }, { "authors": [ "Irwan Bello", "Hieu Pham", "Quoc V Le", "Mohammad Norouzi", "Samy Bengio" ], "title": "Neural combinatorial optimization with reinforcement learning", "venue": "arXiv preprint arXiv:1611.09940,", "year": 2016 }, { "authors": [ "Qingpeng Cai", "Will Hang", "Azalia Mirhoseini", "George Tucker", "Jingtao Wang", "Wei Wei" ], "title": "Reinforcement learning driven heuristic optimization, 2019", "venue": null, "year": 2019 }, { "authors": [ "Victor Carbune", "Thierry Coppey", "Alexander Daryin", "Thomas Deselaers", "Nikhil Sarda", "Jay Yagnik" ], "title": "Smartchoices: Hybridizing programming and machine learning, 2018", "venue": null, "year": 2018 }, { "authors": [ "Peter C Cheeseman", "Bob Kanefsky", "William M Taylor" ], "title": "Where the really hard problems are", "venue": "In IJCAI,", "year": 1991 }, { "authors": [ "Matthias Fey", "Jan E. Lenssen" ], "title": "Fast graph representation learning with PyTorch Geometric", "venue": "In ICLR Workshop on Representation Learning on Graphs and Manifolds,", "year": 2019 }, { "authors": [ "Alex Flint", "Matthew Blaschko" ], "title": "Perceptron learning of sat", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Marco Gori", "Gabriele Monfardini", "Franco Scarselli" ], "title": "A new model for learning in graph domains", "venue": "In Proceedings", "year": 2005 }, { "authors": [ "Cristian Grozea", "Marius Popescu" ], "title": "Can machine learning learn a decision oracle for np problems? 
a test on sat", "venue": "Fundamenta Informaticae,", "year": 2014 }, { "authors": [ "Shai Haim", "Toby Walsh" ], "title": "Restart strategy selection using machine learning techniques", "venue": "In International Conference on Theory and Applications of Satisfiability Testing,", "year": 2009 }, { "authors": [ "Matteo Hessel", "Joseph Modayil", "Hado Van Hasselt", "Tom Schaul", "Georg Ostrovski", "Will Dabney", "Dan Horgan", "Bilal Piot", "Mohammad Azar", "David Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Holger H Hoos", "Thomas Stützle" ], "title": "Satlib: An online resource for research on sat", "venue": "Sat, 2000:283–", "year": 2000 }, { "authors": [ "Sebastian Jaszczur", "Michał Łuszczyk", "Henryk Michalewski" ], "title": "Neural heuristics for sat solving, 2019", "venue": null, "year": 2019 }, { "authors": [ "Jiechuan Jiang", "Chen Dun", "Zongqing Lu" ], "title": "Graph convolutional reinforcement learning for multiagent cooperation, 2018", "venue": null, "year": 2018 }, { "authors": [ "Richard M Karp" ], "title": "Reducibility among combinatorial problems", "venue": "In Complexity of computer computations,", "year": 1972 }, { "authors": [ "Hadi Katebi", "Karem A Sakallah", "João P Marques-Silva" ], "title": "Empirical study of the anatomy of modern sat solvers", "venue": "In International Conference on Theory and Applications of Satisfiability Testing,", "year": 2011 }, { "authors": [ "Elias Khalil", "Hanjun Dai", "Yuyu Zhang", "Bistra Dilkina", "Le Song" ], "title": "Learning combinatorial optimization algorithms over graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Scott Kirkpatrick", "C Daniel Gelatt", "Mario P Vecchi" ], "title": "Optimization by simulated annealing", "venue": null, "year": 1983 }, { "authors": [ "Gil Lederman", "Markus N. Rabe", "Edward A. Lee", "Sanjit A. 
Seshia" ], "title": "Learning heuristics for automated reasoning through deep reinforcement learning, 2018", "venue": null, "year": 2018 }, { "authors": [ "Jia Hui Liang", "Vijay Ganesh", "Ed Zulkoski", "Atulan Zaman", "Krzysztof Czarnecki" ], "title": "Understanding vsids branching heuristics in conflict-driven clause-learning sat solvers", "venue": "In Haifa Verification Conference,", "year": 2015 }, { "authors": [ "Jia Hui Liang", "Vijay Ganesh", "Pascal Poupart", "Krzysztof Czarnecki" ], "title": "Learning rate based branching heuristic for sat solvers", "venue": "In International Conference on Theory and Applications of Satisfiability Testing,", "year": 2016 }, { "authors": [ "Aleksandra Malysheva", "Tegg Taekyong Sung", "Chae-Bong Sohn", "Daniel Kudenko", "Aleksei Shpilman" ], "title": "Deep multi-agent reinforcement learning with relevance graphs, 2018", "venue": null, "year": 2018 }, { "authors": [ "Joao P Marques-Silva", "Karem A Sakallah" ], "title": "Grasp: A search algorithm for propositional satisfiability", "venue": "IEEE Transactions on Computers,", "year": 1999 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Matthew W Moskewicz", "Conor F Madigan", "Ying Zhao", "Lintao Zhang", "Sharad Malik" ], "title": "Chaff: Engineering an efficient sat solver", "venue": "In Proceedings of the 38th annual Design Automation Conference,", "year": 2001 }, { "authors": [ "Zack Newsham", "Vijay Ganesh", "Sebastian Fischmeister", "Gilles Audemard", "Laurent Simon" ], "title": "Impact of community structure on sat solver performance", "venue": "In International Conference on Theory and Applications of Satisfiability Testing,", "year": 2014 }, { "authors": [ "Olga Ohrimenko", "Peter J Stuckey", "Michael Codish" ], "title": "Propagation via lazy clause generation", "venue": null, "year": 2009 }, { "authors": [ "Aditya Paliwal", "Sarah Loos", "Markus Rabe", "Kshitij Bansal", "Christian Szegedy" ], "title": "Graph representations for higher-order logic and theorem proving", "venue": null, "year": 1905 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in PyTorch", "venue": "In NIPS Autodiff Workshop,", "year": 2017 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Nicolas Heess", "Jost Tobias Springenberg", "Josh Merel", "Martin Riedmiller", "Raia Hadsell", "Peter Battaglia" ], "title": "Graph networks as learnable physics engines for inference and control", "venue": "arXiv preprint arXiv:1806.01242,", "year": 2018 }, { "authors": [ "Daniel Selsam", "Nikolaj Bjørner" ], "title": "Guiding high-performance sat solvers with unsat-core predictions", "venue": "In International Conference on Theory and Applications of Satisfiability Testing,", "year": 2019 }, { "authors": [ "Daniel Selsam", "Matthew Lamm", "Benedikt Bünz", "Percy Liang", "Leonardo de Moura", "David L Dill" ], "title": "Learning a sat solver from single-bit supervision", "venue": "arXiv preprint arXiv:1802.03685,", "year": 2018 }, { "authors": [ "Rishabh Singh", "Joseph P Near", "Vijay Ganesh", "Martin Rinard" ], "title": "Avatarsat: An auto-tuning boolean sat solver", "venue": null, "year": 2009 }, { "authors": 
[ "Niklas Sorensson", "Niklas Een. Minisat v" ], "title": "13-a sat solver with conflict-clause minimization", "venue": null, "year": 2005 }, { "authors": [ "Fei Wang", "Tiark Rompf" ], "title": "From gameplay to symbolic reasoning", "venue": null, "year": 2018 }, { "authors": [ "Tingwu Wang", "Renjie Liao", "Jimmy Ba", "Sanja Fidler" ], "title": "Nervenet: Learning structured policy with graph neural networks. 2018", "venue": null, "year": 2018 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Lin Xu", "Frank Hutter", "Holger H Hoos", "Kevin Leyton-Brown" ], "title": "Satzilla: portfolio-based algorithm selection for sat", "venue": "Journal of artificial intelligence research,", "year": 2008 }, { "authors": [ "Battaglia" ], "title": "Encoder and decoder are independent graph networks, i.e. MLPs taking whole vertex or edge feature matrix as a batch without message passing. We call the middle part ’the core’. The output of the core is concatenated with the output of the encoder and gets fed to the core again. We describe all hyperparameters in Appendix B.3", "venue": "We use Encoder-Process-Decode architecture", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Boolean satisfiability (SAT) is an important problem for both industry and academia impacting various fields, including circuit design, computer security, artificial intelligence, automatic theorem proving, and combinatorial optimization. As a result, modern SAT solvers are well-crafted, sophisticated, reliable pieces of software that can scale to problems with hundreds of thousands of variables (Ohrimenko et al., 2009).\nSAT is known to be NP-complete (Karp, 1972), and most state-of-the-art open-source and commercial solvers rely on multiple heuristics to speed up the exhaustive search, which is otherwise intractable. These heuristics are usually meticulously crafted using expert domain knowledge and are often iterated on using trial and error. In this paper, we investigate how we can use machine learning to improve upon an existing branching heuristic without leveraging domain expertise.\nWe present Graph-Q-SAT (GQSAT), a branching heuristic in a Conflict Driven Clause Learning (Marques-Silva & Sakallah, 1999; Bayardo Jr & Schrag, 1997, CDCL) SAT solver trained with value-based reinforcement learning (RL), based on DQN (Mnih et al., 2015). GQSAT uses a graph representation of SAT problems similar to Selsam et al. (2018) which provides permutation and variable relabeling invariance. It uses a Graph Neural Network (Gori et al., 2005; Battaglia et al., 2018, GNN) as a function approximator to provide generalization as well as support for a dynamic state-action space. GQSAT uses a simple state representation and a binary reward that requires no feature engineering or problem domain knowledge. GQSAT modifies only part of the CDCL based solver, keeping it complete, i.e. always leading to a correct solution.\nWe demonstrate that GQSAT outperforms Variable State Independent Decaying Sum (Moskewicz et al., 2001, VSIDS), most frequently used CDCL branching heuristic, reducing the number of iterations required to solve SAT problems by 2-3X. GQSAT is trained to examine the structure of the particular problem instance to make better decisions at the beginning of the search whereas the VSIDS heuristic suffers from bad decision during the warm-up period. We show that our method generalizes to problems five times larger than those it was trained on. We also show that our method\ngeneralizes across problem types from SAT to unSAT. We also show, to a lesser extent, it generalizes to SAT problems from different domains, such as graph colouring. Finally, we show that some of these improvements are achieved even when training is limited to single SAT problem demonstrating data efficiency of our method. We believe GQSAT is a stepping stone to a new generation of SAT solvers leveraging data to build better heuristics learned from past experience." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 SAT PROBLEM", "text": "A SAT problem involves finding variable assignments such that a propositional logic formula is satisfied or showing that such an assignment does not exist. A propositional formula is a Boolean expression, including Boolen variables, ANDs, ORs and negations. ’x’ or ’NOT x’ make up a literal. It is convenient to represent Boolean formulas in conjunctive normal form (CNF), i.e., conjunctions (AND) of clauses, where a clause is a disjunction (OR) of literals. An example of a CNF is (x1 ∨ ¬x2) ∧ (x2 ∨ ¬x3), where ∧,∨,¬ are AND, OR, and negation respectively. This CNF has two clauses: (x1 ∨ ¬x2) and (x2 ∨ ¬x3). 
In this work, we use SAT to denote both the Boolean satisfiability problem and a satisfiable instance, which should be clear from the context. We use unSAT to denote unsatisfiable instances.\nThere are many types of SAT solvers. In this work, we focus on CDCL solvers, MiniSat (Sorensson & Een, 2005) in particular, because it is an open-source, minimal, but powerful implementation. A CDCL solver repeats the following steps: every iteration it picks a literal, i.e. assigns a binary value to a variable; this is called a decision. After deciding, the solver simplifies the formula, building an implication graph, and checks whether a conflict has emerged. Given a conflict, it can infer (learn) new clauses and backtrack to the variable assignments where the newly learned clause becomes unit (consisting of a single literal), forcing a variable assignment which avoids the previous conflict. Sometimes, a CDCL solver undoes all the variable assignments, keeping the learned clauses, to escape futile regions of the search space. This is called a restart.\nWe focus on the branching heuristic because it is one of the most heavily used during the solution procedure. The branching heuristic is responsible for picking a variable and assigning some value to it. VSIDS (Moskewicz et al., 2001) is one of the most used CDCL branching heuristics. It is a counter-based heuristic which keeps a scalar value for each literal or variable (MiniSat uses the latter). These values are increased every time a variable gets involved in a conflict. The algorithm behaves greedily with respect to these values, called activities. Activities are usually initialized with zeroes (Liang et al., 2015)." }, { "heading": "2.2 REINFORCEMENT LEARNING", "text": "We formulate the RL problem as a Markov decision process (MDP). An MDP is a tuple ⟨S, A, R, T, ρ, γ⟩ with a set of states S, a set of actions A, a reward function R = R(s, a, s′) and the transition function T = p(s, a, s′), where p(s, a, s′) is a probability distribution, s, s′ ∈ S, a ∈ A. ρ is the probability distribution over initial states. γ ∈ [0, 1) is the discount factor responsible for trading off the preferences between the current immediate reward and future rewards. In the case of episodic tasks, the state space is split into the set of non-terminal states and the set of terminal states S⁺. To solve an MDP means to find an optimal policy, a mapping which outputs an action or a distribution over actions given the state, such that we maximize the expected discounted cumulative return R = E[∑_{t=0}^{∞} γ^t r_t], where r_t = R(s_t, a_t, s_{t+1}) is the reward for the transition from s_t to s_{t+1}.\nIn Section 3 we apply deep Q-networks (Mnih et al., 2015, DQN), a value-based RL algorithm that approximates an optimal Q-function, an action-value function that estimates the sum of future rewards after taking an action a in state s and following the optimal policy π thereafter: Q*(s, a) = E_{π,T,ρ}[R(s, a, s′) + γ max_{a′} Q*(s′, a′)]. A mean squared temporal difference (TD) error is used to make an update step: L(θ) = (Q_θ(s, a) − r − γ max_{a′} Q_{θ̄}(s′, a′))². Q_{θ̄} is called a target network (Mnih et al., 2015). It is used to stabilise DQN by splitting the decision and evaluation operations. Its weights are copied from the main network Q_θ after every k minibatch updates." }, { "heading": "2.3 GRAPH NEURAL NETWORKS", "text": "We use Graph Neural Networks (Gori et al., 2005, GNN) to approximate our Q-function because they are invariant to the input's size and structure and to permutations. We use the formalism of Battaglia et al. 
(2018) which unifies most existing GNN approaches. Under this formalism, a GNN is a set of functions that take a labeled graph as input and output a graph with modified labels but the same topology.\nHere, a graph is a directed graph ⟨V, E, U⟩, where V is the set of vertices, E is the set of edges with e = (s, r) ∈ E, s, r ∈ V, and U is a global attribute. The global attribute contains information relevant to the whole graph. We call vertices, edges and the global attribute entities. Each entity has its own feature vector. A GNN changes these features as a result of its operations.\nA graph network can be seen as a set of six functions: three update functions and three aggregation functions. The information propagates between vertices along graph edges. Update functions compute new entity labels. Aggregation functions exist to ensure the GNN's ability to process graphs of arbitrary topologies, compressing multiple entities' features into vectors of fixed size. GNN blocks can be combined such that the output of one becomes the input of the other. For example, the Encode-Process-Decode architecture (Battaglia et al., 2018) processes the graph in a recurrent way, enabling information propagation between remote vertices." }, { "heading": "3 GQSAT", "text": "As noted in Section 2.2, we use the MDP formalism for our purposes. Each state of our MDP consists of unassigned variables and unsatisfied clauses containing these variables. The initial state distribution is a distribution over all possible SAT problems. Our problem has an episodic nature with a clear terminal state: when a satisfying assignment is found or the algorithm has exhausted all the possible options, proving unSAT. The action set includes two actions for each unassigned variable: assigning it to true or false. We modify the MiniSat-based environment of Wang & Rompf (2018), which is responsible for the transition function. It takes the actions, modifies its implication graph internally and returns a new state, containing the newly learned clauses and excluding the variables removed after propagation. Strictly speaking, this state is not fully observable. In the case of a conflict, the solver undoes the assignments for variables that are not in the agent's observation. However, in practice, this should not inhibit the goal of quickly pruning the search tree: the information in the state is enough to pick a variable that leads to more propagations in the remaining formula.\nWe use a simple reward function: the agent gets a negative reinforcement p for each non-terminal transition and 0 for reaching the terminal state. This reward encourages an agent to finish an episode as quickly as possible and does not require elaborate reward shaping to start using GQSAT." }, { "heading": "3.1 STATE REPRESENTATION", "text": "We represent a SAT problem as a graph similar to Selsam et al. (2018). We make it more compact, using vertices to denote variables instead of literals. We use nodes to encode clauses as well.\nOur state representation is simple and does not require scrupulous feature engineering. An edge (xi, ci) means that a clause ci contains literal xi. If a literal contains a negation, the corresponding edge has a [1, 0] label, and [0, 1] otherwise. GNNs process directed graphs, so we create two directed edges with the same labels: from a variable to a clause and vice-versa. Vertex features are two-dimensional one-hot vectors, denoting either a variable or a clause. We do not provide any other information to the model.
The global attribute input is empty and is only used for message passing. Figure 1a gives an example of the state for (x1 ∨ x2) ∧ (¬x2 ∨ x3)." }, { "heading": "3.2 Q-FUNCTION REPRESENTATION", "text": "We use the encode-process-decode architecture (Battaglia et al., 2018), which we discuss in more detail in Appendix B.1. Similarly to Bapst et al. (2019), our GNN labels variable vertices with Q-values. Each variable vertex has two actions: pick the variable and set it to true or false, as shown in Figure 1b. We choose the action which gives the maximum Q-value across all variable vertices. The graph contains only unassigned variables, so all actions are valid. We use common DQN techniques such as experience replay, a target network and ε-greedy exploration. To expose the agent to more episodes and prevent it from getting stuck, we cap the maximum number of actions per episode. This is similar to the episode length parameter in gym (Brockman et al., 2016)." }, { "heading": "3.3 TRAINING AND EVALUATION PROTOCOL", "text": "We train our agent using random 3-SAT instances from the SATLIB benchmark (Hoos & Stützle, 2000). To measure generalization, we split these data into train, validation and test sets. The train set includes 800 problems, while the validation and test sets are 100 problems each. We provide more details about the dataset in Appendix B.2.\nTo illustrate the problem complexities, Table 1 provides the number of steps it takes MiniSat to solve the problems. Each random 3-SAT problem is denoted as SAT-X-Y or unSAT-X-Y, where SAT means that all problems are satisfiable and unSAT means that all problems are unsatisfiable. X and Y stand for the number of variables and clauses in the initial formula.\nWhile random 3-SAT problems have a relatively small number of variables/clauses, they have an interesting property which makes them more challenging for a solver. For this dataset, the ratio of clauses to variables is close to 4.3:1, which is near the phase transition, where it is hard to say whether the problem is SAT or unSAT (Cheeseman et al., 1991). In 3-SAT problems each clause has exactly 3 variables; however, learned clauses might be of arbitrary size, and GQSAT is able to deal with this.\nWe use Median Relative Iteration Reduction (MRIR) w.r.t. MiniSat as our main performance metric, which is the number of iterations it takes MiniSat to solve a problem divided by GQSAT's number of iterations. Similarly to the median human normalised score adopted in the Atari domain, we use the median instead of the mean to avoid the situation when outliers skew the mean considerably. By one iteration we mean one decision, i.e. choosing a variable and setting it to a value. MRIR is the median across all the problems in the dataset. We compare against the best MiniSat results, having run MiniSat both with and without restarts. We cap the number of decisions our method takes at the beginning of the solution procedure and then give control to MiniSat.\nWhen training, we evaluate the model every 1000 batch updates on validation subsets of the same distribution as the train dataset and pick the model with the best validation results. After that, we evaluate this model on the test dataset and report the results. For each model we do 5 training runs and report the average MRIR as well as the maximum and the minimum.\nWe implement our models using Pytorch (Paszke et al., 2017) and Pytorch Geometric (Fey & Lenssen, 2019). We provide all the hyperparameters needed to reproduce our results in Appendix B; a sketch of the TD update we use is shown below.
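The sketch below shows the DQN temporal-difference update of Section 2.2 in PyTorch style. For readability it assumes a fixed-size action space where q_net(state) returns one Q-value per action; in GQSAT the same targets are formed from the GNN's per-variable-vertex outputs (two Q-values per unassigned variable), so the max runs over a state-dependent set of actions:

```python
import torch
import torch.nn.functional as F

def td_loss(q_net, target_net, batch, gamma=0.99):
    """Mean squared TD error with a target network (Section 2.2)."""
    s, a, r, s_next, done = batch                 # `a`: indices of the chosen actions
    q_sa = q_net(s).gather(-1, a.unsqueeze(-1)).squeeze(-1)
    with torch.no_grad():                         # target network is not updated here
        q_next = target_net(s_next).max(dim=-1).values
        target = r + gamma * (1.0 - done) * q_next
    return F.mse_loss(q_sa, target)
```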
We will release our experimental code as well as the MiniSat gym environment." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "" }, { "heading": "4.1 IMPROVING UPON VSIDS", "text": "In our first experiment, we consider whether it is possible to improve upon VSIDS using no domain knowledge, a simple state representation, and a simple reward function. The first row in Table 2 gives us a positive answer to that question. DQN equipped with a GNN as a function approximator solves the problems in fewer than half the iterations of MiniSat.\nGQSAT makes decisions resulting in more propagations, i.e., inferring variable values based on other variable assignments and clauses. This helps GQSAT prune the search tree faster. For SAT-50-218, GQSAT does on average 2.44 more propagations than MiniSat (6.62 versus 4.18). We plot the average number of variable assignments for each problem individually in Appendix A.\nThese results raise the question: Why does GQSAT outperform VSIDS? VSIDS is a counter-based heuristic that takes time to warm up. Our model, on the other hand, perceives the whole problem structure and can make more informed decisions from step one. To check this hypothesis, we vary the number of decisions our model makes at the beginning of the solution procedure before we hand control back to VSIDS. The results of the experiment in Figure 2 support this hypothesis. Even if our model is used for only the first ten iterations, it still improves performance over VSIDS.\nOne strength of GQSAT is that VSIDS keeps being updated while decisions are made by GQSAT. We believe that GQSAT complements VSIDS by providing better quality decisions in the initial phase while VSIDS is warming up. Capping the number of model calls can significantly reduce the main bottleneck of our approach – wall clock time spent on model evaluation. Optimizing for speed was not our focus; however, even with the current unoptimized implementation, if we use the model for the first 500 iterations, and assuming this gives us a 2x reduction in total iterations, our approach is competitive whenever it takes more than 20 seconds for the base solver to solve the problem." }, { "heading": "4.2 GENERALIZATION PROPERTIES OF GQSAT", "text": "" }, { "heading": "4.2.1 GENERALIZATION ACROSS PROBLEM SIZES", "text": "Table 2 shows that GQSAT has no difficulties generalizing to bigger problems, showing almost a 4x improvement in iterations for a dataset 5 times bigger than the training set. GQSAT on average leads to more variable assignment changes per step, e.g., 7.58 vs 5.89 on SAT-100-430. It might seem surprising that the model performs better for larger problems. However, our performance metric is relative. An increase in score for different problem sizes might also mean that the base solver scales worse than our method does for this benchmark." }, { "heading": "4.2.2 GENERALIZATION FROM SAT TO UNSAT", "text": "An important characteristic of GQSAT is that the problem formulation and representation make it possible to solve unSAT problems when training only on SAT, which was problematic for some of the existing approaches (Selsam et al., 2018).\nThe performance is, however, worse than the performance on satisfiable problems. On the one hand, SAT and unSAT problems are different. When the solver finds one satisfying assignment, the problem is solved. For unSAT, the algorithm needs to exhaust all possible options to prove that there is no such assignment.
On the other hand, there is one important similarity between these two types of problems – an algorithm has to prune the search tree as fast as possible. Our measurements of average propagations per step demonstrate that GQSAT learns how to prune the tree more efficiently than VSIDS (6.36 vs 4.17 for unSAT-50-218)." }, { "heading": "4.2.3 GENERALIZATION ACROSS PROBLEM STRUCTURES", "text": "SAT problems have distinct structures. The graph representation of a random 3-SAT problem looks much different from that of a graph coloring problem. To investigate how much our model, trained on SAT-50, can generalize to problems of different structures, we evaluate it on the flat graph coloring benchmark from SATLIB (Hoos & Stützle, 2000). All the problems in the benchmark are satisfiable.\nTable 3 shows a decrease in GQSAT performance when generalizing to another problem distribution. We believe there are two potential reasons. First, different SAT problem distributions have different graph properties that are not captured during training on another distribution. Second, this might be related to our model selection process, which does not favor generalization across problem structures.\nTable 3 shows that graph coloring problems have more variables. We conducted an experiment investigating GQSAT's ability to scale to larger problems (more variables, more clauses). We trained GQSAT on flat-75-180, with problems of 225 variables and 840 clauses. Graph coloring benchmarks have only 100 problems each, so we do not split them into train/validation/test sets; we use flat-75-180 for training and flat-100-239 for model selection. We use the same hyperparameters as in all previous experiments, changing only the gradient clipping parameter to 0.1. The results in Table 4 show that GQSAT can scale to bigger problems on the flat graph coloring benchmark.\nApart from scaling to bigger graphs, we could test scaling for longer episodes. Table 1 shows exponential growth in the number of iterations it takes MiniSat to solve larger problems. Our preliminary experiments show that generalizing is easier than learning. Learning on SAT-100-430 requires more resources, does not generalize as well, and is generally less stable than training on SAT-50-218. This is most likely related to higher variance in the returns caused by longer episodes, challenges for temporal credit assignment, and difficulties with exploration, motivating further research. It also motivates curriculum learning as the next step of GQSAT development. Bapst et al. (2019) show a positive effect of curriculum learning on RL with GNNs." }, { "heading": "4.3 DATA EFFICIENCY", "text": "We design our next experiment to understand how many different SAT problems GQSAT needs to learn from. We varied the SAT-50-218 train set from a single problem to 800 problems. Figure 3 demonstrates that GQSAT is extremely data efficient. Having more data helps in most cases but, even with a single problem, GQSAT generalizes across problem sizes and to unSAT instances. This should allow GQSAT to generalize to new benchmarks without access to many problems from them. We suppose that GQSAT's data efficiency is one of the benefits of using RL. The environment allows the agent to explore diverse state-action space regions, making it possible to learn useful policies even from a single instance. In supervised learning, the data diversity is addressed at the training data generation step."
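For clarity, the evaluation metric used throughout these experiments (MRIR, defined in Section 3.3) can be computed as follows; the decision counts in the usage line are made-up numbers for illustration:

```python
import numpy as np

def mrir(minisat_iters, gqsat_iters):
    """Median Relative Iteration Reduction w.r.t. MiniSat: per-problem ratio
    of MiniSat decisions to GQSAT decisions, reduced with the median."""
    ratios = np.asarray(minisat_iters, dtype=float) / np.asarray(gqsat_iters)
    return float(np.median(ratios))

# hypothetical decision counts on three problems
print(mrir([120, 300, 95], [50, 140, 60]))   # -> about 2.14
```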
}, { "heading": "5 RELATED WORK", "text": "Using machine learning for the SAT problem is not a new idea (Haim & Walsh, 2009; Grozea & Popescu, 2014; Flint & Blaschko, 2012; Singh et al., 2009). Xu et al. (2008) propose a portfoliobased approach which yielded strong results in 2007 SAT competition. Liang et al. (2016) treat each SAT problem as a multi-armed bandit problem capturing variables’ ability to generate learnt clauses.\nRecently, SAT has attracted interest in the deep learning community. There are two main approaches: solving a problem end-to-end or learning heuristics while keeping the algorithm backbone the same. Selsam et al. (2018, NeuroSAT) take an end-to-end supervised learning approach demonstrating that GNN can generalize to SAT problems bigger than those used for training. NeuroSAT finds satisfying assignments for the SAT formulae and thus cannot generalize from SAT to unSAT problems. Moreover, the method is incomplete and might generate incorrect results, which is extremely important especially for unSAT problems. Selsam & Bjørner (2019) modify NeuroSAT and integrate it into popular SAT solvers to improve timing on SATCOMP-2018 benchmark. While the approach shows its potential to scale to large problems, it requires an extensive training set including over 150,000 data points. Amizadeh et al. (2018) propose an end-to-end GNN architecture to solve circuit-SAT problems. While their model never produces false positives, it cannot solve unSAT problems.\nThe following methods take the second approach learning a branching heuristic instead of learning an algorithm end-to-end. Jaszczur et al. (2019) take the supervised-learning approach using the same\ngraph representation as in Selsam et al. (2018). The authors show a positive effect of combining DPLL/CDCL solver with the learnt model. As in Selsam et al. (2018), their approach requires a diligent process of the test set crafting. Also, the authors do not compare their approach to the VSIDS heuristic, which is known to be crucial component of CDCL (Katebi et al., 2011).\nWang & Rompf (2018), whose environment we took as a starting point, show that DQN does not generalize for 20-91 3-SAT problems, whereas Alpha(GO) Zero does. Our results show that the issue is related to the state representation. They use CNNs, which are not invariant to variable renaming or permutations. Moreover, CNNs require a fixed input size which makes it infeasible when applying to problems with different number of variables or clauses.\nThe work of Lederman et al. (2018) is closest to ours. They train a REINFORCE (Williams, 1992) agent to replace the branching heuristic for Quantified Boolean Formulas using GNNs for function approximation. They note positive generalization properties across the problem size for problems from similar distributions. Besides the base RL algorithm and some minor differences, our approaches differ mainly in the state representation. They use 30 variables for the global state encoding and seven variables for vertex feature vectors. GQSAT does not require feature engineering to construct the state. We use only two bits to distinguish variables from clauses and encode literal polarities. Also, Lederman et al. (2018) use separate vertices for x and ¬x in the graph representation. Vinyals et al. (2015) introduce a recurrent architecture for approximately solving complex geometric problems, such as the Traveling Salesman Problem (TSP), approaching it in a supervised way. Bello et al. 
(2016) consider combinatorial optimization problems with RL, showing results on TSP and the Knapsack Problem. Khalil et al. (2017) approach combinatorial optimization using GNNs and DQN, learning a heuristic that is later used greedily. It is slightly different from the approach we take, since their heuristic is effectively the algorithm itself. We augment only a part of the algorithm – the branching heuristic. Paliwal et al. (2019) use GNNs with imitation learning for theorem proving. Carbune et al. (2018) propose a general framework for injecting an RL agent into existing algorithms.\nCai et al. (2019) use RL to find a suboptimal solution that is further refined by another optimization algorithm, simulated annealing (Kirkpatrick et al., 1983, SA) in their case. The method is not limited to SA, and this modularity is valuable. However, there is one important drawback of the approach: the second optimization algorithm might benefit more from the first algorithm if they are interleaved. For instance, GQSAT can guide search before VSIDS overcomes its initialization bias.\nRecently, GNNs have received a lot of attention in the RL community, enabling the study of RL agents in state/action spaces of dynamic size, which is crucial for generalization beyond the given task. Wang et al. (2018) and Sanchez-Gonzalez et al. (2018) consider GNNs for generalization in control problems. Bapst et al. (2019) investigate graph-based representations for the construction task and notice high generalization capabilities of their agents. Jiang et al. (2018); Malysheva et al. (2018); Agarwal et al. (2019) study generalization of behaviour in multi-agent systems, noting the GNN benefits due to their invariance to the number of agents in the team or other environmental entities." }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "In this paper, we introduced GQSAT, a branching heuristic of a SAT solver that causes more variable propagations per step, solving SAT problems in fewer iterations compared to VSIDS. GQSAT uses a simple state representation and does not require elaborate reward shaping. We demonstrated its generalization abilities, showing a more than 2-3X reduction in iterations for problems up to 5X larger, and a 1.5-2X reduction when generalizing from SAT to unSAT. We showed how GQSAT improves VSIDS and showed that our method is data efficient. While our method generalizes across problem structures to a lesser extent, we showed that training on data from other distributions might lead to further performance improvements. Our findings lay the groundwork for future research that we outline below.\nScaling GQSAT to larger problems. Industrial-sized benchmarks have millions of variables. Our experiments training on SAT-100 and graph coloring show that increases in problem complexity make our method less stable due to typical RL challenges: longer credit assignment spans, reward shaping, etc. Further research will focus on scaling GQSAT using the latest stabilizing techniques (Hessel et al., 2018), more sophisticated exploration methods and curriculum learning.\nFrom reducing iterations to speeding up. SAT heuristics are good because they are fast. It takes constant time to make a decision with VSIDS. GNN inference takes much longer. However, our experiments show that GQSAT can yield an improvement using only the first k steps. Reducing the network polling frequency and replacing the variable activities with GQSAT's output, similarly to Selsam & Bjørner (2019), is another interesting avenue for future research. 
An efficient C++ implementation of our method should also help.\nInterpretation of the results. Newsham et al. (2014) show that the community structure of SAT problems is related to the problem complexity. We are interested in understanding how graph structure influences the performance of GQSAT and how we can exploit this knowledge to improve GQSAT.\nAlthough we showed the powerful generalization properties of graph-based RL, we believe the problem is still far from solved, and our work is just one stepping stone towards a new generation of solvers that can discover and exploit heuristics that are too difficult for a human to design." }, { "heading": "A PROPAGATIONS PER STEP", "text": "(Figures: average assignments change per step for SAT-50-218 and unSAT-50-218, plotted per problem.)" }, { "heading": "B REPRODUCIBILITY", "text": "" }, { "heading": "B.1 MODEL ARCHITECTURE", "text": "We use the Encoder-Process-Decode architecture from Battaglia et al. (2018). The encoder and decoder are independent graph networks, i.e., MLPs taking the whole vertex or edge feature matrix as a batch, without message passing. We call the middle part 'the core'. The output of the core is concatenated with the output of the encoder and gets fed to the core again. We describe all hyperparameters in Appendix B.3. We also plan to release the experimental code and the modified version of MiniSat to use as a gym environment.\nB.2 DATASET\nWe split SAT-50-218 into three subsets: 800 training problems, 100 validation and 100 test problems. For generalization experiments, we use 100 problems from all the other benchmarks.\nFor graph colouring experiments, we train our models using all problems from the flat-75-180 dataset. We select a model based on its performance on all 100 problems from flat-100-239. So, evaluation on these two datasets should not be used to judge the performance of the method, and they are shown separately in Table 4. All the data from the second part of the table was not seen by the model during training (flat-30-60, flat-50-115, flat-125-301, flat-150-360, flat-175-417, flat-200-479).\nB.3 HYPERPARAMETERS\nHyperparameter: Value (Comment)\nDQN\n– Batch updates: 50 000\n– Learning rate: 0.00002\n– Batch size: 64\n– Memory replay size: 20 000\n– Initial exploration: 1.0\n– Final exploration: 0.01\n– Exploration decay: 30 000 (environment steps)\n– Initial exploration steps: 5000 (environment steps, filling the buffer, no training)\n– Discounting γ: 0.99\n– Update frequency: 4 (every 4th environment step)\n– Target update frequency: 10\n– Max decisions allowed for training: 500 (used as a safety measure against being stuck in an episode)\n– Max decisions allowed for testing: 500 (varied among [0, 10, 50, 100, 300, 500, 1000] for the experiment in Figure 2)\n– Step penalty size p: -0.1\nOptimization\n– Optimizer: Adam\n– Adam betas: 0.9, 0.999 (Pytorch default)\n– Adam eps: 1e-08 (Pytorch default)\n– Gradient clipping: 1.0 (0.1 for training on the graph coloring dataset)\n– Gradient clipping norm: L2\n– Evaluation frequency: 1000\nGraph Network\n– Message passing iterations: 4\n– Number of hidden layers for GN core: 1\n– Number of units in GN core: 64\n– Encoder output dimensions: 32 (for vertex, edge and global updaters)\n– Core output dimensions: 64, 64, 32 (for vertex, edge and global, respectively)\n– Decoder output dimensions: 32 (for the vertex updater; since only Q-values are used, no edge/global updater is needed)\n– Activation function: ReLU (for everything but the output transformation) 
– Edge-to-vertex aggregator: sum\n– Variable-to-global aggregator: average\n– Edge-to-global aggregator: average\n– Normalisation: Layer Normalisation (after each GN updater)" } ]
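To make the state representation of Section 3.1 concrete, below is a small sketch that builds the variable-clause graph for a CNF as arrays of the kind Pytorch Geometric consumes. The assignment of the two one-hot patterns to variables versus clauses is our assumption for illustration; the edge-label convention follows the text:

```python
import numpy as np

def cnf_to_graph(clauses, n_vars):
    """Variable vertices are 0..n_vars-1; clause vertices follow.
    Edge label [1, 0] for a negated literal, [0, 1] otherwise;
    each edge is added in both directions for message passing."""
    senders, receivers, edge_attr = [], [], []
    for c, clause in enumerate(clauses):
        for lit in clause:
            v, c_node = abs(lit) - 1, n_vars + c
            label = [1.0, 0.0] if lit < 0 else [0.0, 1.0]
            for s, r in ((v, c_node), (c_node, v)):
                senders.append(s)
                receivers.append(r)
                edge_attr.append(label)
    # one-hot vertex features: variables vs. clauses (assumed ordering)
    x = np.array([[1.0, 0.0]] * n_vars + [[0.0, 1.0]] * len(clauses))
    return x, np.array([senders, receivers]), np.array(edge_attr)

# (x1 OR x2) AND (NOT x2 OR x3), the example of Figure 1a
x, edge_index, edge_attr = cnf_to_graph([[1, 2], [-2, 3]], n_vars=3)
```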
2019
null
SP:4ae68ed1b9175b904a6f026277e9ff8bb288797b
[ "The paper presents new theory to develop understanding about why adversarially robust neural networks show lower test performance compared to their standard counterparts despite being more robust to perturbations in the data. The main hypothesis is that the degradation in performance in adversarially robust networks is due to many samples being concentrated around the decision boundary, which makes the network less confident about its decisions. The paper studies this hypothesis by deriving a bound on the generalization error based on the margin between the samples in the training set and the decision boundary. The paper then presents empirical demonstrations that aim to illustrate the theoretical findings. ", "This paper focuses on analyzing the regularization of adversarial robustness (AR) on neural networks (NNs). They establish a generalization error (GE) bound characterizing the regularization of AR, and identify two quantities: margin distributions and singular values of NNs' weight matrices. With empirical studies, they show that AR is achieved by regularizing NNs towards less confident solutions and making feature space changes smoother uniformly in all directions, which prevents sudden change wrt perturbations but leads to performance degradation." ]
The problem of adversarial examples has shown that modern Neural Network (NN) models could be rather fragile. Among the most promising techniques to solve the problem, one is to require the model to be ε-adversarially robust (AR); that is, to require the model not to change predicted labels when any given input examples are perturbed within a certain range. However, it is observed that such methods would lead to standard performance degradation, i.e., the degradation on natural examples. In this work, we study the degradation through the regularization perspective. We identify quantities from generalization analysis of NNs; with the identified quantities we empirically find that AR is achieved by regularizing/biasing NNs towards less confident solutions by making the changes in the feature space (induced by changes in the instance space) of most layers smoother uniformly in all directions; so to a certain extent, it prevents sudden changes in prediction w.r.t. perturbations. However, the end result of such smoothing concentrates samples around decision boundaries, resulting in less confident solutions, and leads to worse standard performance. Our studies suggest that one might consider ways that build AR into NNs in a gentler way to avoid the problematic regularization.
[]
[ { "authors": [ "Idan Attias", "Aryeh Kontorovich", "Yishay Mansour" ], "title": "Improved generalization bounds for robust learning", "venue": "Technical report,", "year": 2018 }, { "authors": [ "Peter Bartlett", "Dylan J. Foster", "Matus Telgarsky" ], "title": "Spectrally-normalized margin bounds for neural networks", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Gary Bécigneul" ], "title": "On the effect of pooling on the geometry of representations", "venue": "Technical report,", "year": 2017 }, { "authors": [ "Daniel Cullina", "Arjun Nitin Bhagoji", "Prateek Mittal" ], "title": "PAC-learning in the presence of evasion adversaries", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In AISTATS,", "year": 2010 }, { "authors": [ "Xavier Glorot", "Antoine Bordes", "Yoshua Bengio" ], "title": "Deep Sparse Rectifier", "venue": "Neural Networks. In AISTATS,", "year": 2011 }, { "authors": [ "Ian J. Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and Harnessing Adversarial Examples", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity Mappings in Deep Residual Networks", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Kui Jia", "Shuai Li", "Yuxin Wen", "Tongliang Liu", "Dacheng Tao" ], "title": "Orthogonal Deep Neural Networks", "venue": "Technical report,", "year": 2019 }, { "authors": [ "Justin Khim", "Po-Ling Loh" ], "title": "Adversarial Risk Bounds for Binary Classification via Function Transformation", "venue": "Technical report,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E. Hinton" ], "title": "ImageNet Classification with Deep Convolutional Neural Networks", "venue": "In NIPS,", "year": 2012 }, { "authors": [ "Alexey Kurakin", "Ian J. Goodfellow", "Samy Bengio" ], "title": "Adversarial Machine Learning at Scale", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Yao Li", "Martin Renqiang Min", "Wenchao Yu", "Cho-Jui Hsieh", "Thomas C.M. 
Lee", "Erik Kruus" ], "title": "Optimal Transport Classifier: Defending Against Adversarial Attacks by Regularized Deep Embedding", "venue": "Technical report,", "year": 2018 }, { "authors": [ "Chunchuan Lyu", "Kaizhu Huang", "Hai Ning Liang" ], "title": "A unified gradient regularization family for adversarial examples", "venue": "In ICDM,", "year": 2015 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards Deep Learning Models Resistant to Adversarial Attacks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Shin Ichi Maeda", "Shin Ishii", "Masanori Koyama" ], "title": "Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning", "venue": "PAMI, pages", "year": 1939 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "Nathan Srebro" ], "title": "A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Behnam Neyshabur", "Zhiyuan Li", "Srinadh Bhojanapalli", "Yann LeCun", "Nathan Srebro" ], "title": "The Role Of Over-parametrization In Generalization Of Neural Networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "F W Pfeiffer" ], "title": "Automatic differentiation in prose", "venue": "In ICLR Workshop,", "year": 2017 }, { "authors": [ "Ludwig Schmidt", "Shibani Santurkar", "Dimitris Tsipras", "Kunal Talwar", "Aleksander M Madry" ], "title": "Adversarially Robust Generalization Requires More Data", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "Hanie Sedghi", "Vineet Gupta", "Philip M. Long" ], "title": "The Singular Values of Convolutional Layers", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "S. Shalev-Shwartz", "S. Ben-David" ], "title": "Understanding Machine Learning: From Theory to Algorithms. Understanding Machine Learning: From Theory to Algorithms", "venue": null, "year": 2014 }, { "authors": [ "Aman Sinha", "Hongseok Namkoong", "John Duchi" ], "title": "Certifying Some Distributional Robustness with Principled Adversarial Training", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Jure Sokolic", "Raja Giryes", "Guillermo Sapiro", "Miguel R.D. Rodrigues" ], "title": "Robust Large Margin Deep Neural Networks", "venue": "IEEE Transactions on Signal Processing,", "year": 2017 }, { "authors": [ "Dong Su", "Huan Zhang", "Hongge Chen", "Jinfeng Yi", "C V Aug" ], "title": "Is Robustness the Cost of Accuracy ? 
– A Comprehensive Study on the Robustness of", "venue": null, "year": 2018 }, { "authors": [ "Christian Szegedy", "W Zaremba", "I Sutskever" ], "title": "Intriguing properties of neural networks", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Ma̧dry" ], "title": "Robustness May Be at Odds with Accuracy", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Yusuke Tsuzuku", "Issei Sato", "Masashi Sugiyama" ], "title": "Lipschitz-Margin Training : Scalable Certification of Perturbation Invariance for Deep Neural Networks", "venue": null, "year": 2018 }, { "authors": [ "Nakul Verma" ], "title": "Distance Preserving Embeddings for General n-Dimensional Manifolds", "venue": "Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "Huan Xu", "Shie Mannor" ], "title": "Robustness and generalization", "venue": "Machine Learning,", "year": 2012 }, { "authors": [ "Dong Yin", "Peter Bartlett" ], "title": "Rademacher Complexity for Adversarially Robust", "venue": "Technical report,", "year": 2018 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide Residual Networks", "venue": "In BMVC,", "year": 2016 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P. Xing", "Laurent El Ghaoui", "Michael I. Jordan" ], "title": "Theoretically Principled Trade-off between Robustness and Accuracy", "venue": null, "year": 2019 }, { "authors": [ "Hongyi Zhang", "Yann N. Dauphin", "Tengyu Ma" ], "title": "Fixup Initialization: Residual Learning Without Normalization", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Xu", "Mannor (Xu", "Mannor" ], "title": "If a learning algorithm is (K, (·))-robust and L is bounded, a.k.a", "venue": "(Xu & Mannor (Xu and Mannor,", "year": 2012 }, { "authors": [ "Jia" ], "title": "2019), when we only have max pooling layers and ReLU as nonlinear layer in NNs, J(x) is a linear operator at a local region around x", "venue": "For terminology concerning regions,", "year": 2020 }, { "authors": [ "Lee" ], "title": "2015): during training, we zero-pad 4 pixels along each image side, and sample a 32 × 32 region cropped from the padded image or its horizontal flip", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Despite the remarkable performance (Krizhevsky et al., 2012) of Deep Neural Networks (NNs), they are found to be rather fragile and easily fooled by adversarial examples (Szegedy et al., 2014). More intriguingly, these adversarial examples are generated by adding imperceptible noise to normal examples, and thus are indistinguishable for humans. NNs that are more robust to adversarial examples tend to have lower standard accuracy (Su et al., 2018), i.e., the accuracy measured on natural examples. The trade-off between robustness and accuracy has been observed (Kurakin et al., 2017; Madry et al., 2018; Tsipras et al., 2019). To understand such a phenomenon, Tsipras et al. (2019) show that for linear models, if examples are closed to decision boundaries, robustness provably conflicts with accuracy, though the proof seems unlikely to generalize to NNs. Zhang et al. (2019) show that a gap exists between surrogate risk gap and 0-1 risk gap if many examples are close to decision boundaries, and better robustness can be achieved by pushing examples away from decision boundaries. But pushing examples away again degrades NN performance in their experiments. A more established remedy is developed to require NNs to be -adversarially robust (AR), e.g., via Adversarial Training (Madry et al., 2018), Lipschitz-Margin Training (Tsuzuku et al., 2018); that is, they require the model not to change predicted labels when any given input examples are perturbed within a certain range. Note that such hard requirement is different from penalties on the risk function employed by Lyu et al. (2015) and Miyato et al. (2018), which is not our subject of investigation (more discussion in appendix A). In practice, hard-requirement methods are found to lead to worse performance measured in standard classification accuracy. We aim to study this branch of methods.\nWe investigate how adversarial robustness influence the behaviors of NNs to make them more robust but have lower performance. In an earlier time (Szegedy et al., 2014), adversarial training has been suggested as a form of regularization: it augments the training of NNs with adversarial examples, thus might improve the generalization of the end models. How does a possible improvement in generalization end up degrading performance? It prompts us to analyze the regularization effects of AR on NNs. A successful regularization technique is expected to improve test performance,\nbut an improved performance is only one of the possible outcomes of improved generalization. Technically, improved generalization implies the reduction in gap between training errors and test errors. Regularization achieves the gap reduction by reducing the size of the hypothesis space, which reduces the variance, but meanwhile increases the bias of prediction made — a constant classifier can have zero generalization errors, but also have low test performance. Thus, when a hypothesis space is improperly reduced, another possible outcome is biased poorly performing models with reduced generalization gaps.\nKey results. Through a series of theoretically motivated experiments, we find that AR is achieved by regularizing/biasing NNs towards less confident solutions by making the changes in the feature space of most layers (which are induced by changes in the instance space) smoother uniformly in all directions; so to a certain extent, it prevents sudden change in prediction w.r.t. perturbations. 
However, the end result of such smoothing concentrates examples around decision boundaries and leads to worse standard performance. We elaborate the above statement in detail shortly in section 1.1.

Overall, the investigation of the generalization behaviors of NNs points out possible directions to explore if we are to resolve the issue of the test performance degradation caused by AR. The main result shows that the hypothesis space of NNs is improperly reduced, so we might investigate how to avoid such a reduction when enforcing AR. Though beyond the scope of this work, we conjecture that the improper reduction comes from the indistinguishability of the change induced in the intermediate layers of NNs by adversarial noise and that induced by inter-class difference. To guarantee AR, NNs are asked to smooth out differences uniformly in all directions in a high-dimensional space, and thus are biased towards diffident solutions that make similar/concentrated predictions. We leave the investigation of this conjecture as future work." }, { "heading": "1.1 AR LEADS TO DIFFIDENT NNS WITH MORE INDECISIVE MISCLASSIFICATIONS", "text": "This section elaborates the key results briefly presented previously." }, { "heading": "1. AR reduces the variance of the norms of the activations/outputs at most layers that are induced by feeding perturbations (in any direction) to that layer from the previous layer (compared across NNs with different AR strength).", "text": "A series of theoretically motivated experiments prompts us to look at the singular value distributions of the weight matrix of each layer of the NNs. As shown in fig. 1a, we find that overall the standard deviation (STD) of the singular values associated with a layer of the NN trained with lower AR strength 4 is larger than that of the NN with higher AR strength 16 (the AR strength is characterized by the maximally allowed l∞ norm of the adversarial examples used to train the NNs; we use adversarial training (Madry et al., 2018) to build adversarial robustness into NNs, and details can be found in appendix B.1) — the green dots are mostly below the red dots. Note that given a matrix W and an example x, the singular values of W determine how the norm ||Wx|| is changed when compared with ||x||. More specifically, let σ_min, σ_max be the minimal and maximal singular values; if x is not in the null space of W, then we have ||Wx|| ∈ [σ_min||x||, σ_max||x||], where || · || denotes the 2-norm. This applies to the norm ||δx|| of a perturbation as well; that is, given possible changes δx of x of the same norm ||δx|| = c, where c is a constant, the variance of σ(W) roughly determines the variance of ||Wδx||, where σ(W) denotes the set of all singular values {σ_i} of W. In more detail, note that by the SVD decomposition, Wδx = ∑_i σ_i u_i v_i^T δx, thus σ_i determines how the component v_i^T δx in the direction of v_i is amplified. To see an example, suppose that σ_min = σ_max = σ_0; then the variance of σ(W) is zero, and ||Wδx|| = σ_0||δx||. In this case, the variance of ||Wδx|| (given an ensemble of perturbations δx of the same norm c) is zero as well. The conclusion holds as well for ReLU(Wδx), where W here is a weight matrix of a layer of a NN, and ReLU denotes the Rectifier Linear Unit activation function (proved by applying the Cauchy interlacing law by row deletion (Chafai) to lemma 4.1). Consequently, by reducing the variance of the singular values of the weight matrix of a layer of the NN, AR reduces the norm variance of layer activations induced by input perturbations.
2. The reduced norm variance induced by example perturbations concentrates examples, and it empirically concentrates them around decision boundaries; that is, predictions are more diffident. The reduced variance implies that the outputs of each layer of the NN are more concentrated, but it does not tell where they are concentrated. Note that in the previous paragraph, the variance relationship discussed between ||Wδx|| and ||δx|| equally applies to ||Wx|| and ||x||, where x is an actual example instead of a perturbation. Thus, to find out where perturbations concentrate, we can look at the concentration of samples. Technically, we look at the margins of examples. In a multi-class setting, suppose a NN computes a score function f : R^d → R^L, where L is the number of classes; a way to convert this to a classifier is to select the output coordinate with the largest magnitude, meaning x ↦ argmax_i f_i(x). The confidence of such a classifier can be quantified by margins. A margin measures the gap between the output for the correct label and the other labels, meaning f_y(x) − max_{i≠y} f_i(x). The margin depends piece-wise linearly on the scores, thus the variance of margins is also in a piece-wise linear relationship with the variance of the scores, which are computed linearly from the activation of a NN layer. Thus, the consequence of the concentration of activations discussed in the previous paragraph can be observed in the distribution of margins. More details of the connection between singular values and margins are discussed in section 5.2.2, after we present lemma 4.1. A zero margin implies that a classifier has equal propensity to classify an example into two classes, i.e., the example is on the decision boundary. We plot the margin distribution of the test set of CIFAR10 in fig. 1b, and find that margins are increasingly concentrated around zero — that is, around the decision boundaries — as AR strength grows.
3. The sample concentration around decision boundaries smoothes the sudden changes induced by perturbations, but also increases indecisive misclassifications. The concentration of test set margins implies that the change in margins induced by perturbations in the instance space is reduced by AR. The statement may not be immediately obvious, so we explain it in detail as follows. Given two examples x, x′ from the test set, δx = x − x′ can be taken as a significant perturbation that changes the example x to x′. The concentration of overall margins implies that the change induced by δx is statistically smaller in NNs with higher AR strength. Thus, for an adversarial perturbation applied to x, statistically the change of margins is smaller as well — experimentally, this is reflected in the increased adversarial robustness of the network, as shown by the increasing curve in fig. 1c. That is, the sudden changes of margins originally induced by adversarial perturbations are smoothed (to change slowly). However, the cost of such smoothness is lower confidence in prediction, and more test examples are slightly/indecisively moved to the wrong sides of the decision boundaries — incurring lower accuracy, as shown by the decreasing curve in fig. 1c.

Lastly, we note that the experiments in this section are used to illustrate our main arguments. Further consistent results are reported in section 5 by conducting experiments on CIFAR10/100 and Tiny-ImageNet with networks of varied capacity."
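To make the role of the singular value spread concrete, below is a minimal NumPy sketch (not from the paper; the dimension, the spectra, and the sample counts are our own illustrative choices). It synthesizes two weight matrices with the same mean singular value but different spread, and compares the variance of ||ReLU(W δx)|| over random directions δx of a fixed norm:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64

def matrix_with_spectrum(sigmas):
    """Synthesize a d x d matrix with the prescribed singular values."""
    u, _ = np.linalg.qr(rng.standard_normal((d, d)))
    v, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return u @ np.diag(sigmas) @ v.T

# Two spectra with the same mean (~1.0) but different spread (STD).
flat   = np.full(d, 1.0)                  # STD of singular values = 0
spread = rng.uniform(0.2, 1.8, size=d)    # same mean, larger STD
W_flat, W_spread = matrix_with_spectrum(flat), matrix_with_spectrum(spread)

# Random perturbations of identical norm, pointing in all directions.
deltas = rng.standard_normal((10000, d))
deltas /= np.linalg.norm(deltas, axis=1, keepdims=True)

for name, W in [("flat spectrum  ", W_flat), ("spread spectrum", W_spread)]:
    out = np.maximum(deltas @ W.T, 0.0)   # ReLU(W @ delta), one row per delta
    norms = np.linalg.norm(out, axis=1)
    print(f"{name}: STD of ||ReLU(W dx)|| = {norms.std():.4f}")
```

Under this setup, the flat spectrum yields a visibly smaller standard deviation of the output norms, matching the claim that reducing the spread of singular values makes the induced feature-space changes more uniform across directions.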
}, { "heading": "1.2 OUTLINE AND CONTRIBUTIONS", "text": "As briefly discussed at the beginning, this work carries out generalization analysis on NNs with AR. The quantities we investigate in the previous section are identified by the generalization errors (GE) upper bound we establish at theorem 4.1, which characterizes the regularization of AR on NNs. The key result is actually obtained at the end of a series of analysis, thus we present the outline of the analysis here.\nOutline. After presenting some preliminaries in section 3, we proceed to analyze the regularization of AR on NNs, and establish a GE upper bound in section 4. The bound prompts us to look at the GE gaps in experiments. In section 5.1, we find that for NNs trained with higher AR strength, the surrogate risk gaps (GE gaps) decrease for a range of datasets, i.e., CIFAR10/100 and Tiny-ImageNet (ImageNet, 2018). It implies AR effectively regularizes NNs. We go further to study the finer behavior change of NNs that might lead to such a gap reduction. Again, we follow the guidance of theorem 4.1. We look at the margins in section 5.2.1, then at the singular value distribution in section 5.2.2, and discover the main results described in section 1.1. More corroborative experiments are run in appendix B.4 to show that such phenomenon exists in a broad range of NNs with varied capacity, and more complementary results are present in appendix B.3 to explain some seemingly abnormal observations. More related works are present in section 2.\nContributions. Overall, the core contribution in this work is to show that adversarial robustness (AR) regularizes NNs in a way that hurts its capacity to learn to perform in test. More specifically:\n• We establish a generalization error (GE) bound that characterizes the regularization of AR on NNs. The bound connects margin with adversarial robustness radius via singular values of weight matrices of NNs, thus suggesting the two quantities that guide us to investigate the regularization effects of AR empirically. • Our empirical analysis tells that AR effectively regularizes NNs to reduce the GE gaps. To understand how reduced GE gaps turns out to degrade test performance, we study variance of singular values of layer-wise weight matrices of NNs and distributions of margins of samples, when different strength of AR are applied on NNs. • The study shows that AR is achieved by regularizing/biasing NNs towards less confident solutions\nby making the changes in the feature space of most layers (which are induced by changes in the instance space) smoother uniformly in all directions; so to a certain extent, it prevents sudden change in prediction w.r.t. perturbations. However, the end result of such smoothing concentrates samples around decision boundaries and leads to worse standard performance." }, { "heading": "2 RELATED WORKS", "text": "Robustness in machine learning models is a large field. We review some more works that analyze robustness from the statistical perspective. The majority of works that study adversarial robustness from the generalization perspective study the generalization behaviors of machine learning models under adversarial risk. The works that study adversarial risk include Attias et al. (2018); Schmidt et al. (2018); Cullina et al. (2018); Yin and Bartlett (2018); Khim and Loh (2018); Sinha et al. (2018). 
The bounds obtained under the setting of adversarial risk characterize the risk gap introduced by adversarial examples; thus, it is intuitive that a larger risk gap would be obtained for a larger allowed perturbation limit ε, which is roughly among the conclusions obtained in those bounds. That is to say, the conclusion normally leads to a larger generalization error as an algorithm is asked to handle more adversarial examples, for it focuses on characterizing the error on adversarial examples, not that on natural examples. However, adversarial risk is not our focus. In this paper, we study, when a classifier needs to accommodate adversarial examples, the influence that the accommodation has on the generalization behaviors of the empirical risk of natural data." }, { "heading": "3 PRELIMINARIES", "text": "Assume an instance space Z = X × Y, where X is the space of input data and Y is the label space. Z := (X, Y) are the random variables with an unknown distribution µ, from which we draw samples. We use S_m = {z_i = (x_i, y_i)}_{i=1}^m to denote the training set of size m, whose examples are drawn independently and identically distributed (i.i.d.) by sampling Z. Given a loss function l, the goal of learning is to identify a function T : X ↦ Y in a hypothesis space (a class T of functions) that minimizes the expected risk

R(l ◦ T) = E_{Z∼µ}[l(T(X), Y)].

Since µ is unknown, the observable quantity serving as a proxy for the expected risk R is the empirical risk

R_m(l ◦ T) = (1/m) ∑_{i=1}^m l(T(x_i), y_i).

Our goal is to study the discrepancy between R and R_m, which is termed the generalization error — it is also sometimes termed the generalization gap in the literature:

GE(l ◦ T) = |R(l ◦ T) − R_m(l ◦ T)|. (1)

Definition 1 (Covering number). Given a metric space (S, ρ) and a subset S̃ ⊂ S, we say that a subset Ŝ of S̃ is an ε-cover of S̃ if ∀s̃ ∈ S̃, ∃ŝ ∈ Ŝ such that ρ(s̃, ŝ) ≤ ε. The ε-covering number of S̃ is

N_ε(S̃, ρ) = min{|Ŝ| : Ŝ is an ε-cover of S̃}.

Various notions of adversarial robustness have been studied in existing works (Madry et al., 2018; Tsipras et al., 2019; Zhang et al., 2019). They are conceptually similar; in this work, we formalize its definition to make clear the object of study. Definition 2 ((ρ, ε)-adversarial robustness). Given a multi-class classifier f : X → R^L, where L is the number of classes, and a metric ρ on X, f is said to be adversarially robust w.r.t. adversarial perturbations of strength ε if there exists an ε > 0 such that ∀z = (x, y) ∈ Z and ∀δx ∈ {δx : ρ(δx) ≤ ε}, we have f_ŷ(x + δx) − f_i(x + δx) ≥ 0, where ŷ = argmax_j f_j(x) and i ≠ ŷ ∈ Y. ε is called the adversarial robustness radius. When the metric used is clear, we also refer to (ρ, ε)-adversarial robustness as ε-adversarial robustness.

Note that the definition is an example-wise one; that is, it requires each example to have a guarding area in which all examples are of the same class. Also note that the robustness is w.r.t. the predicted class, since the ground-truth label of an x in test is unknown.

We characterize the GE with ramp risk, a typical risk for theoretical analysis (Bartlett et al., 2017; Neyshabur et al., 2018b). Definition 3 (Margin Operator). A margin operator M : R^L × {1, . . . , L} → R is defined as

M(s, y) := s_y − max_{i≠y} s_i.

Definition 4 (Ramp Loss). The ramp loss l_γ : R → R^+ is defined as

l_γ(r) := 0 if r < −γ; 1 + r/γ if r ∈ [−γ, 0]; 1 if r > 0.

Definition 5 (Ramp Risk).
Given a classifier f, ramp risk is the risk defined as

R_γ(f) := E(l_γ(−M(f(X), Y))),

where X, Y are the random variables in the instance space Z defined previously.

We will use a different notion of margin in theorem 4.1, and formalize its definition as follows. We reserve the unqualified word “margin” specifically for the margin discussed previously — the output of the margin operator for classification. We call the margin to be introduced the instance-space margin (IM). Definition 6 (Smallest Instance-space Margin). Given an element z = (x, y) ∈ Z, let v(x) be the distance from x to its closest point on the decision boundary, i.e., the instance-space margin (IM) of example x. Given an ε-covering of Z, let

v_min = min_{x ∈ {x ∈ X : ||x − x_i||_2 ≤ ε for some x_i ∈ S_m}} v(x). (2)

v_min is the smallest instance-space margin over elements in the covering balls that contain training examples." }, { "heading": "4 THEORETICAL INSTRUMENTS FOR EMPIRICAL STUDIES ON AR", "text": "In this section, we rigorously establish the bound mentioned in the introduction. We study the map T defined in section 3 as a NN (though technically, T is now a map from X to R^L, instead of to Y; such an abuse of notation should be clear in context). To begin with, we introduce an assumption before we state the generalization error bound guaranteed by adversarial robustness.

Assumption 4.1 (Monotony). Given a point x ∈ X, let x′ be the point on the decision boundary of a NN T that is closest to x. Then, for all x′′ on the line segment x + t(x′ − x), t ∈ [0, 1], the margin M(Tx′′, y) decreases monotonically.

The assumption is a regularity condition on the classifier that rules out undesired oscillation between x and x′. To see how, notice that the margin defined in definition 3 reflects how confidently the decision is made. Since x′ is on the decision boundary, the classifier is unsure how it should be classified. Thus, when the difference x′ − x is gradually added to x, ideally we want the confidence that we have in classifying x to decrease in a consistent way to reflect the uncertainty. Theorem 4.1. Let T denote a NN with ReLU and MaxPooling nonlinear activation functions (a definition is given at eq. (6) for readers’ convenience), l_γ the ramp loss defined at definition 4, and Z the instance space assumed in section 3. Assume that Z is a k-dimensional regular manifold that accepts an ε-covering with covering number (C_X/ε)^k, and that assumption 4.1 holds. Suppose T is ε_0-adversarially robust (defined at definition 2) and ε ≤ ε_0; denote by v_min the smallest IM in the covering balls that contain training examples (defined at definition 6), by σ_min^i the smallest singular value of the weight matrix W_i, i = 1, . . . , L − 1, of the NN, and by {w_i}_{i=1,...,|Y|} the set of vectors made up of the ith rows of W_L (the last layer’s weight matrix). Then, given an i.i.d. training sample S_m = {z_i = (x_i, y_i)}_{i=1}^m drawn from Z, its generalization error GE(l_γ ◦ T) (defined at eq. (1)) satisfies, for any η > 0, with probability at least 1 − η,

GE(l_γ ◦ T) ≤ max{0, 1 − u_min/γ} + √( 2 log(2) C_X^k / (ε^k m) + 2 log(1/η)/m ), (3)

where

u_min = min_{y,ŷ∈Y, y≠ŷ} ||w_y − w_ŷ||_2 ∏_{i=1}^{L−1} σ_min^i · v_min (4)

is a lower bound on the margins of examples in covering balls that contain training samples.

The proof of theorem 4.1 is in appendix C. The bound identifies the quantities that will be studied experimentally in section 5 to understand the regularization of AR on NNs.
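To preview where the form of u_min in eq. (4) comes from before the full proof, the following LaTeX sketch compresses the chain of inequalities established in appendix C (eqs. (9)-(13)); it suppresses the region-wise decomposition of lemma 4.1 and the orthogonality argument that licenses the use of the smallest singular values:

```latex
\begin{align*}
u(x) &= u(x) - u(x')
     && \text{($x'$ is on the boundary, so $u(x') = 0$)} \\
     &= (w_y - w_{\hat y})^\top \left( T'x - T'x' \right) \\
     &\ge \min_{y \neq \hat y} \lVert w_y - w_{\hat y} \rVert_2
          \prod_{i=1}^{L-1} \sigma^i_{\min} \, \lVert x - x' \rVert_2
     && \text{(smallest singular values, layer by layer)} \\
     &\ge \min_{y \neq \hat y} \lVert w_y - w_{\hat y} \rVert_2
          \prod_{i=1}^{L-1} \sigma^i_{\min} \, v_{\min} =: u_{\min}
     && \text{(since $\lVert x - x' \rVert_2 \ge v_{\min}$ by definition 6).}
\end{align*}
```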
The first term in eq. (3) in theorem 4.1 suggests that quantities related to the lower bound of the margin, u_min, might be useful for studying how ε-adversarial robustness (ε-AR) regularizes NNs. However, ε-AR is guaranteed in the instance space, which determines the smallest instance-space margin v_min. To relate the GE bound to ε-AR, we characterize in eq. (4) the relationship between the margin and the IM, via the smallest singular values of NNs’ weight matrices, suggesting that quantities related to the singular values of NNs’ weight matrices might be useful for studying how AR regularizes NNs as well. An illustration of how AR could influence the generalization of NNs through the IM is given in fig. 2a. The rightmost term in eq. (3) is a standard term in the robust framework (Xu and Mannor, 2012) in learning theory, and is not very relevant to the discussion. The remainder of this paper consists of empirical studies based on quantities, e.g., margin distributions and singular values of NNs’ weight matrices, that are related to the identified quantities, i.e., u_min and σ_min^i. These studies aim to illuminate with empirical evidence the phenomena that AR regularizes NNs, reduces GE gaps, but degrades test performance. (Note that though we identified the quantities u_min and σ_min^i related to the upper bound of the GE, the quantities we actually study empirically are margin distributions and all singular values, which characterize the GE of all samples, not just the extreme case, i.e., the upper bound. An analytic characterization of the GE of all samples is not possible since we do not have enough information — at least we do not know the true distribution of the samples; that is why, to arrive at a closed-form analytic characterization of the GE, we resort to the extreme non-asymptotic large-sample behaviors. The analytic form is a neat way to present how the relevant quantities influence the GE. In the rest of the paper, we carry out empirical studies on the distributions of margins and singular values, mostly to investigate AR’s influence on the GE of all samples.)

Before turning to the empirical study, we further present a lemma to illustrate the relation characterized in eq. (4) without the need to jump into the proof of theorem 4.1. It will motivate our experiments later in section 5.2.2. We state the following lemma that relates distances between elements in the instance space to those in the feature space of any intermediate network layer. Lemma 4.1. Given two instances x, x′ ∈ X, let I_l(x) be the activation g(W_l g(W_{l−1} . . . g(W_1 x))) at layer l for x (c.f. the definition of NNs at appendix C.2); then there exist n ∈ N sets of matrices {W_i^{q_j}}_{i=1...l}, j = 1 . . . n, where each matrix W_i^{q_j} is obtained by setting some rows of W_i to zero, and {q_j}_{j=1...n} are arbitrary distinctive symbols indexed by j that index W_i^{q_j}, such that

||I_l(x) − I_l(x′)|| = ∑_{j=1}^n ∫_{s_j}^{e_j} || ∏_{i=1}^l W_i^{q_j} (x − x′) || dt,

where s_1 = 0, s_{j+1} = e_j, e_n = 1, and s_j, e_j ∈ [0, 1] — each [s_j, e_j] is a segment of the line segment, parameterized by t, that connects x and x′.

Its proof is in appendix C, and an illustration is given in fig. 2b. Essentially, it states that a difference in the feature space of a NN, induced by the difference between elements in the instance space, is a summation of the norms of the linear transformations (∏_{i=1}^l W_i^{q_j}) applied to segments of the line segment that connects x and x′ in the instance space. Since W_i^{q_j} is obtained by setting rows of W_i to zero, the singular values of these induced matrices are intimately related to the weight matrices W_i of the NN by the Cauchy interlacing law by row deletion (Chafai). Since the margin of an example x is a linear transform of the difference between I_{L−1}(x) and the I_{L−1}(x′) of an element x′ on the decision boundary, the singular values of {W_i}_{i=1...L−1} determine the amplification/shrinkage of the IM x − x′." }, { "heading": "5 EMPIRICAL STUDIES ON REGULARIZATION OF ADVERSARIAL ROBUSTNESS", "text": "In this section, guided by theorem 4.1, we undertake empirical studies to explore AR’s regularization effects on NNs. We first investigate the behaviors of off-the-shelf architectures of fixed capacity on various datasets in sections 5.1 and 5.2. More corroborative controlled studies that explore the regularization effects of AR on NNs with varied capacity are presented in appendix B.4.
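The empirical studies below repeatedly use the margin operator (definition 3) and the ramp loss (definition 4). A minimal NumPy sketch of both, assuming raw score vectors f(x) are available, follows (function names are ours):

```python
import numpy as np

def margin(scores, y):
    """Margin operator M(s, y) = s_y - max_{i != y} s_i (definition 3)."""
    s = np.asarray(scores, dtype=float).copy()
    s_y = s[y]
    s[y] = -np.inf                        # exclude the target coordinate
    return s_y - s.max()

def ramp_loss(r, gamma):
    """Ramp loss l_gamma (definition 4); applied to r = -M(f(x), y)."""
    if r < -gamma:
        return 0.0
    if r <= 0.0:
        return 1.0 + r / gamma
    return 1.0

scores = np.array([2.0, 0.5, 1.2])        # f(x) for a 3-class problem
y = 0
m = margin(scores, y)                     # 2.0 - 1.2 = 0.8
print(m, ramp_loss(-m, gamma=1.0))        # 0.8, 1 - 0.8/1 = 0.2
```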
}, { "heading": "5.1 ADVERSARIAL ROBUSTNESS EFFECTIVELY REGULARIZES NNS ON VARIOUS DATASETS", "text": "This section aims to explore whether AR can effectively reduce generalization errors — more specifically, the surrogate risk gaps. We use adversarial training (Madry et al., 2018) to build\n2 Note that in the previous paragraph, though we identifies quantities umin and σimin related to the upper bound of GE, the quantities we actually would study empirically are margin distribution and all singular values that characterize the GE of all samples, not just the extreme case (upper bound). The analytic characterization of the GE of all samples is not possible since we do not have enough information (at least we do not know the true distribution of samples). That’s why to arrive at close-form analytic characterization of GE, we resort to the extreme non-asymptotic large-sample behaviors. The analytic form is a neat way to present how relevant quantities influence GE. In the rest of the paper, we would carry on empirical study on the distributions of margins and singular values mostly to investigate AR’s influence on GE of all samples.\nadversarial robustness into NNs. The AR strength is characterized by the maximally allowed l∞ norm of adversarial examples that are used to train the NNs. Details on the technique to build adversarial robustness into NNs is given in appendix B.1.\nOur experiments are conducted on CIFAR10, CIFAR100, and Tiny-ImageNet (ImageNet, 2018) that represent learning tasks of increased difficulties. We use ResNet-56 and ResNet-110 (He et al., 2016) for CIFAR10/100, and Wide ResNet (WRN-50-2-bottleneck) (Zagoruyko and Komodakis, 2016) for Tiny-ImageNet (ImageNet, 2018). These networks are trained with increasing AR strength. Results are plotted in fig. 3, where Net A stands for ResNet56, and Net B for ResNet-110.\nRegularization of AR on NNs. We observe in fig. 3a (shown as blue lines marked by circles) that GE gaps (the gaps between training and test losses) decrease as strength of AR increase; we also observe in fig. 3a that training losses increase as AR strength increase; these results (and more results in subsequent fig. 6) imply that AR does regularize training of NNs by reducing their capacities to fit training samples. Interestingly, in the CIFAR10/100 results in fig. 3b, the test losses show a decreasing trend even when test error rates increase. It suggests that the network actually performs better measured in test loss as contrast to the performance measured in test error rates. This phenomenon results from that more diffident wrong predictions are made by NNs thanks to adversarial training, which will be explained in details in section 5.2, when we carry on finer analysis. We note that on Tiny-ImageNet, the test loss does not decrease as those on CIFAR10/100. It is likely because the task is considerably harder, and regularization hurts NNs even measured in test loss.\nTrade-off between regularization of AR and test error rates. The error rate curves in fig. 3b also tell that the end result of AR regularization leads to biased-performing NNs that achieve degraded test performance. These results are consistent across datasets and networks.\nSeemingly abnormal phenomenon. An seemingly abnormal phenomenon in CIFAR10 observed in fig. 3a is that the error rate gap actually increases. It results from the same underlying behaviors of NNs, which we would introduce in section 5.2, and an overfitting phenomenon that AR cannot control. 
Since it would be a digression to explain it here, the explanation is put in appendix B.3.

We finally note that the adversarial training we reproduce is relevant: its defense effect is comparable with existing works. One may refer to fig. 11 in appendix D.2 for details. We can see from it that adversarial robustness similar to that of Madry et al. (2018) and Li et al. (2018) is achieved for CIFAR10/100 and Tiny-ImageNet in the NNs we reproduce." }, { "heading": "5.2 REFINED ANALYSIS THROUGH MARGINS AND SINGULAR VALUES", "text": "The experiments in the previous sections confirm that AR reduces the GE, but decreases accuracy. Here we study the underlying behaviors of NNs to analyze what leads to this. More specifically, we show that adversarial training implements ε-adversarial robustness by making NNs biased towards less confident solutions; that is, the key finding we present in section 1.1, which explains both the prevented sudden changes in prediction w.r.t. sample perturbations (i.e., the achieved AR) and the reduced test accuracy." }, { "heading": "5.2.1 MARGINS THAT CONCENTRATE MORE AROUND ZERO LEAD TO REDUCED GE GAP", "text": "To study how GE gaps are reduced, theorem 4.1 suggests we first look at the margins of examples — a lower bound on margins is u_min in eq. (4). The analysis of margins has been a widely used tool in learning theory (Bartlett et al., 2017). A margin reflects the confidence that a classifier has in an example, which, after being transformed by a loss function, gives the surrogate loss. Thus, the loss difference between examples is intuitively reflected in the difference in confidence characterized by margins. To study how AR influences the generalization of NNs, we plot in fig. 4 the margin distributions of samples obtained by training ResNet-56 on CIFAR10 and CIFAR100 with increasing AR strength (the same setting as for fig. 3). Applying the same ResNet-56 network respectively to CIFAR-10 and CIFAR-100, two tasks of different learning difficulties, creates learning settings of larger- and smaller-capacity NNs.

Concentration and reduced accuracy. In fig. 4, we can see that on both CIFAR10/100, the distributions of margins become more concentrated around zero as AR grows. The concentration moves the mode of the margin distribution towards zero and moves more examples slightly across the decision boundaries, where the margins are zero, which explains the reduced accuracy.

Concentration and reduced loss/GE gap. The concentration has different consequences for training and test losses. Before describing the consequences, to directly relate the concentration to the loss gap, we further introduce the estimated probabilities of examples. This is because, though we use the ramp loss in the theoretical analysis, in the experiments we explore the behaviors of the more practically used cross-entropy loss. The loss maps one-to-one to the estimated probability, but not to the margin, though they both serve as measures of confidence. Suppose p(x) is the output of the softmax function of dimension L (L is the number of target classes), and y is the target label. The estimated probability of x is the y-th dimension of p(x), i.e., (p(x))_y. On the training sets, since the NNs are optimized to perform well on them, only a tiny fraction of training examples are classified wrongly. To concentrate the margin distribution more around zero is to make almost all of the correct predictions more diffident. Thus, a higher expected training loss ensues.
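A small numerical illustration of this point (our own, not from the paper) uses the two-class case, where the softmax probability of the target depends on the margin m alone, so the cross-entropy is log(1 + exp(-m)):

```python
import numpy as np

def xent_from_margin(m):
    """Two-class cross-entropy as a function of the margin m:
    (p(x))_y = 1 / (1 + exp(-m)), so loss = log(1 + exp(-m))."""
    return np.log1p(np.exp(-m))

rng = np.random.default_rng(0)
wide  = rng.uniform(1.0, 5.0, size=10000)   # confident correct predictions
tight = rng.uniform(0.0, 1.0, size=10000)   # correct, but margins near zero

print(f"all correct, wide margins : mean loss = {xent_from_margin(wide).mean():.3f}")
print(f"all correct, tight margins: mean loss = {xent_from_margin(tight).mean():.3f}")
```

Both populations are classified with 100% accuracy, yet the one with margins concentrated near zero incurs a markedly higher expected loss.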
On the test sets, the estimated probabilities of the target class concentrate more around middle values, resulting from the lower confidence/margins of the predictions made by the NNs, as shown in fig. 5a (though the majority of values are still at the two ends). Note that wrong predictions away from decision boundaries (with large negative margins) map to large loss values in the surrogate loss function. Thus, though NNs with larger AR strength have lower accuracy, they give more predictions whose estimated probabilities are in the middle (compared with NNs with smaller AR strength). These predictions, even if relatively more of them are wrong, map to smaller loss values, as shown in fig. 5b, where we plot the histogram of loss values of test samples. In the end, this results in expected test losses that are lower, or that increase at a lower rate than the training losses on CIFAR10/100 and Tiny-ImageNet, as shown in fig. 3b. The reduced GE gap results from the increased training losses and the decreased, or less increased, test losses." }, { "heading": "5.2.2 AR MAKES NNS SMOOTH PREDICTIONS W.R.T. INPUT PERTURBATIONS IN ALL DIRECTIONS", "text": "The observation in section 5.2.1 shows that AR makes NNs less confident by reducing the variance of the predictions made and concentrating margins more around zero. In this section, we study the underlying factors of AR that make NNs become less confident.

To begin with, we show that the singular values of the weight matrix of each layer determine the perturbations in the margins of samples induced by perturbations in the instance space. Such a connection between singular values and the perturbation of the outputs of a single layer, i.e., ReLU(Wδx), has been discussed in section 1.1. In the following, with lemma 4.1, we describe how the relatively more complex connection between margins and the singular values of each weight matrix of the layers of NNs holds. Observe that margins are obtained by applying a piece-wise linear mapping (c.f. the margin operator in definition 3) to the activation of the last layer of a NN. It implies that perturbations in the activation of the last layer induce changes in margins in a piece-wise linear way. Meanwhile, the perturbation in the activation of the last layer (induced by perturbations in the instance space) is determined by the singular values of the weight matrix of each layer of the NN. More specifically, this is explained as follows. Lemma 4.1 shows that the perturbation δI induced by δx is given by ∑_{j=1}^n ∫_{s_j}^{e_j} ||∏_{i=1}^l W_i^{q_j} δx|| dt. Note that for each i, W_i^{q_j} is a matrix. By the Cauchy interlacing law by row deletion (Chafai), the singular values of W_i, the weight matrix of layer i, determine the singular values of W_i^{q_j}. Thus, suppose l = 1; we have the change (measured in norm) induced by a perturbation as ∑_{j=1}^n ∫_{s_j}^{e_j} ||W_1^{q_j} δx|| dt. The singular values of W_1 then determine the variance (of norms) of the activation changes induced by perturbations δx, similarly as explained in section 1.1, except that the norm change now is obtained by a summation of n terms ||W_1^{q_j} δx|| (each of which is of the exact form discussed in section 1.1) weighted by (e_j − s_j). Similarly, for the cases l = 2 . . . L − 1, the singular values of W_l determine the variance of the changes induced by the perturbation of the previous layer (itself induced recursively by perturbations from further previous layers). A minimal sketch of the layer-wise singular-value statistics we examine is given below.
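The following PyTorch/NumPy sketch shows one way to compute the layer-wise singular-value statistics of the kind plotted in figs. 5c and 5d. The paper does not spell out how convolution kernels are matricized, so reshaping them to (out_channels, -1) is our own (common) assumption, and an untrained torchvision ResNet-18 stands in for the trained ResNet-56:

```python
import numpy as np
import torch
from torchvision.models import resnet18

# Untrained network purely to demonstrate the computation; the paper uses
# ResNet-56/110 and WRN-50-2 trained with different AR strengths.
model = resnet18(weights=None)

for name, p in model.named_parameters():
    if p.dim() < 2 or "weight" not in name:
        continue  # skip biases and batch-norm parameters
    # Matricize conv kernels as (out_channels, in_channels * k * k).
    W = p.detach().reshape(p.shape[0], -1).numpy()
    sv = np.linalg.svd(W, compute_uv=False)
    print(f"{name}: STD of singular values = {sv.std():.4f}")
```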
Consequently, we choose to study these singular values.

We plot the standard deviation of the singular values of each layer of the ResNet-56 trained on CIFAR10/100 earlier, shown in figs. 5c and 5d. Overall, we can see that the standard deviation of the singular values associated with a layer of the NN trained with AR strength 4 is mostly larger than that of the NN with AR strength 16. The STD reduction on CIFAR100 is relatively smaller than on CIFAR10, since, as observed in fig. 4b, the AR-induced concentration effect on margin distributions is also relatively less obvious than that in fig. 4a. More quantitative analysis is given in appendix B.2. This leads us to our key results described in section 1.1." }, { "heading": "APPENDICES", "text": "" }, { "heading": "A FURTHER RELATED WORKS", "text": "Hard and soft adversarial robust regularization. We study the behaviors of NNs that are trained in the way that adversarial examples are required to be classified correctly. We note that the required adversarial robustness can also be built into NNs in a soft way, by adding a penalty term to the risk function. Relevant works include Lyu et al. (2015) and Miyato et al. (2018). This line of works is not our subject of investigation. They focus on increasing test performance instead of defense performance. The focus of our work is to study the behaviors that lead to standard performance degradation when a network is trained to have a reasonable defense ability against adversarial examples. For example, a 50% accuracy on adversarial examples generated by the PGD method (Madry et al., 2018) in fig. 11 is a defense ability that can serve as a baseline for reasonable defense performance. It is natural that in the setting where the requirement to defend against adversarial examples is dropped, the regularization can be weakened (added as a penalty term) to aim only at improving the test performance of the network. In this case, no performance degradation occurs, but the defense performance is also poor." }, { "heading": "B FURTHER EMPIRICAL STUDIES ON ADVERSARIAL ROBUSTNESS", "text": "" }, { "heading": "B.1 TECHNIQUE TO BUILD ADVERSARIAL ROBUSTNESS", "text": "To begin with, we describe the technique that we use to build AR into NNs. As mentioned in the caption of fig. 1, we choose arguably the most well received technique, i.e., the adversarial training method (Madry et al., 2018). Specifically, we use an l∞-PGD (Madry et al., 2018) untargeted attack adversary, which creates an adversarial example by performing projected gradient descent starting from a random perturbation around a natural example. NNs are then trained with these adversarial examples. NNs with different AR strength are obtained by training them with increasingly stronger adversarial examples. The adversarial strength of adversarial examples is measured by the l∞ norm of the perturbation applied to the examples. The l∞ norm is rescaled to the range 0-255 to present perturbations applied to different datasets in a comparable way; that means in figs. 1, 3, 4, 5, 6 and 11, AR is measured on this scale. We use 10 steps of size 2/255 and a maximum ε of [4/255, 8/255, 16/255] respectively for the different defensive strengths in the experiments. For example, a NN with AR strength 8 is a NN trained with adversarial examples generated by perturbations whose l∞ norm is at most 8. Lastly, we note that although adversarial training cannot precisely guarantee an adversarial robustness radius of ε, a larger l∞ norm used in training makes NNs adversarially robust in a larger ball around examples. A minimal sketch of this l∞-PGD adversary is given below.
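The sketch assumes inputs normalized to [0, 1]; the function and variable names are ours:

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, step=2/255, n_steps=10):
    """l_inf-PGD untargeted attack (Madry et al., 2018): random start,
    then projected gradient ascent on the cross-entropy loss."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(n_steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta = (delta + step * grad.sign()).clamp_(-eps, eps)
            delta = (x + delta).clamp(0, 1) - x   # keep inputs in valid range
    return (x + delta).detach()
```

Adversarial training then replaces each clean batch with pgd_linf(model, x, y) when computing the training loss; eps in {4, 8, 16}/255 corresponds to the three AR strengths studied here.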
Thus, though the precise adversarial robustness radius is not known, we know that we are making NNs adversarially robust w.r.t. a larger ε. Consequently, it enables us to study the influence of ε-AR on NNs by studying NNs trained with an increasing l∞ norm." }, { "heading": "B.2 QUANTITATIVE ANALYSIS OF VARIANCE REDUCTION IN SINGULAR VALUES", "text": "Here, we provide more quantitative analysis of fig. 5c and fig. 5d, as noted previously in section 5.2.2.

Quantitatively, we can look at the accumulated standard deviation (STD) difference over all layers. We separate the layers into two groups: the group in which the STD (denoted σ_i^4) of the singular values of layer i of the NN trained with AR strength 4 is larger than that (denoted σ_i^16) of AR strength 16, and the group in which it is smaller. On CIFAR10, for the first group, the summation of the differences/increments of the STDs of the two networks (∑_i σ_i^4 − σ_i^16) is 4.7465, and the average is 0.1158. For the second group, the summation (∑_i σ_i^16 − σ_i^4) is 0.4372, and the average is 0.0312. On CIFAR100, the summation of the first group is 3.7511, and the average is 0.09618; the summation of the second group is 0.4372, and the average is 0.1103. The quantitative comparison shows that the accumulated STD decrease in the layers whose singular value STDs decrease (comparing the STD of the NN with AR strength 16 with that of the NN with AR strength 4) is an order of magnitude larger than the accumulated STD increase in the layers whose singular value STDs increase. The magnitude difference is significant since the STDs of the singular values of most layers are around 1." }, { "heading": "B.3 DISCREPANCY BETWEEN TRENDS OF LOSS AND ERROR RATE GAPS IN LARGE CAPACITY NNS", "text": "In section 5.1, we noted an inconsistent behavior of CIFAR10 compared with that of CIFAR100 and Tiny-ImageNet: the error gap reduces for CIFAR100 and Tiny-ImageNet, but increases for CIFAR10. This might suggest that AR does not effectively regularize NNs in the case of CIFAR10. However, we show in this section that the abnormal behavior of CIFAR10 derives from the same margin concentration phenomenon observed in section 5.2.1, due to capacity differences, and that compared with the error gaps, the GE/loss gaps are more faithful representatives of the generalization ability of the NNs. Thus, the seemingly abnormal phenomenon corroborates, not contradicts, the key results presented in section 1.

Using CIFAR10 and CIFAR100 as examples, and the evidence in the previous sections, we explain how the discrepancy emerges from AR’s influence on the margin distributions of the same network trained on tasks of different difficulties. Further evidence that the discrepancy arises from capacity differences is shown in appendix B.4, where we run experiments to investigate the GE gap of NNs with varied capacities on the same task/dataset." }, { "heading": "1. On CIFAR10, the margin distribution of the training set not only concentrates more around zero, but also skews towards zero.", "text": "As shown in the margin distribution on the training set of CIFAR10 in fig. 4c, we find that the large error gap is caused by the high training accuracy that is achieved with a high concentration of training samples just slightly beyond the decision boundary. This phenomenon does not happen on CIFAR100. Compared with the margin distribution on the test set in fig. 4a, the margin distribution on the training set in fig. 4c is highly skewed, i.e., asymmetrically distributed w.r.t. the mean.
The margin distribution of the CIFAR100 training set in fig. 4d, by contrast, is clearly less skewed and looks much more like a normal distribution, as does the margin distribution on the test set. 2. The high skewness results from the fact that the NN trained on CIFAR10 has a large enough capacity to overfit the training set. As is known, CIFAR100 is a more difficult task than CIFAR10, with more classes and fewer training examples per class. Thus, relatively, even though the same ResNet-56 network is used, the capacity of the network trained on CIFAR10 is larger than that of the one trained on CIFAR100. Recall that NNs have a remarkable ability to overfit training samples (Zhang et al., 2016). And note that though AR imposes a requirement in a ball around each example, namely that the examples in the ball be of the same class, the ball is supposed to include only imperceptible perturbations of the example, so few other training samples are likely to lie in the same ball. Thus, the ability to overfit the training set is not regularized by AR: if a NN can overfit all training samples, it can still overfit some more examples that are almost imperceptibly different. For CIFAR10, since the NN has enough capacity, it simply overfits the training set. 3. However, as shown by the observed overfitting phenomenon in fig. 4c, the high training accuracy is made up of correct predictions with relatively lower confidence (compared with NNs with lower AR), which is bad and not characterized by the error rate; and the low test accuracy is made up of wrong predictions with relatively lower confidence as well (as explained in section 5.2.1), which is good, and likewise not characterized by the error rate. Thus, the error gap in this case does not characterize the generalization ability (measured in terms of prediction confidence) of NNs well, while the GE gap more faithfully characterizes the generalization ability, and shows that AR effectively regularizes NNs. In the end, AR still leads to biased, poorly performing solutions — since the overfitting on the training set does not prevent the test margin distribution from concentrating more around zero, which leads to the higher test errors on CIFAR10 shown in fig. 3b. It further suggests that the damage AR does to the hypothesis space is not recovered by increasing capacity; however, the ability of NNs to fit arbitrary labels is not hampered by AR." }, { "heading": "B.4 FURTHER EVIDENCE OF REGULARIZATION EFFECTS ON NNS WITH VARIED CAPACITY", "text": "In previous sections, we observed that AR consistently and effectively regularizes NNs; meanwhile, we also observed that in the case where a NN has a large capacity, it can spuriously overfit training samples and exhibit an increased error gap. In this section, we present additional results obtained by applying AR to networks of controlled capacities. This is to ensure that our observations and analysis in the previous sections hold not just at some singular points, but also in a continuous area of the hypothesis space.

To control the capacities of NNs quantitatively, we choose a measure based on the spectral norm (Bartlett et al., 2017; Neyshabur et al., 2018a). In spectral-norm-based capacity bounds (Bartlett et al., 2017; Neyshabur et al., 2018a), the NN capacity is normally proportional to a quantity called spectral complexity (SC), which is defined as follows. Definition 7 (Spectral Complexity). The spectral complexity SC(T) of a NN T is the product of the spectral norms of the weight matrices of its layers:

SC(T) = ∏_{i=1}^L ||W_i||_2,

where {W_i}_{i=1...L} denotes the weight matrices of the layers of the NN. A minimal sketch of this computation is given below.
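The sketch computes SC in log-space to avoid overflow; matricizing conv kernels as (out_channels, -1) is an approximation we adopt, since the exact singular values of convolutional layers require the treatment of Sedghi et al. (2018):

```python
import torch
from torchvision.models import resnet18

def log_spectral_complexity(model):
    """log SC(T) = sum of log spectral norms of layer weight matrices
    (definition 7), with conv kernels matricized as (out_channels, -1)."""
    log_sc = 0.0
    for p in model.parameters():
        if p.dim() < 2:
            continue  # skip biases and batch-norm parameters
        W = p.detach().reshape(p.shape[0], -1)
        log_sc += torch.linalg.matrix_norm(W, ord=2).log().item()
    return log_sc

print("log SC(T) =", log_spectral_complexity(resnet18(weights=None)))
```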
To control SC, we apply spectral normalization (SN) (Sedghi et al., 2018) to the NNs. The technique renormalizes the spectral norms of the weight matrices of a NN to a designated value after a certain number of iterations. We carry out the normalization at the end of each epoch. We train ResNet-56 with increasingly strong AR and with increasingly strong spectral normalization. The results are shown in fig. 6.

As can be seen, as the capacity of the NNs decreases (from upper left to bottom right in each subfigure), the error gap between training and test gradually changes from an increasing trend to a decreasing trend, while the loss gap keeps a consistently decreasing trend. It suggests that the overfitting phenomenon is gradually prevented by another regularization technique, i.e., the spectral normalization. As a result, the regularization effect of AR starts to emerge even in the error gap, where previously it manifested only in the loss gap. The other curves corroborate our previous observations and analysis as well." }, { "heading": "B.5 FURTHER EVIDENCE ON THE SMOOTHING EFFECT OF ADVERSARIAL ROBUSTNESS", "text": "We quantitatively measure the smoothing effect around examples here by measuring the average maximal loss change/variation induced by perturbations (of a fixed infinity norm) applied to examples. We find that the loss variation decreases as networks become increasingly adversarially robust. Note that the loss of an example is a proxy for the confidence on the example — it is the negative logarithm of the estimated probability (a characterization of confidence) given by the NN classifier.

For a given maximal perturbation range characterized by the infinity norm, we generate adversarial examples within that norm for all test samples. For each example, the maximal loss variation/change of the adversarial example w.r.t. the natural example is computed for networks with different adversarial strengths. To obtain statistical behaviors, we compute the average and standard deviation of such maxima over all test samples. The results are shown in fig. 7. The exact data can be found in table 1.

We can see that the average loss variation decreases with adversarial robustness. The standard deviation decreases with network adversarial robustness as well. The phenomenon that the standard deviation is comparable in size to the mean might need some explanation. This is because different examples have different losses, and thus the loss varies in relatively different regimes — more wrongly classified examples vary with a larger magnitude, and vice versa for more correctly classified examples. This phenomenon leads to the large standard deviation of the loss variation." }, { "heading": "B.6 FURTHER EMPIRICAL STUDY ON USING FGSM IN ADVERSARIAL TRAINING TO BUILD ADVERSARIAL ROBUSTNESS", "text": "We also use FGSM (Goodfellow et al., 2015) in adversarial training to build adversarial robustness into NNs. The results are consistent with those obtained using PGD. The experiments are carried out on CIFAR10/100. We present the key plots that support the results obtained in the main content here. All settings are the same as those described in appendix B.1 for PGD, except that we replace PGD with FGSM; a minimal sketch of the FGSM adversary is given below.
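The sketch again assumes inputs in [0, 1]; names are ours:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    """One-step FGSM (Goodfellow et al., 2015): x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    x_adv = (x + eps * grad.sign()).clamp(0, 1)   # keep inputs in valid range
    return x_adv.detach()
```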
Adversarial robustness reduces the generalization gap and standard test performance. In section 5.1, we find that NNs with stronger adversarial robustness tend to have a smaller loss/generalization gap between the training and test sets. A consistent phenomenon has been observed in networks adversarially trained with FGSM on CIFAR10/100, as shown in fig. 8a. Consistent standard test performance degradation has been observed in networks adversarially trained with FGSM on CIFAR10/100 as well, as shown in fig. 8b. The exact data can be found in table 2.

Adversarial robustness concentrates examples around decision boundaries. In section 5.2.1, we find that the distributions of margins become more concentrated around zero as AR grows.

The phenomenon has been observed consistently in networks adversarially trained with FGSM on CIFAR10/100, as shown in fig. 9. The phenomena in figs. 5a and 5b are also reproduced consistently in figs. 10a and 10b. Please refer to section 5.2.1 for the analysis of the results; here we mainly present counterparts of the results analyzed there.

Adversarial robustness reduces the standard deviation of the singular values of the weight matrices in the network. In section 5.2.2, we find that for NNs with stronger adversarial robustness, the standard deviation of the singular values of the weight matrices is smaller in most layers. The phenomenon has been consistently observed in NNs trained with FGSM on CIFAR10/100, as shown in figs. 10c and 10d. Please refer to section 1.1 and section 5.2.2 for the analysis of the results; here we mainly present counterparts of the results analyzed there.

In conclusion, all key empirical results have been consistently observed in NNs trained with FGSM." }, { "heading": "C PROOF OF THEOREM 4.1", "text": "" }, { "heading": "C.1 ALGORITHMIC ROBUSTNESS FRAMEWORK", "text": "In order to characterize the bound on the GE, we build on the algorithmic robustness framework (Xu and Mannor, 2012), which we introduce below. Definition 8 ((K, ε(·))-robust). An algorithm is (K, ε(·))-robust, for K ∈ N and ε(·) : Z^m ↦ R, if Z can be partitioned into K disjoint sets, denoted by C = {C_k}_{k=1}^K, such that the following holds for all s_i = (x_i, y_i) ∈ S_m, z = (x, y) ∈ Z, C_k ∈ C:

∀s_i = (x_i, y_i) ∈ C_k, ∀z = (x, y) ∈ C_k ⟹ |l(f(x_i), y_i) − l(f(x), y)| ≤ ε(S_m).

The gist of the definition is to constrain the variation of loss values on test examples w.r.t. those of training ones through a local property of the algorithmically learned function f. Intuitively, if s ∈ S_m and z ∈ Z are “close” (e.g., in the same partition C_k), their losses should also be close, due to the intrinsic constraint imposed by f.

For any algorithm that is robust, Xu and Mannor (2012) prove: Theorem C.1 (Xu and Mannor, 2012). If a learning algorithm is (K, ε(·))-robust and the loss l is bounded, i.e., l(f(x), y) ≤ M ∀z ∈ Z, then for any η > 0, with probability at least 1 − η, we have

GE(f_{S_m}) ≤ ε(S_m) + M √( (2K log(2) + 2 log(1/η)) / m ). (5)

To control the first term, an approach is to constrain the variation of the loss function. The covering number (Shalev-Shwartz and Ben-David, 2014, Chapter 27) provides a way to characterize the variation of the loss function, and conceptually realizes the actual number K of disjoint partitions.

Any regular k-dimensional manifold embedded in a space equipped with a metric ρ, e.g., the image data embedded in L²(R²), the space of square-integrable functions defined on R², has a covering number N(X; ρ, ε) of (C_X/ε)^k (Verma, 2013), where C_X is a constant that captures its “intrinsic” properties, and ε is the radius of the covering balls. When we calculate the GE bound of NNs, we assume the data space is a k-dimensional regular manifold that accepts such a covering.
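As a concrete sense of scale for the second term of eq. (5), the short computation below plugs in illustrative numbers; K is unknown in practice, since it depends on the covering constant C_X, so the values here are ours and chosen only for illustration:

```python
import math

def robustness_term(K, m, M=1.0, eta=0.01):
    """Second term of eq. (5): M * sqrt((2K log 2 + 2 log(1/eta)) / m)."""
    return M * math.sqrt((2 * K * math.log(2) + 2 * math.log(1 / eta)) / m)

# Hypothetical K = 1000 covering balls and m = 50000 training samples
# (the CIFAR10 training-set size) give a term of roughly 0.17.
print(robustness_term(K=1000, m=50000))
```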
When we calculate the GE bound of NNs, we would assume the data space is a k-dimensional regular manifold that accepts a covering.\nAdversarial robustness makes NNs a (K, (·))-robust algorithm, and is able to control the variation of loss values on test examples. Building on covering number and theorem C.1, we are able to prove theorem 4.1." }, { "heading": "C.2 NEURAL NETWORKS", "text": "A NN is a map that takes an input x from the space X , and builds its output by recursively applying a linear map Wi followed by a pointwise non-linearity g:\nxi = g(Wixi−1),\nwhere i indexes the times of recursion, which is denoted as a layer in the community, i = 1, . . . , L, x0 = x, and g denotes the activation function. which is restricted to Rectifier Linear Unit (ReLU) (Glorot et al., 2011) or max pooling operator (Bécigneul, 2017) in this paper. To compactly summarize the operation of T , we denote\nTx = g(WLg(WL−1 . . . g(W1x))). (6)" }, { "heading": "C.3 PROOF", "text": "Proof of lemma 4.1. By Theorem 3 in Sokolic et al. (2017), we have ||Il(x)− Il(x′)|| = ∣∣∣∣∣∣∣∣∫ 1\n0\nJ(x− t(x′ − x))dt(x− x′) ∣∣∣∣∣∣∣∣ (7)\nwhere J(x) denotes the Jacobian of Il(x) at x.\nBy lemma 3.2 in Jia et al. (2019), when we only have max pooling layers and ReLU as nonlinear layer in NNs, J(x) is a linear operator at a local region around x. For terminology concerning regions, we follow the definitions in Jia et al. (2019). More specifically, we have\nJ(x) = l∏ i=1 W xi\nwhere W xi is the linear mapping (matrix) induced by J(x) at x. It is a matrix obtained by selectively setting certain rows of Wi to zero. For the more concrete form of W xi , refer to lemma 3.2 in Jia et al. (2019). In Jia et al. (2019), it is noted as W qi , where q is a region where x is in.\nSuppose that from x to x′, the line segment x− x′ passes through regions {qj}j=1,...,n. The line segment is illustrated in fig. 2b as the boldest black line segment at the upper half of the figure. In the illustration, x− x′ passes through three regions, colored coded as gray, dark yellow, light blue respectively. The line segment is divided into three sub-segments. Suppose l(t) = x+ t(x′ − x). Then the three sub-segments can be represented by l(t) as l(s1) to l(e1), l(s2) to l(e2), and l(s3) to l(e3) respectively, as noted on the line segment in the illustration. Originally, the range of the integration in eq. (7) is from 0 to 1, representing the integration on the line segment l(0) to l(1) in the instance space. Now, since for each of these regions trespassed by the line segment, the Jacobian J(x) is a linear operator, denoted as W qji , the integration in eq. (7) from 0 to 1 can be decomposed as a summation of integration on segments l(s1) to l(e1) etc. In each of these integration, the Jacobian J(x) is the multiplication of linear matrices W qji , i.e., ∏l i=1 W qj i . Thus, eq. (7) can be written as\nn∑ j=1 ∫ ej sj ∣∣∣∣∣ ∣∣∣∣∣ l∏ i=1 W qj i dt(x− x ′) ∣∣∣∣∣ ∣∣∣∣∣\nwhere sj , ej denotes the start and end of the segment [sj , ej ] ⊂ [0, 1] of the segment [0, 1] that passes through the region qj .\nIn the cases that a linear operator is applied on the feature map Il(x) without any activation function, we can also obtain a similar conclusion. Actually, such cases are just degenerated cases of feature maps that have activation functions.\nCorollary C.1. Given two elements x,x′, and Il(x) = Wlg(Wl−1 . . . 
g(W1x)), we have\n||Il(x)− Il(x′)|| = n∑ j=1 ∫ ej sj ∣∣∣∣∣ ∣∣∣∣∣Wl l−1∏ i=1 W qj i dt(x− x ′) ∣∣∣∣∣ ∣∣∣∣∣\nwhere symbols are defined similar as in lemma 4.1.\nNow, we are ready to prove theorem C.1.\nProof of theorem C.1. Similar with the proof of theorem C.1, we partition space Z into the -cover of Z , which by assumption is a k-dimension manifold. Its covering number is upper bounded by CkX / k, denoting K = CkX / k, and Ĉi the ith covering ball. For how the covering ball is obtained from the -cover, refer to theorem 6 in Xu and Mannor (2012). We study the constraint/regularization that adversarial robustness imposes on the variation of the loss function. Since we only have - adversarial robustness, the radius of the covering balls is at most — this is why we use the same symbol. Beyond , adversarial robustness does not give information on the possible variation anymore. Let T ′ denotes the NN without the last layer.\nFirst, we analyze the risk change in a covering ball Ci. The analysis is divided into two cases: 1 all training samples in Ci are classified correctly; 2) all training samples in Ci are classified wrong. Note that no other cases exist, for that the radius of Ci is restricted to be , and we work on -adversarial robust classifiers. It guarantees that all samples in a ball are classified as the same class. Thus, either all training samples are all classified correctly, or wrongly.\nWe first study case 1). Given any example z = (x, y) ∈ Ci, let ŷ = argmaxi 6=ywTi T ′x. Its ramp loss is\nlγ(x, y) = max{0, 1− 1\nγ (wy −wŷ)TT ′x}.\nNote that within Ci, (wy −wŷ)TT ′x ≥ 0, thus lγ(x, y) is mostly 1, and we would not reach the region where r > 0 in definition 4. Let u(x) := (wy −wŷ)TT ′x, and uimin = min∀x∈Ci u(x). We have\nlγ(x, y) ≤ max{0, 1− uimin γ } ≤ max{0, 1− umin γ },\nwhere umin denotes the smallest margin among all partitions.\nThe inequality above shows adversarial robustness requires that T ′x should vary slowly enough, so that in the worst case, the loss variation within the adversarial radius should satisfy the above inequality. The observation leads to the constraint on the loss difference (·) defined earlier in definition 8 in the following.\nGiven any training example z := (x, y) ∈ Ci, and any element z′ := (x′, y′) ∈ Ci, where Ci is the covering ball that covers x, we have\n|lγ(x, y)− lγ(x′, y′)|\n=|max{0, 1− u(x) γ } −max{0, 1− u(x\n′)\nγ }|\n≤max{0, 1− umin γ }. (8)\nNow we relate the margin to the margin in the instance space.\nGiven z := (x, y) ∈ Z , and z′, of which x′ is the closest points to x (measured in Euclidean norm) on the decision boundary, we can derive the inequality below.\nu(x) = u(x)− u(x′) (9)\n= ∫ 1 0 J(x− t(x− x′))dt(x− x′) (10)\n= ∫ 1 0 (wy −wŷ)T L−1∏ i=1 W x−t(x−x′) i dt(x− x ′)\n= ∫ 1 0 ∣∣∣∣∣(wy −wŷ)T L−1∏ i=1 W x−t(x−x′) i dt(x− x ′) ∣∣∣∣∣ (11) =\nn∑ j=1 ∫ ej sj ∣∣∣∣∣(wy −wŷ)T L−1∏ i=1 W qj i dt(x− x ′) ∣∣∣∣∣ (12) ≥ min y,ŷ∈Y,y 6=ŷ ||wy −wŷ||2 L−1∏ i=1 σimin||x− x′||2 ∫ 1 0 dt (13)\n≥ min y,ŷ∈Y,y 6=ŷ ||wy −wŷ||2 L−1∏ i=1 σimin||x− x′||2\nwhere J(x) denotes the Jacobian of Il(x) at x. Equation (10) can be reached by Theorem 3 in Sokolic et al. (2017). Equation (11) can be reached because (wy −wŷ)W x−t(x−x ′) i (x− x′) is the actually classification score u(x), u(x′) difference between x,x′, and by assumptions 4.1, they are positive throughout. Equation (12) is reached due to corollary C.1 — in this case, the matrix Wl in corollary C.1 is of rank one.\nTo arrive from eq. (12) to eq. 
Being the closest means x − x′ ⊥ N((w_y − w_ŷ)^T T′). If the difference x′ − x satisfies x − x′ not orthogonal to N(T′), we can always remove the part lying in N(T′), which would identify a point that is closer to x but still on the decision boundary — a contradiction. Then, if x − x′ is orthogonal to the null space, we can bound the norm using the smallest singular values. We develop the informal reasoning above formally in the following.\nSimilarly to Lemma 3.4 in Jia et al. (2019), by the Cauchy interlacing law by row deletion, assuming x ⊥ N(∏_{i=1}^{L−1} W_i^{q_j}) (N denotes the null space; the statement means x is orthogonal to the null space of J(x)), we have\n|| ∏_{i=1}^{L−1} W_i^{q_j} x ||_2 ≥ ∏_{i=1}^{L−1} σ^i_min ||x||_2 (14)\nwhere σ^i_min is the smallest singular value of W_i. The conclusion holds as well for the multiplication of matrices ∏_{i=1}^{L−1} W_i^{q_j}, since a multiplication of matrices is also a matrix.\nNotice that in each integral in eq. (12), we are integrating over a constant. Thus, it equates to\n∑_{j=1}^n (e_j − s_j) | (w_y − w_ŷ)^T ∏_{i=1}^{L−1} W_i^{q_j} (x − x′) |.\nNow we show that in each operand, x − x′ ⊥ N((w_y − w_ŷ)^T ∏_{i=1}^{L−1} W_i^{q_j}). Denote by T_{q_j} the null space N((w_y − w_ŷ)^T ∏_{i=1}^{L−1} W_i^{q_j}). Suppose that the claim does not hold. Then we can decompose x − x′ into two components ∆_1, ∆_2, where ∆_1 ⊥ T_{q_j} and ∆_2 not orthogonal to T_{q_j}. We can find a new point x′′ = x + ∆_1 that is on the boundary. However, in this case\n||x − x′′||_2 = ||∆_1||_2 ≤ ||∆_1||_2 + ||∆_2||_2 = ||x − x′||_2.\nRecall that x′ is the closest point to x on the decision boundary. This leads to a contradiction. Repeating this argument for all j = 1, . . . , n, we have that x − x′ is orthogonal to all T_{q_j}. Thus, by the inequality eq. (14) above, we can arrive at eq. (13) — notice that w_y − w_ŷ is a matrix with one column, and thus also satisfies the above reasoning.\nThrough the above inequality, we can transfer the margin to the margin in the instance space. Let v(x) be the shortest distance in the || · ||_2 norm from an element x ∈ X to the decision boundary. For a covering ball C_i, let v^i_min be min_{x∈C_i} v(x). Let v_min be the smallest v^i_min among all covering balls C_i that contain at least one training example. We have that\nu_min ≥ min_{y,ŷ∈Y, y≠ŷ} ||w_y − w_ŷ||_2 ∏_{i=1}^{L−1} σ^i_min v_min\nConsequently, we can obtain an upper bound on eq. (8) parameterized by v_min, as follows\nmax{0, 1 − u_min/γ} ≤ max{0, 1 − min_{y,ŷ∈Y, y≠ŷ} ||w_y − w_ŷ||_2 ∏_{i=1}^{L−1} σ^i_min v_min / γ}.\nNotice that it is only because of ε-adversarial robustness with ε > 0 that we can guarantee v_min is non-zero; thus the bound is influenced by AR.\nThen, we study case 2), in which all training samples z ∈ C_i are classified wrongly. In this case, for all z ∈ C_i, the ŷ given by ŷ = argmax_{i≠y} w_i^T T′x in the margin operator is the same, since ŷ is the wrongly predicted class. The ramp loss is\nl_γ(x, y) = max{0, 1 − (1/γ)(w_y − w_ŷ)^T T′x}.\nNote that in case 1), it is y that stays fixed, while ŷ may differ from example to example; in case 2), it is ŷ that stays fixed, while y may differ.\nSimilarly, within C_i, as required by adversarial robustness, (w_y − w_ŷ)^T T′x ≤ 0, thus we always have 1 − (1/γ)(w_y − w_ŷ)^T T′x ≥ 1, implying\nl_γ(x, y) = 1.\nThus, ∀z = (x, y), z′ = (x′, y′) ∈ C_i,\n|l_γ(x, y) − l_γ(x′, y′)| = 0. (15)\nSince only these two cases are possible, by eq. (8) and eq. (15), we have ∀z, z′ ∈ C_i\n|l_γ(x, y) − l_γ(x′, y′)| ≤ max{0, 1 − u_min/γ}. (16)\nThe rest follows the standard proof in the algorithmic robustness framework.\nLet N_i be the set of indices of the examples that fall into C_i. Note that (|N_i|)_{i=1...K} is an i.i.d. multinomial random variable with parameters m and (µ(C_i))_{i=1...K}. Then\n|R(l ∘ T) − R_m(l ∘ T)|\n= | ∑_{i=1}^K E_{Z∼µ}[l(TX, Y)] µ(C_i) − (1/m) ∑_{i=1}^m l(Tx_i, y_i) |\n≤ | ∑_{i=1}^K E_{Z∼µ}[l(TX, Y)] |N_i|/m − (1/m) ∑_{i=1}^m l(Tx_i, y_i) | + | ∑_{i=1}^K E_{Z∼µ}[l(TX, Y)] µ(C_i) − ∑_{i=1}^K E_{Z∼µ}[l(TX, Y)] |N_i|/m |\n≤ (1/m) ∑_{i=1}^K ∑_{j∈N_i} max_{z∈C_i} |l(Tx, y) − l(Tx_j, y_j)| (17)\n+ max_{z∈Z} |l(Tx, y)| ∑_{i=1}^K | |N_i|/m − µ(C_i) |. (18)\nRemember that z = (x, y).\nBy eq. (16), eq. (17) is less than or equal to max{0, 2(1 − u_min/γ)}. By the Bretagnolle-Huber-Carol inequality, eq. (18) is less than or equal to √( (log(2) 2^{k+1} C_X^k) / (γ^k m) + (2 log(1/η)) / m ).\nThe proof is finished.\nD IMPLEMENTATION DETAILS\nWe summarize the details of the experiments in this section. The experiments are run with PyTorch (Pfeiffer, 2017)." }, { "heading": "D.1 DATASETS", "text": "CIFAR10/100. Each CIFAR dataset consists of 50,000 training images and 10,000 test images. CIFAR-10 and CIFAR-100 have 10 and 100 classes respectively. Our data augmentation follows the standard manner in Lee et al. (2015): during training, we zero-pad 4 pixels along each image side, and sample a 32 × 32 region cropped from the padded image or its horizontal flip; during testing, we use the original non-padded image.\nTiny-ImageNet. Tiny-ImageNet is a subset of the ImageNet dataset, which contains 200 classes rather than 1,000 classes. Each class has 500 training images and 50 validation images. Images in the Tiny-ImageNet dataset are of 64 × 64 pixels, as opposed to 256 × 256 in the full ImageNet set. The data augmentation is straightforward: a 56 × 56 input image is randomly cropped from a resized image using scale and aspect ratio augmentation as well as scale jittering. A single 56 × 56 cropped image is used for testing." }, { "heading": "D.2 EXPERIMENTS IN SECTION 5.1", "text": "CIFAR10/100 Models and Training. The models for CIFAR10/100 are the same as the ones in appendix B.4, except that we do not use spectral normalization anymore. CIFAR100 has 100 output neurons instead of 10.\nTiny-ImageNet Model. For the Tiny-ImageNet dataset, we use a 50-layer wide residual network with 4 groups of residual layers and [3, 4, 6, 3] bottleneck residual units for each group respectively. The 3 × 3 filters of the bottleneck residual units have [64 × k, 128 × k, 256 × k, 512 × k] feature maps with the widening factor k = 2, as mentioned in Zagoruyko and Komodakis (2016). We replace the first 7 × 7 convolution layer with 3 × 3 filters with stride 1 and padding 1. The max pooling layer after the first convolutional layer is also removed to fit the 56 × 56 input size. Batch normalization layers are retained for this dataset. The weights of the convolution layers for Tiny-ImageNet are initialized with Xavier uniform (Glorot and Bengio, 2010). Again, all dropout layers are omitted.\nTiny-ImageNet Training. The experiments on the Tiny-ImageNet dataset are based on a mini-batch size of 256 for 90 epochs. The initial learning rate is set to 0.1 and decayed by a factor of 10 at epochs 30 and 60 respectively. All experiments are trained on the training set with stochastic gradient descent with momentum 0.9.\nResults. The data for fig. 3 are given in tables 3, 4 and 5. More specifically, the data on CIFAR10 are given in table 3. The results on CIFAR100 are given in table 4. The results on Tiny-ImageNet are given in table 5.\nAdversarial Robustness Attack Method. 
The adversarial accuracy is evaluated against an untargeted l∞-PGD adversary (Madry et al., 2018), which is one of the strongest white-box attack methods. When considering adversarial attacks, one usually trains and evaluates against the same perturbation. For our tasks, we only use moderate adversaries generated by 10 iterations with a step size of 2 and a maximum perturbation of 8. When evaluating adversarial robustness, we only consider clean examples that are originally classified correctly, and calculate the fraction of the adversarial examples generated from them that are still correctly classified. The adversarial accuracy is given in tables 3, 4 and 5, in the row named “PGD”, and plotted in fig. 11." }, { "heading": "D.3 EXPERIMENTS IN APPENDIX B.4", "text": "Models. We use ResNet-type networks (Zhang et al., 2018). Given that we need to isolate factors that influence spectral complexity, we use ResNet without additional batch normalization (BN) layers. To train ResNet without BN, we rely on the fixup initialization proposed in Zhang et al. (2018). The scalar layers in Zhang et al. (2018) are also omitted, since they change the spectral norms of layers.\nDropout layers are omitted as well. Following Sedghi et al. (2018), we clip the spectral norm every epoch rather than every iteration.\nTraining. The experiments on the CIFAR10 dataset are based on a mini-batch size of 256 for 200 epochs. The learning rate starts at 0.05, and is divided by 10 at 100 and 150 epochs respectively. All experiments are trained on the training set with stochastic gradient descent with momentum 0.9.\nResults. The data for fig. 6 are given in table 6." } ]
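For reference, a minimal PyTorch sketch of the l∞-PGD evaluation protocol from appendix D.2 above (10 iterations, step size 2, maximum perturbation 8). The `model`, `x`, `y` handles are placeholders, the perturbation units are assumed to follow the paper's pixel scale, and clipping to the valid input range is omitted for brevity — an illustrative sketch, not the authors' exact code:

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8.0, alpha=2.0, iters=10):
    """Untargeted l-inf PGD (sketch). `eps` and `alpha` are in the same
    units as `x`; the text reports step size 2 and a maximum of 8."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project back into the l-inf ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
    return x_adv.detach()

def adversarial_accuracy(model, x, y):
    # Following D.2: keep only examples classified correctly on clean
    # inputs, then measure how many of their adversarial counterparts
    # are still classified correctly.
    with torch.no_grad():
        correct = model(x).argmax(dim=1) == y
    x_c, y_c = x[correct], y[correct]
    x_adv = pgd_linf(model, x_c, y_c)
    with torch.no_grad():
        still_correct = model(x_adv).argmax(dim=1) == y_c
    return still_correct.float().mean().item()
```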
2,019
null
SP:e16dc7a0c8ab7f163f6b8f06926aeec03161280d
[ "This paper describes a sensor placement strategy based on information gain on an unknown quantity of interest, which already exists in the active learning literature. As is well-known in the literature, this is equivalent to minimizing the expected remaining entropy. What the authors have done differently is to consider the use of neural nets (as opposed to the widely-used Gaussian process) as the learning models in this sensor placement problem, specifically to (a) approximate the expectation using a set of samples generated from a generator neural net and to (b) estimate the probability term in the entropy by a deterministic/inspector neural net. The authors have performed some simple synthetic experiments to elucidate the behavior and performance of their proposed strategy. ", "This paper addresses the issue of how to optimize sensor placement. The authors propose a framework for sensor placement called Two-step Uncertainty Network (TUN) based on the idea of information gain maximization. More concretely, the proposed method encodes an arbitrary number of measurements, models the conditional distribution of high dimensional data, and estimates the task-specific information gain at unobserved locations. Experimental results on the synthetic data clearly show that TUN outperforms current state-of-the-art methods, such as random sampling strategy and Gaussian Process-based strategy." ]
Optimal sensor placement achieves the prespecified objectives with the minimal cost of sensors. In this work, we propose a framework for sensor placement that maximizes the information gain, called the Two-step Uncertainty Network (TUN). TUN encodes an arbitrary number of measurements, models the conditional distribution of high-dimensional data, and estimates the task-specific information gain at un-observed locations. Experiments on synthetic data show that TUN consistently outperforms the random sampling strategy and the Gaussian Process-based strategy.
[]
[ { "authors": [ "SM Ali Eslami", "Danilo Jimenez Rezende", "Frederic Besse", "Fabio Viola", "Ari S Morcos", "Marta Garnelo", "Avraham Ruderman", "Andrei A Rusu", "Ivo Danihelka", "Karol Gregor" ], "title": "Neural scene representation and rendering", "venue": null, "year": 2018 }, { "authors": [ "Michael R Garey", "David S Johnson" ], "title": "Computers and intractability, volume 29", "venue": "wh freeman New York,", "year": 2002 }, { "authors": [ "Tilmann Gneiting", "Adrian E Raftery" ], "title": "Strictly proper scoring rules, prediction, and estimation", "venue": "Journal of the American Statistical Association,", "year": 2007 }, { "authors": [ "Karol Gregor", "Ivo Danihelka", "Alex Graves", "Danilo Jimenez Rezende", "Daan Wierstra" ], "title": "Draw: A recurrent neural network for image generation", "venue": "arXiv preprint arXiv:1502.04623,", "year": 2015 }, { "authors": [ "David Ha", "Jürgen Schmidhuber" ], "title": "Recurrent world models facilitate policy evolution", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Peter Henderson", "Riashat Islam", "Philip Bachman", "Joelle Pineau", "Doina Precup", "David Meger" ], "title": "Deep reinforcement learning that matters", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Chengyu Hu", "Ming Li", "Deze Zeng", "Song Guo" ], "title": "A survey on sensor placement for contamination detection in water distribution systems", "venue": "Wireless Networks,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Andreas Krause", "Ajit Singh", "Carlos Guestrin" ], "title": "Near-optimal sensor placements in gaussian processes: Theory, efficient algorithms and empirical studies", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Ahmad Masoudi", "Ratchaneekorn Thamvichai", "Mark A Neifeld" ], "title": "Shape threat detection via adaptive computed tomography", "venue": "pp. 98470H. 
International Society for Optics and Photonics,", "year": 2016 }, { "authors": [ "George L Nemhauser", "Laurence A Wolsey", "Marshall L Fisher" ], "title": "An analysis of approximations for maximizing submodular set functionsi", "venue": "Mathematical programming,", "year": 1978 }, { "authors": [ "Linh V Nguyen", "Sarath Kodagoda", "Ravindra Ranasinghe", "Gamini Dissanayake" ], "title": "Informationdriven adaptive sampling strategy for mobile robotic wireless sensor network", "venue": "IEEE Transactions on Control Systems Technology,", "year": 2015 }, { "authors": [ "Wieslaw Ostachowicz", "Rohan Soman", "Pawel Malinowski" ], "title": "Optimization of sensor placement for structural health monitoring: A review", "venue": "Structural Health Monitoring,", "year": 2019 }, { "authors": [ "S Ouadah", "M Jacobson", "Joseph Webster Stayman", "T Ehtiati", "Clifford Weiss", "JH Siewerdsen" ], "title": "Task-driven orbit design and implementation on a robotic c-arm system for cone-beam ct", "venue": "In Medical Imaging 2017: Physics of Medical Imaging,", "year": 2017 }, { "authors": [ "Carl Edward Rasmussen" ], "title": "Gaussian processes in machine learning", "venue": "In Summer School on Machine Learning,", "year": 2003 } ]
[ { "heading": "1 INTRODUCTION", "text": "Sensor placement is widely studied in the areas of environment monitoring (Hu et al., 2018; Nguyen et al., 2015), structural health monitoring(Ostachowicz et al., 2019) , security screening(Masoudi et al., 2016), and adaptive computed tomography(Ouadah et al., 2017). The optimal sensor placement maximizes the objectives with minimal cost of sensors. Given the model that maps each possible set of sensor locations to the objectives, the optimal sensor placement can be formulated as an optimization problem. However, the optimization is shown to be NP-hard(Garey & Johnson, 2002). Thus, approximate greedy algorithms of sequential sensor placement are proposed and then proved to be near optimal under the assumptions that the objectives are monotone and submodular(Nemhauser et al., 1978).\nThe diagram of a sequential sensor placement is shown in Figure 1a with the black arrows. The agent inquires at a feasible location to the physical model in each step and obtains the corresponding measurement. The obtained observations are used to make inference for specific tasks. For instance, in security screening tasks the observations are used to predict the distribution of the object’s label. To make an accurate inference, the agent often optimizes the information gain in each step with respect to the feasible location. The corresponding objective is mutual information which is approved to give near optimal approximations in sequential sensing(Krause et al., 2008; Nemhauser et al., 1978).\nTo optimize the objective, a model that estimates the potential information gain at each possible location is necessary. The most generic method to model the unknown spatial phenomenon is Gaussian Process(GP) which incorporates the knowledge of observations and predicts the uncertainty at the un-observed locations. However, the Gaussian model assumption in GP does not perform well on high dimensional data such as images as generative models. In addition, GP inherently adapts the assumptions that the uncertainty at the un-observed locations is independent of the obtained measurements, making the GP based sequential sensing an open-loop control(Rasmussen, 2003). An alternative approach to this problem arises recently is Reinforcement Learning(RL)(Ha & Schmidhuber, 2018). However, the performance in RL is found to be of large variance and difficult to reproduce(Henderson et al., 2018).\nIn this work, we propose a framework for sensor placement to maximize the information gain called Two-step Uncertainty network (TUN). The pipeline of TUN is shown in Figure 1 with red arrows. TUN consists of two steps, namely the imagination step and the inspection step. TUN firstly ”imagines” the possible measurements at the un-observed locations. Then it estimates the task-specific information gain with the imagined measurements along with the previous observations in the inspection step. Both steps are deployed with the pre-trained neural networks. Given the task-specific information gain at all the un-observed locations, the agent adapts a greedy algorithm to select the\noptimal next location to inquire. This procedure emulates how we human think in such tasks: given the observations, we firstly imagine the possible outcomes at un-observed locations, then inspect the information pertaining to the task based on those possible outcomes. We will derive the proposed framework in the next section." 
}, { "heading": "2 TWO-STEPS UNCERTAINTY NETWORKS", "text": "Consider a sequential sensing strategy, we denote locations as v and measurements as x. At the kth step, we have the previous k − 1 observations Obs = {x1, v1, ..., xk−1, vk−1}, the optimal location v∗k is the one maximizes the mutual information (MI) between the object’s label y and the possible measurement xk given the previous observations:\nv∗k = argmax vk MI(y;xk|Obs, vk) (1)\nThe mutual information can be expressed as,\nMI(y, xk|vk, Obs) = H (y|Obs)− EPr(xk|vk,Obs)H (y|xk, vk, Obs) (2)\nwhere E is the expectation operator. In Equation 2, the first term is the uncertainty of labels conditioned on the previous observations. The second term is the expected uncertainty conditioned on observations and possible measurements xk at vk. The subtraction gives the uncertainty reduction or the information gain at the location vk. It is worth noting that the first term is independent of vk, and can be treated as a constant in optimizing Equation 2 with respect to vk. The second term in Equation 2 can be approximated with Monte Carlo estimator as\nMI(y, xk|Obs) = − M∑ m=1 1 M H (y|xmk , vk, Obs) + Const. (3)\nwhere xmk ∼ Pr(xk|vk, Obs) (4)\nThe summation in Equation 3 (without the negative sign) is the approximate remaining entropy with the measurement at vk given. Maximizing the mutual information is equivalent to minimizing the remaining entropy. Equation 4 is the conditional distribution of the measurement at vk.\nFollowing Equation 3-4, it is natural to approach the remaining entropy in two steps : (1), Generating instances of xk that follows the distribution in Equation 4. (2), Evaluating the remaining entropy∑\n1 MH(y|x m k , vk, Obs) with the generated instances in step (1). The graphical models of the two steps are shown in Figure 1b. The design of our Two-step uncertainty network (TUN) follows the same rationale. In the imagination step, TUN generates multiple instances with a generative neural network. In the inspection step, a deterministic deep neural network is used to estimates the label distribution, and thus to evaluate the information gain(or negative remaining entropy)." }, { "heading": "2.1 IMAGINATION STEP", "text": "The first step of TUN is to generate instances xk ∼ Pr(xk|vk, Obs). Modeling the distribution of high dimensional data such as images is difficult and computationally expensive, even within a constrained family of distributions with simplified assumptions. Recently, remarkable progress has been made on modeling complex distribution of high dimensional data with generative neural network(GNN)(Gregor et al., 2015; Eslami et al., 2018). GNN maps the instances of re-parameterizable distribution (for example, multivariate normal distributions) to the instances of target complex distribution(Kingma & Welling, 2013). GNN is trained to maximize the log likelihood of the generated instances,\nlogPr(xk|vk, Obs) = log ∫ Pr(xk|z, vk)Pr(z|Obs)dz (5)\nwhere z is the latent variable of multivariate normal distributions. The log likelihood within the integral is intractable. Thus, the evidence lower bond(ELBO) as an approximation is evaluated instead,\nln ∫ Pr(xk|z, vk)Pr(z|Obs)dz ≥ Eqφ(z) lnPr(xk|z, vk)−DKL[qφ(z); pθ(z)] (6)\nThe posterior distribution qφ(z|xk, vk, Obs) is conditioned on the previous k − 1 observations Obs and the observed kth measurement in the training data. The prior distribution pθ(z|vk, Obs) is only conditioned on Obs. DKL is the KL divergence measuring the difference between the two distributions. 
After training, we have a generator network G(v_k, Obs) that generates instances of the measurement x_k. The generator network G is then employed in the imagination step of TUN." }, { "heading": "2.2 INSPECTION STEP", "text": "The second step in TUN is inspection, which estimates the task-specific information gain (or negative remaining entropy) at v_k. With the M instances of x_k generated in the imagination step, the remaining entropy in Equation 3 can be expressed as\n∑_{m=1}^M (1/M) H(y | x_k, v_k, Obs) = − ∑_{m=1}^M (1/M) ∑_y Pr(y | x^m_k, v_k, Obs) log Pr(y | x^m_k, v_k, Obs) (7)\nThe remaining entropy is a function of the conditional distribution of the label Pr(y | x^m_k, v_k, Obs). We approximate this conditional distribution with a deterministic neural network, the inspector network D(x_k, v_k, Obs). Then the remaining entropy is evaluated as in Equation 7. The most informative location to inquire is the one with the lowest remaining entropy. The procedure is summarized in algorithm 1. It can be shown that the true parameters of the model can be recovered by asymptotically maximizing a proper scoring rule (Gneiting & Raftery, 2007). A proper scoring rule S rewards the true distribution p more than any other distribution p̂ on the training data d, as\n∫_d p(d) S(p̂, d) ≤ ∫_d p(d) S(p, d) (8)\nIt is shown that optimizing the softmax cross entropy loss function in the case of multi-class classification is equivalent to optimizing a proper scoring rule (Lakshminarayanan et al., 2017). Thus, the inspector network D is trained with the softmax cross entropy loss. It is worth noting that the inspector network D takes a different number of observations at different steps. To accept an arbitrary number of observations, each observation is encoded separately with a shared-weight encoder, and the encoded vectors are fed into an aggregator and aggregated into a fixed-length vector before the succeeding networks. To enforce the commutative property in the sequential sensing problem, we adopt a "mean operator" as the aggregator in TUN, which takes the average of the input vectors. In the training stage, we take a random number of observations at randomly selected locations.\nAlgorithm 1 At the k-th step of TUN\nRequire: Obs, G, D\nEnsure: the optimal k-th location v*_k\nfor each un-observed location j do\n v_k = j\n for m = 1 to M do\n x^m_j ∼ G(Obs, v_k)\n P^m_y = D(Obs, v_k, x^m_j)\n H^m_y = − ∑_y P^m_y ln P^m_y\n end for\n H_j = (1/M) ∑_m H^m_y\nend for\nreturn v*_k = argmin_j H_j" }, { "heading": "3 EXPERIMENTS AND RESULTS", "text": "To evaluate the feasibility of TUN, we experimented with synthetic datasets. In the first experiment, we visualize the imagination step of TUN with a simple 1D spectrum dataset. The spectrum dataset is generated from the spectrums of five minerals: Augite, Allanite, Xenotime, Bikitaite, and Pseudobrookite. We re-sampled the spectrums from 0.2µm to 0.6µm with 100 points and normalized them. The normalized spectrums are then scaled by a random factor ranging from 0.025 to 2.5 and corrupted by zero-mean Gaussian noise with a standard deviation of 0.03. The random scaling creates the intra-class uncertainty in the dataset. We prepared 5000 instances in the training dataset and 500 instances in the test dataset. The generator network G was trained to generate instances of the spectrum given several observations. The number and locations of the observations are randomly selected in the training process. In the test stage, we show 10 generated instances as colorful solid lines in Figure 2 with different observations. 
The observations are indicated as filled circles, and the true spectrum is shown as the dashed line. Given a single observation, the generated instances vary in both mineral type (inter-class uncertainty) and scale (intra-class uncertainty). With three selected observations, in contrast, the imagined spectrums vary mainly in scale. The intra-class uncertainty in the latter case is task-independent information, indicating that the generator network G believes little label information remains given the observations. In that case, taking more measurements may not benefit the label prediction much, and the agent may stop inquiring to avoid redundant inquiries. We will show more quantitative results on the evaluation of the task-specific information gain in the inspection step of TUN on a high-dimensional dataset.\nTo quantitatively evaluate the information gain (or uncertainty reduction), we created a high-dimensional synthetic dataset from an x-ray baggage scanning system in security screening. The physical model is shown in Figure 3. The objects to be screened are 3D digits with an obstacle that partially blocks them. The existence of the obstacle results in significant variation of information among different locations. There are 8 locations (angles) from which to illuminate the x-ray onto the object (ranging from 0◦ to 157.5◦). An inquiry at each location returns a 2D projected image. The goal is to recognize the label of the object confidently with the least number of inquiries. Before any inquiries, a randomized rotation along the z axis is applied to the object. Additive Gaussian noise n ∼ N(0, 0.02) is applied to the observed images. TUN is trained on 5000 objects with random locations, effectively making the training dataset much larger. TUN is tested on 3000 sets of observations generated from 1000 held-out unseen objects. The generator network G first encodes the information of one or more observations into a fixed-length representation vector. Then it generates M instances of possible measurements at the un-observed locations based on the representation vector. In our model, M = 10. We show this process in Figure 4. In Figure 4 Left, the first measurement at location 4 is observed. We can see a corner feature in the measurement in the yellow box. Obviously the information from this observation is insufficient to reconstruct the 3D object, let alone the label of the object. We show three generated instances from the generator network at locations 6 and 7, which differ in label (digits 7 and 0) and font, yet are consistent with the observation. In Figure 4 Middle, measurements at locations 4 and 5 are both obtained. The generated samples at locations 6 and 7 converge more closely to the ground truth as more information is extracted from the observations. The generator network in this situation almost collapses to a deterministic neural network. This shows that our generator network generates samples following the distribution x_k ∼ Pr(x_k | v_k, Obs). In the second step of TUN, the generated samples are fed into the inspector network D, which estimates the probability of the labels Pr(y | x_k, v_k, Obs) and evaluates the task-specific information gain. We will perform both qualitative and quantitative analyses of the task-specific information gain with TUN in the following paragraphs. First, we visualize the intermediate feature space in the initial sensing step of an example shown in Figure 5 Left. The obtained observation at location 5 is non-informative. 
We generated 100 instances at each location, fed them into the inspector network, and visualized the feature space of the inspector network using t-SNE (Maaten & Hinton, 2008). We select the vector at the layer before the logits in the inspector network to visualize. The feature space is colored by location and divided into three regions. Region 1 covers the features of the instances generated from the exact observed location (location 5). Region 2 covers the features from the locations close to the observed one (locations 4 and 6). Region 3 contains the features from the locations far from the observed location. Clearly the features become more dispersed as the distance to the observed location increases. This indicates that rich information lies in the locations in region 3. Although this is a qualitative analysis of the feature space, it justifies the necessity of sampling multiple instances from the generator network. We will perform a quantitative analysis of this example with the inspector network in the next paragraph.\nThe information gain is evaluated from the averaged entropy in Equation 7 over M samples. The averaged entropy indicates the estimated remaining uncertainty of the label after we obtain the observation at location v_k. Our strategy is to pick the location with the least remaining uncertainty, which equivalently maximizes the mutual information in Equation 2 (note the negative sign before the entropy). We show this quantitatively in the example in Figure 6. This is the same example as described in Figure 5. The first observation is at location 5, shown in the yellow box in Figure 6a bottom, which is a non-informative observation. The averaged entropy for the next step is shown in Figure 6a top. The entropy plot estimates the potential remaining uncertainty at the un-observed locations; thus, the lower the remaining uncertainty, the better the location is. The entropy is averaged over 10 samples, and the standard deviation is shown as bars in Figure 6. The entropy plot at the initial step indicates that our model believes there is less remaining uncertainty after we obtain the measurement at location 1, 2, or 8 than at location 3, 4, 6, or 7. Thus the next location selected by TUN is location 2 (which has the least averaged remaining entropy). The successive estimation of the entropy is shown in Figure 6b, in which the agent inquires and obtains the observation at location 2 following TUN. With the observations at locations 2 and 5, the remaining entropy is very low with negligible variance. This entropy plot shows that TUN is quite confident about the label of the object, and believes there is not much information left at the un-observed locations. A threshold can be used as a stopping criterion in practice.\nWe compare TUN with the random sampling strategy (RS) and the Gaussian Process (GP) strategy. We adopt a squared exponential kernel in the GP and employ the 2D coordinates and the projection angle, [x, y, cosθ, sinθ], as features. The GP model is fitted with training data and predicts the measurements at un-observed locations. To evaluate the quality of the strategies, we trained classifiers with different numbers of observations. The training data for the classifiers are generated from 5000 held-out objects with random noise and rotation. All the observations for training the classifiers are taken with the random sampling strategy. The performance in both accuracy and entropy (confidence) with different sensing budgets is shown in Figure 7. 
The first location is randomly selected in all three strategies; thus, the performance is the same for all strategies. Starting from the second step, TUN consistently outperforms the other strategies with higher accuracy and less uncertainty." }, { "heading": "4 CONCLUSION", "text": "In this work, we present a task-driven sensor placement framework to maximize the information gain. The proposed framework (TUN) is able to perceive and understand the observations, approximate the conditional distribution of the object, and estimate the information gain pertaining to the task. In the security screening experiment we demonstrated, TUN consistently outperforms the random strategy and the GP strategy." } ]
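As a companion to Algorithm 1 in section 2.2 above, a minimal Python sketch of the greedy selection step; the call signatures of the trained generator G and inspector D are assumptions for illustration, not the authors' exact interfaces:

```python
import torch

def select_next_location(G, D, obs, candidate_locs, M=10):
    """Greedy step of Algorithm 1 (sketch): pick the un-observed
    location with the lowest averaged remaining label entropy.
    `G(obs, v)` is assumed to sample one imagined measurement and
    `D(obs, v, x)` to return a vector of class probabilities."""
    best_v, best_H = None, float("inf")
    for v in candidate_locs:
        H = 0.0
        for _ in range(M):
            x_sample = G(obs, v)                        # imagination step
            p = D(obs, v, x_sample)                     # inspection step
            H += -(p * p.clamp_min(1e-12).log()).sum().item()
        H /= M                                          # Eq. 7, Monte Carlo
        if H < best_H:
            best_v, best_H = v, H
    return best_v, best_H
```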
2,019
null
SP:99fd9fac1678bb46d41967f397f237561a3890d3
[ "Paper proposes a method for CL. The method is based on hypernetworks. These networks are a metamodel, which produce the parameters (from a task-conditioned embedding) which will be used in the main network. Preventing forgetting in the main network is now, replaced by preventing forgetting in the hypernetwork. This is done by imposing a regularization on the hypernetwork outcome, imposing that the generated weights should be similar for previous tasks (similar to Li & Hoiem who impose this on the network outputs). In addition, the paper proposes chunking, which refers to using an additional set of chunk, embeddings which are shared for all tasks, which allow compressing the hypernetwork. Furthermore, they propose an extension that allows for image replay (this is not an easy extension and an impressive contribution on itself, but maybe confusing for the current paper).", "This paper proposes to use hypernetwork to prevent catastrophic forgetting. In deep learning, the information of the samples are converted to parameters during the training process, however, future training process could interfere with the information from the previous tasks. One of the method to prevent forgetting is to use reheasal, which retrains the network with previous data. The mechanism of this work is to store the previous samples as a trained point in the parameter space, so that a set of points in the original space is stored and thus rehearsed as one point in the parameter space, this saves both the memory and computation." ]
Artificial neural networks suffer from catastrophic forgetting when they are sequentially trained on multiple tasks. To overcome this problem, we present a novel approach based on task-conditioned hypernetworks, i.e., networks that generate the weights of a target model based on task identity. Continual learning (CL) is less difficult for this class of models thanks to a simple key feature: instead of recalling the input-output relations of all previously seen data, task-conditioned hypernetworks only require rehearsing task-specific weight realizations, which can be maintained in memory using a simple regularizer. Besides achieving state-of-the-art performance on standard CL benchmarks, additional experiments on long task sequences reveal that task-conditioned hypernetworks display a very large capacity to retain previous memories. Notably, such long memory lifetimes are achieved in a compressive regime, when the number of trainable hypernetwork weights is comparable or smaller than target network size. We provide insight into the structure of low-dimensional task embedding spaces (the input space of the hypernetwork) and show that task-conditioned hypernetworks demonstrate transfer learning. Finally, forward information transfer is further supported by empirical results on a challenging CL benchmark based on the CIFAR-10/100 image datasets.
[ { "affiliations": [], "name": "Johannes von Oswald" }, { "affiliations": [], "name": "Christian Henning" }, { "affiliations": [], "name": "João Sacramento" }, { "affiliations": [], "name": "Benjamin F. Grewe" } ]
[ { "authors": [ "Ari S. Benjamin", "David Rolnick", "Konrad Kording" ], "title": "Measuring and regularizing networks in function space", "venue": "arXiv preprint arXiv:1805.08289,", "year": 2018 }, { "authors": [ "Luca Bertinetto", "João F. Henriques", "Jack Valmadre", "Philip Torr", "Andrea Vedaldi" ], "title": "Learning feedforward one-shot learners", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Johanni Brea", "Robert Urbanczik", "Walter Senn" ], "title": "A Normative Theory of Forgetting: Lessons from the Fruit Fly", "venue": "PLOS Computational Biology,", "year": 2014 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale GAN training for high fidelity natural image synthesis", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Blake Camp", "Jaya Krishna Mandivarapu", "Rolando Estrada" ], "title": "Self-net: Lifelong learning via continual self-modeling", "venue": "arXiv preprint arXiv:1805.10354,", "year": 2018 }, { "authors": [ "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale adversarial representation learning", "venue": "arXiv preprint arXiv:1907.02544,", "year": 2019 }, { "authors": [ "Sebastian Farquhar", "Yarin Gal" ], "title": "Towards robust evaluations of continual learning", "venue": "arXiv preprint arXiv:1805.09733,", "year": 2018 }, { "authors": [ "Chrisantha Fernando", "Dylan Banarse", "Charles Blundell", "Yori Zwols", "David Ha", "Andrei A Rusu", "Alexander Pritzel", "Daan Wierstra" ], "title": "Pathnet: Evolution channels gradient descent in super neural networks", "venue": "arXiv preprint arXiv:1701.08734,", "year": 2017 }, { "authors": [ "Robert M. French" ], "title": "Catastrophic forgetting in connectionist networks", "venue": "Trends in Cognitive Sciences,", "year": 1999 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "Advances in Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Boris Hanin" ], "title": "Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations", "venue": "arXiv preprint:", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Xu He", "Herbert Jaeger" ], "title": "Overcoming catastrophic interference using conceptor-aided backpropagation", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Xu He", "Jakub Sygnowski", "Alexandre Galashov", "Andrei A Rusu", "Yee Whye Teh", "Razvan Pascanu" ], "title": "Task agnostic continual learning via meta learning", "venue": null, "year": 1906 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "venue": "arXiv preprint arXiv:1610.02136,", "year": 2016 }, { "authors": [ "Christian Henning", "Johannes von Oswald", "João Sacramento", "Simone Carlo Surace", "Jean-Pascal Pfister", "Benjamin F Grewe" ], "title": "Approximating the predictive distribution via adversarially-trained hypernetworks", "venue": "In NeurIPS Bayesian Deep Learning Workshop,", "year": 2018 }, { "authors": [ 
"Wenpeng Hu", "Zhou Lin", "Bing Liu", "Chongyang Tao", "Zhengwei Tao", "Jinwen Ma", "Dongyan Zhao", "Rui Yan" ], "title": "Overcoming catastrophic forgetting for continual learning via model adaptation", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ferenc Huszár" ], "title": "Note on the quadratic penalties in elastic weight consolidation", "venue": "Proceedings of the National Academy of Sciences,", "year": 2018 }, { "authors": [ "Herbert Jaeger" ], "title": "Controlling Recurrent Neural Networks by Conceptors", "venue": "arXiv preprint:", "year": 2014 }, { "authors": [ "Xu Jia", "Bert De Brabandere", "Tinne Tuytelaars", "Luc V Gool" ], "title": "Dynamic Filter Networks", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A Method for Stochastic Optimization", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-Encoding Variational Bayes", "venue": "In 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A. Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska", "Demis Hassabis", "Claudia Clopath", "Dharshan Kumaran", "Raia Hadsell" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the National Academy of Sciences,", "year": 2017 }, { "authors": [ "Dharshan Kumaran", "Demis Hassabis", "James L. McClelland" ], "title": "What Learning Systems do Intelligent Agents Need? Complementary Learning Systems Theory Updated", "venue": "Trends in Cognitive Sciences,", "year": 2016 }, { "authors": [ "Andrew K Lampinen", "James L McClelland" ], "title": "Embedded meta-learning: Toward more flexible deep-learning models", "venue": "arXiv preprint arXiv:1905.09950,", "year": 2019 }, { "authors": [ "Moshe Leshno", "Shimon Schocken" ], "title": "Multilayer feedforward networks with a nonpolynomial activation function can approximate any function", "venue": "Neural Networks,", "year": 1993 }, { "authors": [ "Z. Li", "D. 
Hoiem" ], "title": "Learning without Forgetting", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "Shiyu Liang", "Yixuan Li", "R Srikant" ], "title": "Enhancing the reliability of out-of-distribution image detection in neural networks", "venue": "arXiv preprint arXiv:1706.02690,", "year": 2017 }, { "authors": [ "Christos Louizos", "Max Welling" ], "title": "Multiplicative Normalizing Flows for Variational Bayesian Neural Networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning Volume 70,", "year": 2017 }, { "authors": [ "Mario Lučić", "Michael Tschannen", "Marvin Ritter", "Xiaohua Zhai", "Olivier Bachem", "Sylvain Gelly" ], "title": "High-fidelity image generation with fewer labels", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Arun Mallya", "Svetlana Lazebnik" ], "title": "Packnet: Adding multiple tasks to a single network by iterative pruning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Valerio Mante", "David Sussillo", "Krishna V. Shenoy", "William T. Newsome" ], "title": "Context-dependent computation by recurrent dynamics in prefrontal cortex", "venue": null, "year": 2013 }, { "authors": [ "Xudong Mao", "Qing Li", "Haoran Xie", "Raymond YK Lau", "Zhen Wang", "Stephen Paul Smolley" ], "title": "Least squares generative adversarial networks", "venue": "In Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Eve Marder" ], "title": "Neuromodulation of Neuronal Circuits: Back to the Future", "venue": null, "year": 2012 }, { "authors": [ "Nicolas Y Masse", "Gregory D Grant", "David J Freedman" ], "title": "Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization", "venue": "Proceedings of the National Academy of Sciences,", "year": 2018 }, { "authors": [ "Michael McCloskey", "Neal J. Cohen" ], "title": "Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem", "venue": null, "year": 1989 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "German I. Parisi", "Ronald Kemker", "Jose L. Part", "Christopher Kanan", "Stefan Wermter" ], "title": "Continual lifelong learning with neural networks: A review", "venue": "Neural Networks,", "year": 2019 }, { "authors": [ "Nick Pawlowski", "Andrew Brock", "Matthew C.H. 
Lee", "Martin Rajchl", "Ben Glocker" ], "title": "Implicit Weight Uncertainty in Neural Networks", "venue": "arXiv preprint arXiv:1711.01297,", "year": 2017 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic Backpropagation and Approximate Inference in Deep Generative Models", "venue": "In Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32,", "year": 2014 }, { "authors": [ "Hippolyt Ritter", "Aleksandar Botev", "David Barber" ], "title": "Online structured laplace approximations for overcoming catastrophic forgetting", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Anthony Robins" ], "title": "Catastrophic Forgetting, Rehearsal and Pseudorehearsal", "venue": "Connection Science,", "year": 1995 }, { "authors": [ "David Rolnick", "Arun Ahuja", "Jonathan Schwarz", "Timothy P Lillicrap", "Greg Wayne" ], "title": "Experience replay for continual learning", "venue": "arXiv preprint arXiv:1811.11682,", "year": 2018 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Learning to control fast-weight memories: An alternative to dynamic recurrent networks", "venue": "Neural Computation,", "year": 1992 }, { "authors": [ "Jonathan Schwarz", "Wojciech Czarnecki", "Jelena Luketina", "Agnieszka Grabska-Barwinska", "Yee Whye Teh", "Razvan Pascanu", "Raia Hadsell" ], "title": "Progress & compress: A scalable framework for continual learning", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Joan Serra", "Didac Suris", "Marius Miron", "Alexandros Karatzoglou" ], "title": "Overcoming catastrophic forgetting with hard attention to the task", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Hanul Shin", "Jung Kwon Lee", "Jaehong Kim", "Jiwon Kim" ], "title": "Continual Learning with Deep Generative Replay", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Jake P. Stroud", "Mason A. Porter", "Guillaume Hennequin", "Tim P. Vogels" ], "title": "Motor primitives in space and time via targeted gain modulation in cortical networks", "venue": "Nature Neuroscience,", "year": 2018 }, { "authors": [ "Siddharth Swaroop", "Cuong V Nguyen", "Thang D Bui", "Richard E Turner" ], "title": "Improving and understanding variational continual learning", "venue": "Continual Learning Workshop at NeurIPS,", "year": 2018 }, { "authors": [ "Gido M. van de Ven", "Andreas S. Tolias" ], "title": "Generative replay with feedback connections as a general strategy for continual learning", "venue": "arXiv preprint arXiv:1809.10635,", "year": 2018 }, { "authors": [ "Gido M. van de Ven", "Andreas S. 
Tolias" ], "title": "Three scenarios for continual learning", "venue": "arXiv preprint arXiv:1904.07734,", "year": 2019 }, { "authors": [ "Chenshen Wu", "Luis Herranz", "Xialei Liu", "yaxing wang", "Joost van de Weijer", "Bogdan Raducanu" ], "title": "Memory Replay GANs: Learning to Generate New Categories without Forgetting", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Friedemann Zenke", "Ben Poole", "Surya Ganguli" ], "title": "Continual Learning Through Synaptic Intelligence", "venue": "In Proceedings of the 34th International Conference on Machine Learning - Volume 70,", "year": 2017 }, { "authors": [ "Wu" ], "title": "Recent developments in the GAN literature already allude", "venue": null, "year": 2018 }, { "authors": [ "Mao" ], "title": "ADDITIONAL EXPERIMENTAL DETAILS All experiments are conducted using 16 NVIDIA GeForce RTX 2080 TI graphics cards. For simplicity, we decided to always keep the previous task embeddings e, t = 1", "venue": null, "year": 2080 }, { "authors": [ "He" ], "title": "2016)) and again produce the weights of this target network by a hypernetwork", "venue": null, "year": 2016 }, { "authors": [ "HAT. Serra" ], "title": "|Θtrgt| = 1.37", "venue": null, "year": 2018 }, { "authors": [ "HAT HNET", "Serra" ], "title": "Task-averaged test accuracy on the PermutedMNIST experiment with T = 10 and T = 100 tasks (’P10’, ’P100’) with three different target network sizes, i.e., three fully connected neural networks with hidden layer sizes of (100", "venue": null, "year": 2018 }, { "authors": [ "Serra" ], "title": "reran HAT for PermutedMNIST-100 with code provided at https://github.com/joansj/hat, and for PermutedMNIST-10 with hidden layer size (1000, 1000) to match our setup. HAT and HNET perform similarly on large target networks for PermutedMNIST-10, while HNET is able to achieve larger performances with smaller target networks as well as for long task sequences", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "We assume that a neural network f(x,Θ) with trainable weights Θ is given data from a set of tasks {(X(1),Y(1)), . . . , (X(T ),Y(T ))}, with input samples X(t) = {x(t,i)}nti=1 and output samples Y(t) = {y(t,i)}nti=1, where nt ≡ |X(t)|. A standard training approach learns the model using data from all tasks at once. However, this is not always possible in real-world problems, nor desirable in an online learning setting. Continual learning (CL) refers to an online learning setup in which tasks are presented sequentially (see van de Ven & Tolias, 2019, for a recent review on CL). In CL, when learning a new task t, starting with weights Θ(t−1) and observing only (X(t),Y(t)), the goal is to find a new set of parameters Θ(t) that (1) retains (no catastrophic forgetting) or (2) improves (positive backward transfer) performance on previous tasks compared to Θ(t−1) and (3) solves the new task t potentially utilizing previously acquired knowledge (positive forward transfer). Achieving these goals is non-trivial, and a longstanding issue in neural networks research.\nHere, we propose addressing catastrophic forgetting at the meta level: instead of directly attempting to retain f(x,Θ) for previous tasks, we fix the outputs of a metamodel fh(e,Θh) termed task-conditioned hypernetwork which maps a task embedding e to weights Θ. Now, a single point has to be memorized per task. To motivate such approach, we perform a thought experiment: we assume that we are allowed to store all inputs {X(1), . . . ,X(T )} seen so far, and to use these data to compute model outputs corresponding to Θ(T−1). In this idealized setting, one can avoid forgetting by simply mixing data from the current task with data from the past, {(X(1), Ŷ(1)), . . . , (X(T−1), Ŷ(T−1)), (X(T ),Y(T ))}, where Ŷ(t) refers to a set of synthetic targets generated using the model itself f( · ,Θ(t−1)). Hence, by training to retain previously acquired input-output mappings, one can obtain a sequential algorithm in principle as powerful as multi-task learning. Multi-task learning, where all tasks are learned\nsimultaneously, can be seen as a CL upper-bound. The strategy described above has been termed rehearsal (Robins, 1995). However, storing previous task data violates our CL desiderata.\nTherefore, we introduce a change in perspective and move from the challenge of maintaining individual input-output data points to the problem of maintaining sets of parameters {Θ(t)}, without explicitly storing them. To achieve this, we train the metamodel parameters Θh analogous to the above outlined learning scheme, where synthetic targets now correspond to weight configurations that are suitable for previous tasks. This exchanges the storage of an entire dataset by a single low-dimensional task descriptor, yielding a massive memory saving in all but the simplest of tasks. Despite relying on regularization, our approach is a conceptual departure from previous algorithms based on regularization in weight (e.g., Kirkpatrick et al., 2017; Zenke et al., 2017) or activation space (e.g., He & Jaeger, 2018).\nOur experimental results show that task-conditioned hypernetworks do not suffer from catastrophic forgetting on a set of standard CL benchmarks. Remarkably, they are capable of retaining memories with practically no decrease in performance, when presented with very long sequences of tasks. 
Thanks to the expressive power of neural networks, task-conditioned hypernetworks exploit task-to-task similarities and transfer information forward in time to future tasks. Finally, the task-conditional metamodelling perspective that we put forth is generic, as it does not depend on the specifics of the target network architecture. We exploit this key principle and show that the very same metamodelling framework extends to, and can improve, an important class of CL methods known as generative replay methods, which are current state-of-the-art performers in many practical problems (Shin et al., 2017; Wu et al., 2018; van de Ven & Tolias, 2018)." }, { "heading": "2 MODEL", "text": "" }, { "heading": "2.1 TASK-CONDITIONED HYPERNETWORKS", "text": "Hypernetworks parameterize target models. The centerpiece of our approach to continual learning is the hypernetwork, Fig. 1a. Instead of learning the parameters Θ_trgt of a particular function f_trgt directly (the target model), we learn the parameters Θ_h of a metamodel. The output of such a metamodel, the hypernetwork, is Θ_trgt. Hypernetworks can therefore be thought of as weight generators, which were originally introduced to dynamically parameterize models in a compressed form (Ha et al., 2017; Schmidhuber, 1992; Bertinetto et al., 2016; Jia et al., 2016).\nContinual learning with hypernetwork output regularization. One approach to avoid catastrophic forgetting is to store data from previous tasks and the corresponding model outputs, and then fix such outputs. This can be achieved using an output regularizer of the following form, where past outputs play the role of pseudo-targets (Robins, 1995; Li & Hoiem, 2018; Benjamin et al., 2018):\nL_output = ∑_{t=1}^{T−1} ∑_{i=1}^{|X^{(t)}|} ‖f(x^{(t,i)}, Θ*) − f(x^{(t,i)}, Θ)‖², (1)\nIn the equation above, Θ* is the set of parameters before attempting to learn task T, and f is the learner. This approach, however, requires storing and iterating over previous data, a process that is known as rehearsing. This is potentially expensive memory-wise and not strictly online learning. A possible workaround is to generate the pseudo-targets by evaluating f on random patterns (Robins, 1995) or on the current task dataset (Li & Hoiem, 2018). However, this does not necessarily fix the behavior of the function f in the regions of interest.\nHypernetworks sidestep this problem naturally. In target network weight space, a single point (i.e., one set of weights) has to be fixed per task. This can be efficiently achieved with task-conditioned hypernetworks, by fixing the hypernetwork output on the appropriate task embedding.\nSimilar to Benjamin et al. (2018), we use a two-step optimization procedure to introduce memory-preserving hypernetwork output constraints. First, we compute a candidate change ∆Θ_h which minimizes the current task loss L^{(T)}_task = L_task(Θ_h, e^{(T)}, X^{(T)}, Y^{(T)}) with respect to Θ_h. The candidate ∆Θ_h is obtained with an optimizer of choice (we use Adam throughout; Kingma & Ba, 2015). The actual parameter change is then computed by minimizing the following total loss:\nL_total = L_task(Θ_h, e^{(T)}, X^{(T)}, Y^{(T)}) + L_output(Θ*_h, Θ_h, ∆Θ_h, {e^{(t)}})\n= L_task(Θ_h, e^{(T)}, X^{(T)}, Y^{(T)}) + (β_output/(T − 1)) ∑_{t=1}^{T−1} ‖f_h(e^{(t)}, Θ*_h) − f_h(e^{(t)}, Θ_h + ∆Θ_h)‖², (2)\nwhere Θ*_h is the set of hypernetwork parameters before attempting to learn task T, ∆Θ_h is considered fixed, and β_output is a hyperparameter that controls the strength of the regularizer.
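To make the two-step procedure concrete, a minimal PyTorch-style sketch of one training step with the output regularizer of Eq. 2. For brevity the lookahead ∆Θ_h is dropped (the text notes it yields only a minor improvement), and the hypernetwork interface `hnet(e)` returning a flat tensor of target weights is an illustrative assumption, not the authors' exact code:

```python
import torch

def hnet_cl_step(hnet, task_loss_fn, e_T, batch, past_embs, past_targets,
                 beta, optimizer):
    """One update on task T (sketch of Eq. 2, without the lookahead).
    `past_targets[t]` caches detached copies of f_h(e^(t), Theta_h*),
    computed once before training on task T; `hnet(e)` returns the
    generated target-network weights as one flat tensor. `e_T` must be
    registered with the optimizer so the task embedding is learned too."""
    optimizer.zero_grad()
    loss = task_loss_fn(hnet(e_T), batch)           # current task loss
    if past_embs:
        reg = sum(((hnet(e) - tgt) ** 2).sum()      # fix old hnet outputs
                  for e, tgt in zip(past_embs, past_targets))
        loss = loss + beta / len(past_embs) * reg   # beta/(T-1) scaling
    loss.backward()
    optimizer.step()                                # updates Theta_h, e_T
    return loss.item()
```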
In Appendix D, we run a sensitivity analysis on β_output and experiment with a more efficient stochastic regularizer where the averaging is performed over a random subset of past tasks.

More computationally-intensive algorithms that involve a full inner-loop refinement, or that use second-order gradient information by backpropagating through ΔΘ_h, could be applied. However, we found empirically that our one-step correction worked well. Exploratory hyperparameter scans revealed that the inclusion of the lookahead ΔΘ_h in (2) brought a minor increase in performance, even when computed with a cheap one-step procedure. Note that unlike in Eq. 1, the memory-preserving term L_output does not depend on past data. Memory of previous tasks enters only through the collection of task embeddings \{e^{(t)}\}_{t=1}^{T-1}.

Learned task embeddings. Task embeddings are differentiable deterministic parameters that can be learned, just like Θ_h. At every learning step of our algorithm, we also update the current task embedding e^{(T)} to minimize the task loss L^{(T)}_{task}. After learning the task, the final embedding is saved and added to the collection \{e^{(t)}\}." }, { "heading": "2.2 MODEL COMPRESSION WITH CHUNKED HYPERNETWORKS", "text": "Chunking. In a straightforward implementation, a hypernetwork produces the entire set of weights of a target neural network. For modern deep neural networks, this is a very high-dimensional output. However, hypernetworks can be invoked iteratively, filling in only part of the target model at each step, in chunks (Ha et al., 2017; Pawlowski et al., 2017). This strategy allows applying smaller hypernetworks that are reusable. Interestingly, with chunked hypernetworks it is possible to solve tasks in a compressive regime, where the number of learned parameters (those of the hypernetwork) is effectively smaller than the number of target network parameters.

Chunk embeddings and network partitioning. Reapplying the same hypernetwork multiple times introduces weight sharing across partitions of the target network, which is usually not desirable. To allow for a flexible parameterization of the target network, we introduce a set C = \{c_i\}_{i=1}^{N_C} of chunk embeddings, which are used as an additional input to the hypernetwork, Fig. 1b. Thus, the full set of target network parameters Θ_trgt = [f_h(e, c_1), ..., f_h(e, c_{N_C})] is produced by iterating over C, keeping the task embedding e fixed. This way, the hypernetwork can produce distinct weights for each chunk (see the code sketch at the end of this subsection). Furthermore, chunk embeddings, just like task embeddings, are ordinary deterministic parameters that we learn via backpropagation. For simplicity, we use a shared set of chunk embeddings for all tasks and we do not explore special target network partitioning strategies.

How flexible is our approach? Chunked neural networks can in principle approximate any target weight configuration arbitrarily well. For completeness, we state this formally in Appendix E.
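As a concrete illustration of the chunking scheme, the following is a minimal PyTorch sketch of a chunked hypernetwork. All sizes here are placeholders rather than the configurations used in our experiments (those are listed in Appendix C).

```python
import torch
import torch.nn as nn

class ChunkedHypernetwork(nn.Module):
    """Generate target-network weights chunk by chunk (illustrative sketch)."""

    def __init__(self, n_target_weights, chunk_size=1000,
                 task_emb_dim=8, chunk_emb_dim=8, n_hidden=50):
        super().__init__()
        self.n_target_weights = n_target_weights
        self.n_chunks = -(-n_target_weights // chunk_size)  # ceil division
        # Chunk embeddings c_i: ordinary learned parameters, shared by all tasks.
        self.chunk_embs = nn.Parameter(
            0.1 * torch.randn(self.n_chunks, chunk_emb_dim))
        self.net = nn.Sequential(
            nn.Linear(task_emb_dim + chunk_emb_dim, n_hidden), nn.ELU(),
            nn.Linear(n_hidden, chunk_size))

    def forward(self, task_emb):
        # Reapply the same small network once per chunk, varying only c_i
        # while keeping the task embedding e fixed.
        inp = torch.cat([task_emb.expand(self.n_chunks, -1),
                         self.chunk_embs], dim=1)
        out = self.net(inp).flatten()
        return out[:self.n_target_weights]  # reshaped externally into layers
```

A compressive regime corresponds to choosing the hypernetwork small enough that its own parameter count (plus embeddings) falls below `n_target_weights`.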
" }, { "heading": "2.3 CONTEXT-FREE INFERENCE: UNKNOWN TASK IDENTITY", "text": "Determining which task to solve from input data. Our hypernetwork requires a task embedding input to generate target model weights. In certain CL applications, an appropriate embedding can be immediately selected as task identity is unambiguous, or can be readily inferred from contextual clues. In other cases, knowledge of the task at hand is not explicitly available during inference. In the following, we show that our metamodelling framework generalizes to such situations. In particular, we consider the problem of inferring which task to solve from a given input pattern, a noted benchmark challenge (Farquhar & Gal, 2018; van de Ven & Tolias, 2019). Below, we explore two different strategies that leverage task-conditioned hypernetworks in this CL setting.

Task-dependent predictive uncertainty. Neural network models are increasingly reliable in signalling novelty and appropriately handling out-of-distribution data. For categorical target distributions, the network ideally produces a flat, high-entropy output for unseen data and, conversely, a peaked, low-entropy response for in-distribution data (Hendrycks & Gimpel, 2016; Liang et al., 2017). This suggests a first, simple method for task inference (HNET+ENT). Given an input pattern for which task identity is unknown, we pick the task embedding which yields the lowest predictive uncertainty, as quantified by output distribution entropy. While this method relies on accurate novelty detection, which is in itself a far from solved research problem, it is otherwise straightforward to implement, and no additional learning or model is required to infer task identity.

Hypernetwork-protected synthetic replay. When a generative model is available, catastrophic forgetting can be circumvented by mixing current task data with replayed past synthetic data (for recent work see Shin et al., 2017; Wu et al., 2018). Besides protecting the generative model itself, synthetic data can protect another model of interest, for example, another discriminative model. This conceptually simple strategy is in practice often the state-of-the-art solution to CL (van de Ven & Tolias, 2019). Inspired by these successes, we explore augmenting our system with a replay network, here a standard variational autoencoder (VAE; Kingma & Welling, 2014) (but see Appendix F for experiments with a generative adversarial network, Goodfellow et al., 2014).

Synthetic replay is a strong, but not perfect, CL mechanism, as the generative model is subject to drift, and errors tend to accumulate and amplify with time. Here, we build upon the following key observation: just like the target network, the generator of the replay model can be specified by a hypernetwork. This allows protecting it with the output regularizer, Eq. 2, rather than with the model's own replay data, as done in related work. Thus, in this combined approach, both synthetic replay and task-conditional metamodelling act in tandem to reduce forgetting.

We explore hypernetwork-protected replay in two distinct setups. First, we consider a minimalist architecture (HNET+R), where only the replay model, and not the target classifier, is parameterized by a hypernetwork. Here, forgetting in the target network is obviated by mixing current data with synthetic data. Synthetic target output values for previous tasks are generated using a soft targets method, i.e., by simply evaluating the target function before learning the new task on synthetic input data. Second (HNET+TIR), we introduce an auxiliary task inference classifier, protected using synthetic replay data and trained to predict task identity from input patterns. This architecture requires additional modelling, but it is likely to work well when tasks are strongly dissimilar. Furthermore, the task inference subsystem can be readily applied to process more general forms of contextual information, beyond the current input pattern.
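The HNET+ENT strategy described above requires no learned components beyond the trained system itself; a minimal sketch (function names are again illustrative):

```python
import torch
import torch.nn.functional as F

def infer_task_by_entropy(hnet, target_forward, embeddings, x):
    """Pick the task embedding yielding the lowest predictive entropy on x."""
    entropies = []
    with torch.no_grad():
        for e in embeddings:
            log_p = F.log_softmax(target_forward(x, hnet(e)), dim=-1)
            h = -(log_p.exp() * log_p).sum(dim=-1).mean()  # mean over batch
            entropies.append(h.item())
    return min(range(len(entropies)), key=entropies.__getitem__)
```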
We provide additional details, including network architectures and the loss functions that are optimized, in Appendices B and C." }, { "heading": "3 RESULTS", "text": "We evaluate our method on a set of standard image classification benchmarks on the MNIST, CIFAR-10 and CIFAR-100 public datasets1. Our main aims are to (1) study the memory retention capabilities of task-conditioned hypernetworks across three continual learning settings, and (2) investigate information transfer across tasks that are learned sequentially.

Continual learning scenarios. In our experiments we consider three different CL scenarios (van de Ven & Tolias, 2019). In CL1, the task identity is given to the system. This is arguably the standard sequential learning scenario, and the one we consider unless noted otherwise. In CL2, task identity is unknown to the system, but it does not need to be explicitly determined. A target network with a fixed head is required to solve multiple tasks. In CL3, task identity has to be explicitly inferred. It has been argued that this scenario is the most natural, and the one that tends to be harder for neural networks (Farquhar & Gal, 2018; van de Ven & Tolias, 2019).

Experimental details. Aiming at comparability, for the experiments on the MNIST dataset we model the target network as a fully-connected network and set all hyperparameters after van de Ven & Tolias (2019), who recently reviewed and compared a large set of CL algorithms. For our CIFAR experiments, we opt for a ResNet-32 target neural network (He et al., 2016) to assess the scalability of our method. A summary description of the architectures and particular hyperparameter choices, as well as additional experimental details, is provided in Appendix C. We emphasize that, in all our experiments, the number of hypernetwork parameters is always smaller than or equal to the number of parameters of the models we compare with.

Nonlinear regression toy problem. To illustrate our approach, we first consider a simple nonlinear regression problem, where the function to be approximated is scalar-valued, Fig. 2. Here, a sequence of polynomial functions of increasing degree has to be inferred from noisy data. This motivates the continual learning problem: when learning each task in succession by modifying Θ_h with the memory-preserving regularizer turned off (β_output = 0, see Eq. 2), the network learns the last task but forgets previous ones, Fig. 2c. The regularizer protects old solutions, Fig. 2a, and performance is comparable to an offline non-continual learner, Fig. 2b.

Permuted MNIST benchmark. Next, we study the permuted MNIST benchmark. This problem is set as follows. First, the learner is presented with the full MNIST dataset. Subsequently, novel tasks are obtained by applying a random permutation to the input image pixels. This process can be repeated to yield a long task sequence, with a typical length of T = 10 tasks. Given the low similarity of the generated tasks, permuted MNIST is well suited to study the memory capacity of a continual learner. For T = 10, we find that task-conditioned hypernetworks are state-of-the-art on CL1, Table 1. Interestingly, inferring tasks through the predictive distribution entropy (HNET+ENT) works well on the permuted MNIST benchmark. Despite the simplicity of the method, both synaptic intelligence (SI; Zenke et al., 2017) and online elastic weight consolidation (EWC; Schwarz et al., 2018) are outperformed on CL3 by a large margin.
When complemented with generative replay methods, task-conditioned hypernetworks (HNET+TIR and HNET+R) are the best performers on all three CL scenarios.

1Source code is available under https://github.com/chrhenning/hypercl.

Figure 3: |Θ_trgt| versus task-averaged test set accuracy after learning all tasks (labelled 'final', in red) and immediately after learning a task (labelled 'during', in purple) for the PermutedMNIST-10 benchmark. Hypernetworks allow for model compression and perform well even when the number of target model parameters exceeds their own. Performance decays nonlinearly: accuracies stay approximately constant for a wide range of compression ratios below unity. Hyperparameters were tuned once for compression ratio ≈ 1 and were then used for all compression ratios. Shaded areas denote STD (a) resp. SEM (b) across 5 random seeds.

Performance differences become larger in the long sequence limit, Fig. 3a. For longer task sequences (T = 100), SI and DGR+distill (Shin et al., 2017; van de Ven & Tolias, 2018) degrade gracefully, while the regularization strength of online EWC prevents the method from achieving high accuracy (see Fig. A6 for a hyperparameter search on related work). Notably, task-conditioned hypernetworks show minimal memory decay and find high performance solutions. Because the hypernetwork operates in a compressive regime (see Fig. 3b and Fig. A7 for an exploration of compression ratios), our results do not naively rely on an increase in the number of parameters. Rather, they suggest that previous methods are not yet capable of making full use of target model capacity in a CL setting. We report a set of extended results on this benchmark in Appendix D, including a study of CL2/3 (T = 100), where HNET+TIR strongly outperforms the related work.

Split MNIST benchmark. Split MNIST is another popular CL benchmark, designed to introduce task overlap. In this problem, the various digits are sequentially paired and used to form five binary classification tasks. Here, we find that task-conditioned hypernetworks are the best overall performers. In particular, HNET+R improves upon the previous state-of-the-art method DGR+distill on both CL2 and CL3, almost saturating the CL2 upper bound for replay models (Appendix D). Since HNET+R is essentially hypernetwork-protected DGR, these results demonstrate the generality of task-conditioned hypernetworks as effective memory protectors. To further support this, in Appendix F we show that our replay models (we experiment with both a VAE and a GAN) can learn the full MNIST dataset in a class-incremental manner. Finally, HNET+ENT again outperforms both EWC and SI, without any generative modelling.

On the split MNIST problem, tasks overlap and therefore continual learners can transfer information across tasks. To analyze such effects, we study task-conditioned hypernetworks with two-dimensional task embedding spaces, which can be easily visualized. Despite learning happening continually, we find that the algorithm converges to a hypernetwork configuration that can produce target model parameters that simultaneously solve old and new tasks, Fig. 4, given the appropriate task embedding.

Split CIFAR-10/100 benchmark. Finally, we study a more challenging benchmark, where the learner is first asked to solve the full CIFAR-10 classification task and is then presented with sets of ten classes from the CIFAR-100 dataset. We perform experiments both with a high-performance ResNet-32 target network architecture (Fig. 5) and with a shallower model (Fig. A3) that we exactly reproduced from previous work (Zenke et al., 2017). Remarkably, on the ResNet-32 model, we find that task-conditioned hypernetworks essentially eliminate forgetting altogether. Furthermore, forward information transfer takes place; knowledge from previous tasks allows the network to find better solutions than when learning each task individually from initial conditions. Interestingly, forward transfer is stronger in the shallow model experiments (Fig. A3), where we otherwise find that our method performs comparably to SI." }, { "heading": "4 DISCUSSION", "text": "Bayesian accounts of continual learning. According to the standard Bayesian CL perspective, a posterior parameter distribution is recursively updated using Bayes' rule as tasks arrive (Kirkpatrick et al., 2017; Huszár, 2018; Nguyen et al., 2018).

Figure 5: Split CIFAR-10/100 CL benchmark. Test set accuracies (mean ± STD, n = 5) on the entire CIFAR-10 dataset and subsequent CIFAR-100 splits of ten classes. Our hypernetwork-protected ResNet-32 displays virtually no forgetting; final averaged performance (hnet, in red) matches the immediate one (hnet-during, in blue). Furthermore, information is transferred across tasks, as performance is higher than when training each task from scratch (purple). Disabling our regularizer leads to strong forgetting (in yellow).

While this approach is theoretically sound, in practice, the approximate inference methods that are typically preferred can lead to stiff models, as a compromise solution that suits all tasks has to be found within the mode determined by the first task. Such a restriction does not apply to hypernetworks, which can in principle model complex multimodal distributions (Louizos & Welling, 2017; Pawlowski et al., 2017; Henning et al., 2018). Thus, rich, hypernetwork-modelled priors are one avenue of improvement for Bayesian CL methods. Interestingly, task-conditioning offers an alternative possibility: instead of consolidating every task onto a single distribution, a shared task-conditioned hypernetwork could be leveraged to model a set of parameter posterior distributions. This conditional metamodel naturally extends our framework to the Bayesian learning setting. Such an approach will likely benefit from additional flexibility, compared to conventional recursive Bayesian updating.

Related approaches that rely on task-conditioning. Our model fits within, and in certain ways generalizes, previous CL methods that condition network computation on task descriptors. Task-conditioning is commonly implemented using multiplicative masks at the level of modules (Rusu et al., 2016; Fernando et al., 2017), neurons (Serra et al., 2018; Masse et al., 2018) or weights (Mallya & Lazebnik, 2018). Such methods work best with large networks and come with a significant storage overhead, which typically scales with the number of tasks. Our approach differs by explicitly modelling the full parameter space using a metamodel, the hypernetwork. Thanks to this metamodel, generalization in parameter and task space is possible, and task-to-task dependencies can be exploited to efficiently represent solutions and transfer present knowledge to future problems.
Interestingly, similar arguments have been drawn in work developed concurrently to ours (Lampinen & McClelland, 2019), where task embedding spaces are further explored in the context of few-shot learning. In the same vein, and like the approach developed here, recent work in CL generates last-layer network parameters as part of a pipeline to avoid catastrophic forgetting (Hu et al., 2019) or distills parameters onto a contractive auto-encoding model (Camp et al., 2018).

Positive backwards transfer. In its current form, the hypernetwork output regularizer protects previously learned solutions from changing, such that only weak backwards transfer of information can occur. Given the role of selective forgetting and refinement of past memories in achieving intelligent behavior (Brea et al., 2014; Richards & Frankland, 2017), investigating and improving backwards transfer stands as an important direction for future research.

Relevance to systems neuroscience. Uncovering the mechanisms that support continual learning in both brains and artificial neural networks is a long-standing question (McCloskey & Cohen, 1989; French, 1999; Parisi et al., 2019). We close with a speculative systems interpretation (Kumaran et al., 2016; Hassabis et al., 2017) of our work as a model for modulatory top-down signals in cortex. Task embeddings can be seen as low-dimensional context switches, which determine the behavior of a modulatory system, the hypernetwork in our case. According to our model, the hypernetwork would in turn regulate the activity of a target cortical network.

As it stands, implementing a hypernetwork would entail dynamically changing the entire connectivity of a target network, or cortical area. Such a process seems difficult to conceive in the brain. However, this strict literal interpretation can be relaxed. For example, a hypernetwork can output lower-dimensional modulatory signals (Marder, 2012), instead of a full set of weights. This interpretation is consistent with a growing body of work which suggests the involvement of modulatory inputs in implementing context- or task-dependent network mode-switching (Mante et al., 2013; Jaeger, 2014; Stroud et al., 2018; Masse et al., 2018)." }, { "heading": "5 CONCLUSION", "text": "We introduced a novel neural network model, the task-conditioned hypernetwork, that is well-suited for CL problems. A task-conditioned hypernetwork is a metamodel that learns to parameterize target functions that are specified and identified in a compressed form using a task embedding vector. Past tasks are kept in memory using a hypernetwork output regularizer, which penalizes changes in previously found target weight configurations. This approach is scalable and generic, being applicable as a standalone CL method or in combination with generative replay. Our results are state-of-the-art on standard benchmarks and suggest that task-conditioned hypernetworks can achieve long memory lifetimes, as well as transfer information to future tasks, two essential properties of a continual learner." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported by the Swiss National Science Foundation (B.F.G. CRSII5-173721), ETH project funding (B.F.G. ETH-20 19-01) and funding from the Swiss Data Science Center (B.F.G, C17-18, J. v. O. P18-03). Special thanks to Simone Carlo Surace, Adrian Huber, Xu He, Markus Marks, Maria R. Cervera and Jannes Jegminat for discussions, helpful pointers to the CL literature and for feedback on our paper draft."
}, { "heading": "A TASK-CONDITIONED HYPERNETWORKS: MODEL SUMMARY", "text": "In our model, a task-conditioned hypernetwork produces the parameters Θtrgt = fh(e,Θh) of a target neural network. Given one such parameterization, the target model then computes predictions ŷ = ftrgt(x,Θtrgt) based on input data. Learning amounts to adapting the parameters Θh of the hypernetwork, including a set of task embeddings {e(t)}Tt=1, as well as a set of chunk embeddings {ci}NCi=1 in case compression is sought or if the full hypernetwork is too large to be handled directly. To avoid castastrophic forgetting, we introduce an output regularizer which fixes the behavior of the hypernetwork by penalizing changes in target model parameters that are produced for previously learned tasks.\nVariables that need to be stored while learning new tasks. What are the storage requirements of our model, when learning continually?\n1. Memory retention relies on saving one embedding per task. This collection {e(t)}Tt=1 therefore grows linearly with T . Such linear scaling is undesirable asymptotically, but it turns out to be essentially negligible in practice, as each embedding is a single lowdimensional vector (e.g., see Fig. 4 for a run with 2D embeddings).\n2. A frozen snapshot of the hypernetwork parameters Θ∗h , taken before learning a new task, needs to be kept, to evaluate the output regularizer in Eq. 2." }, { "heading": "B ADDITIONAL DETAILS ON HYPERNETWORK-PROTECTED REPLAY MODELS", "text": "Variational autoencoders. For all HNET+TIR and HNET+R experiments reported on the main text we use VAEs as our replay models (Fig. A1a, Kingma & Welling, 2014). Briefly, a VAE consists of an encoder-decoder network pair, where the encoder network processes some input pattern x and its outputs fenc(x) = (µ,σ2) comprise the parameters µ and σ2 (encoded in log domain, to enforce nonnegativity) of a diagonal multivariate Gaussian pZ(z;µ,σ2), which governs the distribution of latent samples z. On the other side of the circuit, the decoder network processes a latent sample z and a one-hot-encoded task identity vector and returns an input pattern reconstruction, fdec(z,1t) = x̂.\nVAEs can preserve memories using a technique called generative replay: when training task T , input samples are generated from the current replay network for old tasks t < T , by varying 1t and drawing latent space samples z. Generated data can be mixed with the current dataset, yielding an augmented dataset X̃ used to relearn model parameters. When protecting a discriminative model, synthetic ‘soft’ targets can be generated by evaluating the network on X̃ . We use this strategy to protect an auxiliary task inference classifier in HNET+TIR, and to protect the main target model in HNET+R.\nHypernetwork-protected replay. In our HNET+TIR and HNET+R experiments, we parameterize the decoder network through a task-conditioned hypernetwork, fh,dec(e,Θh,dec). In combination with our output regularizer, this allows us to take advantage of the memory retention capacity of hypernetworks, now on a generative model.\nThe replay model (encoder, decoder and decoder hypernetwork) is a separate subsystem that is optimized independently from the target network. Its parameters Θenc and Θh,dec are learned by minimizing our regularized loss function, Eq. 2, here with the task-specific term set to the standard VAE objective function,\nLVAE ( X,Θenc,Θh,dec ) = Lrec(X,Θenc,Θdec) + Lprior(X,Θenc,Θdec), (3)\nwith Θdec = fh,dec(e,Θh,dec) introducing the dependence on Θh,dec. 
Hypernetwork-protected replay. In our HNET+TIR and HNET+R experiments, we parameterize the decoder network through a task-conditioned hypernetwork, f_{h,dec}(e, Θ_{h,dec}). In combination with our output regularizer, this allows us to take advantage of the memory retention capacity of hypernetworks, now on a generative model.

The replay model (encoder, decoder and decoder hypernetwork) is a separate subsystem that is optimized independently from the target network. Its parameters Θ_enc and Θ_{h,dec} are learned by minimizing our regularized loss function, Eq. 2, here with the task-specific term set to the standard VAE objective function,

L_{VAE}(X, \Theta_{enc}, \Theta_{h,dec}) = L_{rec}(X, \Theta_{enc}, \Theta_{dec}) + L_{prior}(X, \Theta_{enc}, \Theta_{dec}),   (3)

with Θ_dec = f_{h,dec}(e, Θ_{h,dec}) introducing the dependence on Θ_{h,dec}. L_VAE balances a reconstruction penalty L_rec against a prior-matching penalty L_prior. For our MNIST experiments, we choose binary cross-entropy (in pixel space) as the reconstruction loss, which we write below for a single example x:

L_{rec}(x, \Theta_{enc}, \Theta_{dec}) = L_{xent}(x, f_{dec}(z, 1_{t(x)}, \Theta_{dec})),   (4)

where L_{xent}(t, y) = -\sum_k t_k \log y_k is the cross entropy. For a diagonal Gaussian p_Z, the prior-matching term can be evaluated analytically,

L_{prior} = -\frac{1}{2} \sum_{i=1}^{|z|} \left( 1 + \log \sigma_i^2 - \sigma_i^2 - \mu_i^2 \right).   (5)

Above, z is a sample from p_Z(z; µ(x̃), σ²(x̃)) obtained via the reparameterization trick (Kingma & Welling, 2014; Rezende et al., 2014). This introduces the dependency of L_rec on Θ_enc.

Task inference network (HNET+TIR). In the HNET+TIR setup, we extend our system to include a task inference neural network classifier α(x) parameterized by Θ_TI, where tasks are encoded with a T-dimensional softmax output layer. In both CL2 and CL3 scenarios we use a growing single-head setup for α, and increase the dimensionality of the softmax layer as tasks arrive.

This network is prone to catastrophic forgetting when tasks are learned continually. To prevent this from happening we resort to replay data generated from a hypernetwork-protected VAE, described above. More specifically, we introduce a task inference loss,

L_{TI}(x̃, \Theta_{TI}) = L_{xent}(1_{t(x̃)}, α(x̃, \Theta_{TI})),   (6)

where t(x̃) denotes the correct task identity for a sample x̃ from the augmented dataset X̃ = {X̃^{(1)}, ..., X̃^{(T-1)}, X̃^{(T)}}, with X̃^{(t)} being synthetic data f_dec(z, 1_t, Θ_dec) for t = 1 ... T-1 and X̃^{(T)} = X^{(T)} being the current task data. Importantly, synthetic data is essential to obtain a well-defined objective function for task inference; the cross-entropy loss L_TI requires at least two ground-truth classes to be optimized. Note that replayed data can be generated online by drawing samples z from the prior.

Figure A1: Hypernetwork-protected replay model setups. (a) A hypernetwork-protected VAE, used for the HNET+R and HNET+TIR main text experiments. (b) A hypernetwork-protected GAN, used for our class-incremental learning experiments in Appendix F. (c) A task inference classifier protected with synthetic replay data, used in the HNET+TIR experiments.
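For concreteness, the per-example VAE objective of Eqs. (4)-(5) with the reparameterization trick can be sketched as follows. The decoder weights are assumed to have been produced by the decoder hypernetwork, the encoder is assumed to return (µ, log σ²), and the decoder output is assumed to lie in [0, 1] (e.g., via a sigmoid); names are illustrative.

```python
import torch
import torch.nn.functional as F

def vae_loss(x, one_hot_task, encoder, decoder, dec_weights):
    mu, log_var = encoder(x)
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
    z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
    x_hat = decoder(z, one_hot_task, dec_weights)
    # Reconstruction term, Eq. (4): binary cross-entropy in pixel space.
    rec = F.binary_cross_entropy(x_hat, x, reduction='sum')
    # Prior-matching term, Eq. (5): analytic KL for a diagonal Gaussian.
    prior = -0.5 * torch.sum(1. + log_var - log_var.exp() - mu.pow(2))
    return rec + prior
```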
Hypernetwork-protected GANs. Generative adversarial networks (Goodfellow et al., 2014) have become an established method for generative modelling and tend to produce higher quality images compared to VAEs, even at the scale of datasets as complex as ImageNet (Brock et al., 2019; Lučić et al., 2019; Donahue & Simonyan, 2019). This makes GANs perfect candidates for powerful replay models. A suitable GAN instantiation for CL is the conditional GAN (Mirza & Osindero, 2014) as studied by Wu et al. (2018). Recent developments in the GAN literature already allude to the potential of using hypernetwork-like structures, e.g., when injecting the latent noise (Karras et al., 2019) or when using class-conditional batch normalization (Brock et al., 2019). We propose to go one step further and use a hypernetwork that maps the condition to the full set of generator parameters Θ_gen. Our framework allows training a conditional GAN one condition at a time. This is potentially of general interest, and goes beyond the scope of replay models, since conditional GANs trained in a multi-task fashion as in Brock et al. (2019) require very large computational resources.

For our showcase experiment on class-incremental MNIST learning, Fig. A8, we did not aim to compare to related work and therefore did not tune the hypernetwork to have fewer weights than the target network (for the VAE experiments, we use the same compressive setup as in the main text, see Appendix C). The GAN hypernetwork is a fully-connected chunked hypernetwork with 2 hidden layers of size 25 and 25, followed by an output size of 75,000. We used a learning rate of 0.0001 for both the discriminator and the generator hypernetwork, as well as dropout of 0.4 in the discriminator, and the system is trained for 10,000 iterations per task. We use the Pearson χ² least-squares GAN loss from Mao et al. (2017) in our experiments." }, { "heading": "C ADDITIONAL EXPERIMENTAL DETAILS", "text": "All experiments are conducted using 16 NVIDIA GeForce RTX 2080 TI graphics cards.

For simplicity, we decided to always keep the previous task embeddings e^{(t)}, t = 1, ..., T-1, fixed and only learn the current task embedding e^{(T)}. In general, performance should be improved if the regularizer in Eq. 2 has a separate copy of the task embeddings e^{(t,*)} from before learning the current task, such that e^{(t)} can be adapted. Hence, the targets become f_h(e^{(t,*)}, Θ_h^*) and remain constant while learning task T. This would give the hypernetwork the flexibility to adjust the embeddings, i.e., the preimage of the targets, and therefore to represent any function that includes all desired targets in its image.

Nonlinear regression toy problem. The nonlinear toy regression from Fig. 2 is an illustrative example of a continual learning problem where a set of ground-truth functions {g^{(1)}, ..., g^{(T)}} is given, from which we collect 100 noisy training samples per task {(x, y) | y = g^{(t)}(x) + ε with ε ∼ N(0, σ²I), x ∼ U(X^{(t)})}, where X^{(t)} denotes the input domain of task t. We set σ = 0.05 in this experiment.

We perform 1D regression and choose the following set of tasks:

g^{(1)}(x) = x + 3,        X^{(1)} = [-4, -2]   (7)
g^{(2)}(x) = 2x² - 1,      X^{(2)} = [-1, 1]    (8)
g^{(3)}(x) = (x - 3)³,     X^{(3)} = [2, 4]     (9)

The target network f_trgt consists of two fully-connected hidden layers using 10 neurons each. For illustrative purposes we use a full hypernetwork f_h that generates all 141 weights of f_trgt at once, also being a fully-connected network with two hidden layers of size 10. Hence, this is the only setup where we did not explore the possibility of a chunked hypernetwork. We use sigmoid activation functions in both networks. The task embedding dimension was set to 2.

We train each task for 4000 iterations using the Adam optimizer with a learning rate of 0.01 (and otherwise default PyTorch options) and a batch size of 32.

To test our regularizer in Fig. 2a we set β_output to 0.005, while it is set to 0 for the fine-tuning experiment in Fig. 2c.

For the multi-task learner in Fig. 2b we trained only the target network (no hypernetwork) for 12000 iterations with a learning rate of 0.05. Comparable performance could be obtained when training the task-conditioned hypernetwork in this multi-task regime (data not shown).

It is worth noting that the multi-task learner from Fig. 2b, which uses no hypernetwork, is only able to learn the tasks because we chose the input domains to be non-overlapping.
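The toy dataset described by Eqs. (7)-(9) can be generated with a few lines of NumPy; the sketch below is a direct transcription of the text.

```python
import numpy as np

TASKS = [
    (lambda x: x + 3.0,          (-4.0, -2.0)),  # g^(1) on X^(1), Eq. (7)
    (lambda x: 2.0 * x**2 - 1.0, (-1.0,  1.0)),  # g^(2) on X^(2), Eq. (8)
    (lambda x: (x - 3.0)**3,     ( 2.0,  4.0)),  # g^(3) on X^(3), Eq. (9)
]

def sample_task_data(task_id, n=100, sigma=0.05, seed=None):
    """Draw n noisy samples (x, y) for the given task."""
    rng = np.random.default_rng(seed)
    g, (lo, hi) = TASKS[task_id]
    x = rng.uniform(lo, hi, size=n)           # x ~ U(X^(t))
    y = g(x) + sigma * rng.standard_normal(n)  # additive Gaussian noise
    return x, y
```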
Permuted MNIST benchmark. For our experiments conducted on MNIST we replicated the experimental setup proposed by van de Ven & Tolias (2019) whenever applicable. We therefore use the same number of training iterations, the same or a lower number of weights in the hypernetwork than in the target network, the same learning rates and the same optimizer. For the replay model, i.e., the hypernetwork-empowered VAE, as well as for the standard classifier, we used 5000 training iterations per task and the learning rate is set to 0.0001 for the Adam optimizer (otherwise PyTorch default values). The batch size is set to 128 for the VAE, whereas the classifier is simultaneously trained on a batch of 128 samples of replayed data (evenly distributed over all past tasks) and a batch of 128 images from the currently available dataset. MNIST images are padded with zeros, which results in network inputs of size 32 × 32, again strictly following the implementation of the compared work. We experienced better performance when we condition our replay model on a task-specific input. We therefore construct for every task a specific input, namely a sample from a standard multivariate normal of dimension 100. In practice we found the dimension to be not important. This input stays constant throughout the experiment and is not learned. Note that we use the same hyperparameters for all learning scenarios, which is not true for the reported related work, since they tuned special hyperparameters for all scenarios and all methods.

• Details of hypernetwork for the VAE. We use one hypernetwork configuration to generate weights for all variational autoencoders used for our PermutedMNIST-10 experiments, namely a fully-connected chunked hypernetwork with 2 hidden layers of size 25 and 25 followed by an output size of 85,000. We use ELU nonlinearities in the hidden layers of the hypernetwork. The size of task embeddings e has been set to 24 and the size of chunk embeddings c to 8. The parameter β_output is 0.05. The number of weights in this hypernetwork is 2,211,907 (2,211,691 network weights + 216 task embedding weights). The corresponding target network (and therefore output of the chunked hypernetwork), as taken from related work, has 2,227,024 weights.

Figure A2: Additional experiments on the PermutedMNIST-100 benchmark. (a) Final test set classification accuracy on the t-th task after learning one hundred permutations (PermutedMNIST-100). All runs use exactly the same hyperparameter configuration except for varying values of β_output. The final accuracies are robust for a wide range of regularization strengths. If β_output is too weak, forgetting will occur. However, there is no severe disadvantage in choosing β_output too high (cf. (c)). A too high β_output simply shifts the attention of the optimizer away from the current task, leading to lower baseline accuracies when the training time is not increased. (b) Due to an increased number of output neurons, the target network for PermutedMNIST-100 has more weights than for PermutedMNIST-10 (this is only the case for CL1 and CL3). This plot shows that the performance drop is minor when choosing a hypernetwork with a comparable number of weights as the target network in CL2 (orange) compared to one that has a similar number of weights as the target network for CL1 in PermutedMNIST-100 (red). (c) Task-averaged test set accuracy after learning all tasks (labelled 'final', in red) and immediately after learning a task (labelled 'during', in purple) for the runs depicted in (a). For low values of β_output final accuracies are worse than immediate ones (forgetting occurs). If β_output is too high, baseline accuracies decrease since the optimizer puts less emphasis on the current task (note that the training time per task is not increased). Shaded areas in (a) and (b) denote STD, whereas error bars in (c) denote SEM (always across 5 random seeds).

• Details of the VAE for HNET+TIR. For this variational autoencoder, we use two fully-connected neural networks with layers of size 1000, 1000 for the encoder and 1000, 1000 for the decoder, and a latent space of dimension 100. This setup is again copied from work we compare against.

• Details of the VAE for HNET+R. For this variational autoencoder, we use two fully-connected neural networks with layers of size 400, 400 for the encoder and 400, 400 for the decoder (both 1000, 1000 in the related work) and a latent space of dimension 100. Here, we depart from related work by choosing a smaller architecture for the autoencoder. Note that we still use a hypernetwork with fewer trainable parameters than the target network (in this case the decoder) that is used in related work.

• Details of the hypernetwork for the target classifier in PermutedMNIST-10 (HNET+TIR & HNET+ENT). We use the same setup for the hypernetwork as used for the VAEs above, but since the target network is smaller we reduce the output of the hypernetwork to 78,000. We also adjust the parameter β_output to 0.01, consistent with our PermutedMNIST-100 experiments. The number of weights in this hypernetwork is therefore 2,029,931 (2,029,691 network weights + 240 task embedding weights). The corresponding target network (from related work) would have 2,126,100 weights for CL1 and CL3 and 2,036,010 for CL2 (only one output head).

• Details of the hypernetwork for the target classifier for PermutedMNIST-100. For these experiments we chose an architecture that worked well on the PermutedMNIST-10 benchmark and did not conduct any more search for new architectures. For PermutedMNIST-100, the reported results were obtained by using a chunked hypernetwork with 3 hidden layers of size 200, 250 and 350 (300 for CL2) and an output size of 7500 (6000 for CL2), such that we approximately match the corresponding target network size for CL1/CL2/CL3. Interestingly, Fig. A2b shows that even if we don't adjust the number of hypernetwork weights to the increased number of target network weights, the superiority of our method is evident. Aside from this, the plots in Fig. 3 have been generated using the PermutedMNIST-10 HNET+TIR setup (note that this includes the conditions set by related work for PermutedMNIST-10, e.g., target network sizes, the number of training iterations, learning rates, etc.).

• Details of the VAE and the hypernetwork for the VAE in PermutedMNIST-100 for CL2/CL3. We use a very similar setup to the VAE and its hypernetwork used in HNET+TIR for PermutedMNIST-10 as described above.
We only applied the following changes: fully-connected hypernetwork with one hidden layer of size 100; chunk embedding sizes are set to 12; task embedding sizes are set to 128; and the hidden layer sizes of the VAE's generator are 400, 600. Also, we increased the regularization strength to β_output = 0.1 for the VAE's generator hypernetwork.

• Details of the target classifier for HNET+TIR & HNET+ENT. For this classifier, we use the same setup as in the study we compare to (van de Ven & Tolias, 2019), i.e., a fully-connected network with layers of size 1000, 1000. Note that if the classifier is used as a task inference model, it is trained on replay data and the corresponding hard targets, i.e., the argmax of the soft targets.

Below, we report the specifications for our automatic hyperparameter search (if not noted otherwise, these specifications apply for the split MNIST and split CIFAR experiments as well):

• Hidden layer sizes of the hypernetwork: (no hidden layer), "5,5", "10,10", "25,25", "50,50", "100,100", "10", "50", "100"
• Output size of the hypernetwork: fitted such that we obtain fewer parameters than the target network which we compare against
• Embedding sizes (for e and c): 8, 12, 24, 36, 62, 96, 128
• β_output: 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0
• Hypernetwork transfer functions: linear, ReLU, ELU, Leaky-ReLU

Note that only a random subset of all possible combinations of hyperparameters has been explored.

After we found a configuration with promising accuracies and a similar number of weights compared to the original target network, we manually fine-tuned the architecture to increase/decrease the number of hypernetwork weights to approximately match the number of target network weights.

The choice of hypernetwork architecture seems to have a strong influence on the performance. It might be worth exploring alternatives, e.g., an architecture inspired by those used in typical generative models. We note that in addition to the above specifications we explored some hyperparameter configurations manually to gain a better understanding of our method.

Split MNIST benchmark. Again, whenever applicable we reproduce the setup from van de Ven & Tolias (2019). The only differences to the PermutedMNIST-10 experiments are the learning rate (0.001) and the number of training iterations (set to 2000).

• Details of hypernetwork for the VAE. We use one hypernetwork configuration to generate weights for all variational autoencoders used for our split MNIST experiments, namely a fully-connected chunked hypernetwork with 2 hidden layers of size 10, 10 followed by an output size of 50,000. We use ELU nonlinearities in the hidden layers of the hypernetwork. The size of task embeddings e has been set to 96 and the size of chunk embeddings c to 96. The parameter β_output is 0.01 for HNET+R and 0.05 for HNET+TIR. The number of weights in this hypernetwork is 553,576 (553,192 network weights + 384 task embedding weights). The corresponding target network (and therefore output of the chunked hypernetwork), as taken from related work, has 555,184 weights. For a qualitative analysis of the replay data of this VAE (learned class-incrementally), see Fig. A8.

• Details of the VAE for HNET+TIR. For this variational autoencoder, we use two fully-connected neural networks with layers of size 400, 400 for the encoder and 50, 150 for the decoder (both 400, 400 in the related work) and a latent space of dimension 100.

• Details of the VAE for HNET+R.
For this variational autoencoder, we use two fully-connected neural networks with layers of size 400, 400 for the encoder and 250, 350 for the decoder (both 400, 400 in the related work) and a latent space of dimension 100.

• Details of the hypernetwork for the target classifier in split MNIST (HNET+TIR & HNET+ENT). We use the same setup for the hypernetwork as used for the VAE above, but since the target network is smaller we reduce the output of the hypernetwork to 42,000. We also adjust β_output to 0.01, although this parameter seems to not have a strong effect on the performance. The number of weights in this hypernetwork is therefore 465,672 (465,192 network weights + 480 task embedding weights). The corresponding target network (from related work) would have 478,410 weights for CL1 and CL3 and 475,202 for CL2 (only one output head).

• Details of the target classifier for HNET+TIR & HNET+ENT. For this classifier, we again use the same setup as in the study we compare to (van de Ven & Tolias, 2019), i.e., a fully-connected neural network with layers of size 400, 400. Note that if the classifier is used as a task inference model, it is trained on replay data and the corresponding hard targets, i.e., the argmax of the soft targets.

Split CIFAR-10/100 benchmark. For these experiments, we used a ResNet-32 network (He et al., 2016) as the target network and again produce the weights of this target network with a hypernetwork in a compressive manner. The hypernetwork in this experiment directly maps from the joint task and chunk embedding space (both of dimension 32) to the output space of the hypernetwork, which is of dimension 7,000. This hypernetwork has 457,336 parameters (457,144 network weights + 192 task embedding weights). The corresponding target network, the ResNet-32, has 468,540 weights (including batch-norm weights). We train for 200 epochs per task using the Adam optimizer with an initial learning rate of 0.001 (and otherwise default PyTorch values) and a batch size of 32. In addition, we apply the two learning rate schedules suggested in the Keras CIFAR-10 example2.

2See https://keras.io/examples/cifar10_resnet/.

Due to the use of batch normalization, we have to find an appropriate way to handle the running statistics which are estimated during training. Note that these are not parameters which are trained through backpropagation. There are different ways in which the running statistics could be treated:

1. One could ignore the running statistics altogether and simply compute statistics based on the current batch during evaluation.

2. The statistics could be part of the hypernetwork output. Therefore, one would have to manipulate the target hypernetwork output of the previous task, such that the estimated running statistics of the previous task will be distilled into the hypernetwork.

3. The running statistics can simply be checkpointed and stored after every task. Note that this method leads to a linear memory growth in the number of tasks that scales with the number of units in the target network.

For simplicity, we chose the last option and simply checkpointed the running statistics after every task.

For the fine-tuning results in Fig. 5 we just continually updated the running statistics (thus, we applied no checkpointing).
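Option 3 (the one we chose) can be implemented by storing the running statistics of every batch-norm layer after each task and restoring them before evaluation; a minimal sketch, with `store` as a plain dictionary:

```python
import torch

def checkpoint_bn_stats(net, task_id, store):
    """Save BatchNorm running statistics after finishing a task (sketch)."""
    store[task_id] = {
        name: (m.running_mean.clone(), m.running_var.clone())
        for name, m in net.named_modules()
        if isinstance(m, (torch.nn.BatchNorm1d, torch.nn.BatchNorm2d))}

def restore_bn_stats(net, task_id, store):
    """Load the statistics saved for `task_id` before evaluation."""
    for name, m in net.named_modules():
        if name in store[task_id]:
            mean, var = store[task_id][name]
            m.running_mean.copy_(mean)
            m.running_var.copy_(var)
```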
" }, { "heading": "D ADDITIONAL EXPERIMENTS AND NOTES", "text": "Split CIFAR-10/100 benchmark using the model of Zenke et al. (2017). We re-run the split CIFAR-10/100 experiment reported in the main text while reproducing the setup from Zenke et al. (2017). Our overall classification performance is comparable to synaptic intelligence, which achieves 73.85% task-averaged test set accuracy, while our method reaches 71.29% ± 0.32%, with initial baseline performance being slightly worse in our approach, Fig. A3.

Figure A3: Replication of the split CIFAR-10/100 experiment of Zenke et al. (2017). Test set accuracies on the entire CIFAR-10 dataset and subsequent CIFAR-100 splits. Both task-conditioned hypernetworks (hnet, in red) and synaptic intelligence (SI, in green) transfer information forward and are protected from catastrophic forgetting. The performance of the two methods is comparable. For completeness, we report our test set accuracies achieved immediately after training (hnet-during, in blue), when training from scratch (purple), and with our regularizer turned off (fine-tuning, yellow).

To obtain our results, we use a hypernetwork with 3 hidden layers of sizes 100, 150, 200 and output size 5500. The size of task embeddings e has been set to 48 and the size of chunk embeddings c to 80. The parameter β_output is 0.01 and the learning rate is set to 0.0001.

The number of weights in this hypernetwork is 1,182,678 (1,182,390 network weights + 288 task embedding weights). The corresponding target network would have 1,276,508 weights.

In addition to the above specified hyperparameter search configuration, we also included the following learning rates: 0.0001, 0.0005, 0.001, and manually tuned some architectural parameters.

Figure A4: Context-free inference using hypernetwork-protected replay (HNET+TIR) on long task sequences. Final test set classification accuracy on the t-th task after learning one hundred permutations of the MNIST dataset (PermutedMNIST-100) for the CL2 (a) and CL3 (b) scenarios, where task identity is not explicitly provided to the system. As before, the number of hypernetwork parameters is not larger than that of the related work we compare to. (a) HNET+TIR displays almost perfect memory retention. We used a stochastic regularizer (cf. the Appendix D note below) which evaluates the output regularizer in Eq. 2 only for a random subset of previous tasks (here, twenty). (b) HNET+TIR is the only method that is capable of learning PermutedMNIST-100 in this learning scenario. For this benchmark, the input data domains are easily separable and the task inference system achieves virtually perfect (~100%) task inference accuracy throughout, even for this long experiment. HNET+TIR uses a divide-and-conquer strategy: if task inference is done right, CL3 becomes just CL1. Furthermore, once task identity is predicted, the final softmax computation only needs to consider the corresponding task outputs in isolation (here, of size 10). Curiously, for HNET+TIR, CL2 can be harder than CL3, as the single output layer (of size 10, shared by all tasks) introduces a capacity bottleneck. The related methods, on the other hand, have to consider the entire output layer (here, of size 10*100) at once, which is known to be harder to train sequentially.
This leads to overwhelming error rates on long problems such as PermutedMNIST-100. Shaded areas in (a) and (b) denote STD (n = 5).

Upper bound for replay models. We obtain an upper bound for the replay-based experiments (Table 2) by sequentially training a classifier, in the same way as for HNET+R and DGR, now using true input data from past tasks and a synthetic, self-generated target. This corresponds to the rehearsal thought experiment delineated in Sect. 1.

Quantification of forgetting in our continual learning experiments. In order to quantify forgetting of our approach, we compare the test set accuracy of every single task directly after training on it with its test set accuracy after training on all tasks. Only CL1 is shown, since the other scenarios, i.e., CL2 and CL3, depend on task inference, which is only measurable after training on all tasks.

Robustness of the β_output choice. In Fig. A2a and Fig. A2c we provide additional experiments for our method on PermutedMNIST-100. We show that our method performs comparably for a wide range of β_output values (including the one depicted in Fig. 3a).

Figure A5: Additional experiments with online EWC and fine-tuning on the PermutedMNIST-100 benchmark. (a) Final test set classification accuracy on the t-th task after learning one hundred permutations (PermutedMNIST-100) using the online EWC algorithm (Schwarz et al., 2018) to prevent forgetting. All runs use exactly the same hyperparameter configuration except for varying values of the regularization strength λ. Our method (hnet, in red) and the online EWC run (λ = 100, in orange) from Fig. 3a are shown for comparison. It can be seen that even when tuning the regularization strength one cannot attain similar performance as with our approach (cf. Fig. A2a). Too strong regularization prevents the learning of new tasks, whereas too weak regularization doesn't prevent forgetting. However, a middle ground (e.g., using λ = 100) does not reach acceptable per-task performances. (b) Task-averaged test set accuracy after learning all tasks (labelled 'final', in red) and immediately after learning a task (labelled 'during', in purple) for a range of regularization strengths λ when using the online EWC algorithm. Results are complementary to those shown in (a). (c) Final test set classification accuracy on the t-th task after learning one hundred permutations (PermutedMNIST-100) when applying fine-tuning to the hypernetwork (labelled 'hnet fine-tuning', in blue) or the target network (labelled 'fine-tuning', in green). Our method (hnet, in red) from Fig. 3a is shown for comparison. It can be seen that without protection the hypernetwork suffers much more severely from catastrophic forgetting than when training a target network only. (d) This plot is complementary to (c). See the description of (b) for an explanation of the labels.
Shaded areas in (a) and (c) denote STD, whereas error bars in (b) and (d) denote SEM (always across 5 random seeds).

Figure A6: Hyperparameter search for online EWC and SI on the PermutedMNIST-100 benchmark. We conduct the same hyperparameter search as performed in van de Ven & Tolias (2018). We did not compute different random seeds for this search. (a) Hyperparameter search on the regularization strength c for the SI algorithm. Accuracies during and after the experiment are shown. (b) Hyperparameter search for the parameters λ and γ of the online EWC algorithm. Only accuracies after the experiment are shown.

Varying the regularization strength for online EWC. The performance of online EWC in Fig. 3a is closest to our method (labelled hnet, in red) compared to the other methods. Therefore, we take a closer look at this method and show that further adjustments of the regularization strength λ do not lead to better performance. Results for a wide range of regularization strengths can be seen in Fig. A5a and Fig. A5b. As shown, online EWC cannot attain a performance comparable to our method when tuning the regularization strength only.

The impact of catastrophic forgetting on the hypernetwork and target network. We have successfully shown that by shifting the continual learning problem from the target network to the hypernetwork we can overcome forgetting thanks to the introduction of our regularizer in Eq. 2. We motivated this success by claiming that it is an inherently simpler task to remember a few input-output mappings in the hypernetwork (namely, the weight realizations of each task) rather than the massive number of input-output mappings \{(x^{(t,i)}, y^{(t,i)})\}_{i=1}^{n_t} associated with the remembering of each task t by the target network.

Further evidence for this claim is provided by the fine-tuning experiments in Fig. A5c and Fig. A5d. Fine-tuning refers to sequentially learning a neural network on a set of tasks without any mechanism in place to prevent forgetting. It is shown that fine-tuning a target network (no hypernetwork in this setup) has no catastrophic influence on the performance of previous tasks. Instead, there is a graceful decline in performance. On the contrary, catastrophic forgetting has an almost immediate effect when training a hypernetwork without protection (i.e., training our method with β_output = 0). The performance quickly drops to chance level, suggesting that if we weren't solving a simpler task, then preventing forgetting in the hypernetwork rather than in the target network might not be beneficial.

Chunking and hypernetwork architecture sensitivity. In this note we investigate the performance sensitivity for different (fully-connected) hypernetwork architectures on split MNIST and PermutedMNIST-10, Fig. A7. We trained thousands of randomly drawn architectures from the following grid (the same training hyperparameters as reported for CL1, see Appendix C, were used throughout): possible number of hidden layers 1, 2; possible layer size 5, 10, 20, ..., 90, 100; possible chunk embedding size 8, 12, 24, 56, 96; and hypernetwork output size in {10, 50, 100, 200, 300, 400, 500, 750, 1k, 2k, ..., 9k, 10k, 20k, 30k, 40k}.
Since we realize compression through chunking, we sort our hypernetwork architectures by compression ratio, and consider only architectures with small compression ratios.

Performance on split MNIST stays in the high 90s (in percent) even when reaching compression ratios close to 1%, whereas for PermutedMNIST-10 accuracies decline in a non-linear fashion. For both experiments, the choice of the chunked hypernetwork architecture is robust and high-performing even in the compressive regime. Note that the discussed compression ratio compares the number of trainable parameters in the hypernetwork to its output size, i.e., the parameters of the target network.

Figure A7: Robustness to hypernetwork architecture choice for a large range of compression ratios. Performance vs. compression for random hypernetwork architecture choices, for split MNIST and PermutedMNIST-10 (mean ± STD, n = 500 architectures per bin). Every model was trained with the same setup (including all hyperparameters) used to obtain the results reported in Table 1 (CL1). We considered architectures yielding compression ratios |Θ_h ∪ {e^{(t)}}| / |Θ_trgt| ∈ [0.01, 2.0]. (a) Split MNIST performance for CL1 stays high even for compression ratios ≈ 1%. (b) PermutedMNIST-10 accuracies degrade gracefully when compression ratios decline to 1%. Notably, for both benchmarks, performance remained stable across a large pool of hypernetwork configurations.

Small capacity target networks for the permuted MNIST benchmark. Swaroop et al. (2018) argue for using only small capacity target networks for this benchmark. Specifically, they propose to use hidden layer sizes [100, 100]. Again, we replicated the setup of van de Ven & Tolias (2019) wherever applicable, except for the now smaller hidden layer sizes of [100, 100] in the target network. We use a fully-connected chunked hypernetwork with chunk embeddings c of size 12, hidden layers of size 100, 75, 50 and an output size of 2000, resulting in a total number of hypernetwork weights of 122,459 (including 10 × 64 task embedding weights), compared to 122,700 weights that are generated for the target network. β_output is set to 0.05. The experiments performed here correspond to CL1.

We achieve an average accuracy of 93.91 ± 0.04 for PermutedMNIST-10 after having trained on all tasks. In general, we saw that the hypernetwork training can benefit from noise injection. For instance, when training with soft targets (i.e., we modified the 1-hot target to be 0.95 for the correct class and (1 − 0.95)/(#classes − 1) for the remaining classes), we could improve the average accuracy to 94.24 ± 0.03.

We also checked the challenging PermutedMNIST-50 benchmark with this small target network, as previously investigated by Ritter et al. (2018). Therefore, we slightly adapted the above setup by using a hypernetwork with hidden layer sizes [100, 100] and a regularization strength of β_output = 0.1. This hypernetwork is slightly bigger than the corresponding target network: |Θ_h ∪ {e^{(t)}}| / |Θ_trgt| = 1.37. With this configuration, we obtain an average accuracy of 90.91 ± 0.07.
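The compression ratio quoted above and in Fig. A7 can be computed directly from the parameter counts; a small illustrative helper:

```python
def compression_ratio(hnet, task_embeddings, n_target_weights):
    """|Theta_h ∪ {e^(t)}| / |Theta_trgt|, cf. Fig. A7 (sketch)."""
    n_hnet = sum(p.numel() for p in hnet.parameters())
    n_embs = sum(e.numel() for e in task_embeddings)
    return (n_hnet + n_embs) / n_target_weights
```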
(2018) proposed the hard attention to the task (HAT) algorithm, a strong CL1 method which relies on learning a per-task, per-neuron mask. Since the masks are pushed to become binary, HAT can be viewed as an algorithm for allocating subnetworks (or modules) within the target network, which become specialized to solve a given task. Thus, the method is similar to ours in the sense that the computation of the target network is task-dependent, but different in spirit, as it relies on network modularity.

In HAT, task identity is assumed to be provided, so that the appropriate mask can be picked during inference (scenario CL1). HAT requires explicitly storing a neural mask for each task, whose size scales with the number of neurons in the target network. In contrast, our method allows solving tasks in a compressive regime. Thanks to the hypernetwork, whose input dimension can be freely chosen, only a low-dimensional embedding needs to be stored per task (cf. Fig. 4), and through chunking it is possible to learn to parameterize large target models with a small number of plastic weights (cf. Fig. 3b).

Here, we compare our task-conditioned hypernetworks to HAT on the permuted MNIST benchmarks (T = 10 and T = 100), cf. Table 6. For large target networks, both methods perform strongly, reaching comparable final task-averaged accuracies. For small target network sizes, task-conditioned hypernetworks perform better, the difference becoming more apparent on PermutedMNIST-100.

We note that the two algorithms use different training setups. In particular, HAT uses 200 epochs (batch size set to 64) and applies a learning rate scheduler that acts on a held-out validation set. Furthermore, HAT uses differently tuned forgetting hyperparameters when target network sizes change. This is important to control for the target network capacity used per task and assumes knowledge of the (number of) tasks at hand. Using the code freely made available by the authors, we were able to rerun HAT for our target network size and longer task sequences. Here, we used the setup provided by the authors' code for HAT-Large for PermutedMNIST-10 and PermutedMNIST-100. To draw a fairer comparison, when changing our usual target network size to match the ones reported in Serra et al. (2018), we trained for 50 epochs per task (no further training loss improvement was observed afterwards) and also changed the batch size to 64, but did not change our training scheme otherwise; in particular, we did not use a learning rate scheduler.

Efficient PermutedMNIST-250 experiments with a stochastic regularizer on subsets of previous tasks. An apparent drawback of Eq. 2 is that the runtime complexity of the regularizer grows linearly with the number of tasks. To overcome this obstacle, we show here that it is sufficient to consider a small random subset of previous tasks.

In particular, we consider the PermutedMNIST-250 benchmark (250 tasks) on CL1 using the hyperparameter setup from our PermutedMNIST-100 experiments, except for a hypernetwork output size of 12000 (to adjust to the bigger multi-head target network) and a regularization strength βoutput = 0.1. Per training iteration, we choose at most 32 random previous tasks to estimate the regularizer from Eq. 2. With this setup, we achieve a final average accuracy of 94.19 ± 0.16 (compared to an average during accuracy, i.e., the accuracy achieved right after training on the corresponding task, of 95.54 ± 0.05). All results are across 5 random seeds. (A sketch of this subset-sampled regularizer is given below.)
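A minimal sketch of this subset-sampled version of the regularizer from Eq. 2 follows; the function and argument names (`hnet`, `task_embs`, `stored_outputs`) are illustrative assumptions, not the original implementation.

```python
import random
import torch

def subset_output_regularizer(hnet, task_embs, stored_outputs, max_tasks=32):
    """Sketch: evaluate the hypernetwork output regularizer (Eq. 2) on at
    most `max_tasks` randomly chosen previous tasks instead of all of them.
    `stored_outputs[t]` holds the hypernetwork output for task t, recorded
    before training on the current task started."""
    prev_tasks = list(stored_outputs.keys())
    sampled = random.sample(prev_tasks, k=min(max_tasks, len(prev_tasks)))
    reg = torch.zeros(())
    for t in sampled:
        # Penalize deviations of the current hypernetwork output from the
        # stored output for task t.
        reg = reg + ((hnet(task_embs[t]) - stored_outputs[t]) ** 2).sum()
    return reg / max(len(sampled), 1)

# Schematic training step:
#   loss = task_loss + beta_output * subset_output_regularizer(...)
```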
These results indicate that a full evaluation of the regularizer at every training iteration is not necessary, so that the linear runtime complexity can be reduced to a constant one.

Combining hypernetwork output regularizers with weight importance. Our hypernetwork regularizer pulls uniformly in every direction, but it is possible to introduce anisotropy using an EWC-like approach (Kirkpatrick et al., 2017). Instead of weighting parameters, hypernetwork outputs can be weighted. This would allow for a more flexible regularizer, at the expense of additional storage.

Task inference through predictive entropy (HNET+ENT). In this setup, we rely on the capability of neural networks to separate in- from out-of-distribution data. Although this is a difficult research problem on its own, for continual learning we face a potentially simpler problem, namely to detect and distinguish between the tasks our network was trained on. We here take a first minimal step exploiting this insight and compare the predictive uncertainty, as quantified by the output distribution entropy, of the different models given an input. Hence, at test time we iterate over all embeddings, and therefore over all the models our metamodel can generate, compare the predictive entropies, and make a prediction with the model of lowest entropy. For future work, we wish to explore the possibility of improving our predictive uncertainty by taking parameter uncertainty into account through the generation of approximate, task-specific weight posterior distributions.

Learning without task boundaries with hypernetworks. An interesting problem we did not address in this paper is that of learning without task boundaries. For most CL methods, it is crucial to know when learning one task ends and training of a new task begins. The methods introduced in this paper are no exception. However, this is not necessarily a realistic or desirable assumption; often, one desires to learn in an online fashion without task boundary supervision, which is particularly relevant for reinforcement learning scenarios where incoming data distributions are frequently subject to change (Rolnick et al., 2018). At least for discrete changes, with our hypernetwork setup this boils down to a detection mechanism that triggers saving the current model, i.e., adding the embedding e^(T) to the collection of embeddings {e^(t)}. We leave the integration of our model with such a hypernetwork-specific switching detection mechanism for future work. Interestingly, our task-conditioned hypernetworks would fit very well with methods that rely on fast remembering (a recently proposed approach which appeared in parallel to our paper, He et al., 2019)." }, { "heading": "E UNIVERSAL FUNCTION APPROXIMATION WITH CHUNKED NEURAL NETWORKS", "text": "Proposition 1. Given a compact subset K ⊂ R^m and a continuous function on K, i.e., f ∈ C(K), more specifically f : K → R^n with n = r · N_C. Then, ∀ε > 0, there exists a chunked neural network f_ch : R^m × C → R^r with parameters Θh, a discrete set C = {c_1, . . . , c_{N_C}} and c_i ∈ R^s, such that

|f̄_ch(x) − f(x)| < ε, ∀x ∈ K, where f̄_ch(x) = [f_ch(x, c_1), . . . , f_ch(x, c_{N_C})].

For the following proof, we assume the existence of one form of the universal approximation theorem (UAT) for neural networks (Leshno & Schocken, 1993; Hanin, 2017). Note that we will not restrict ourselves to a specific architecture, nonlinearity, input or output dimension. Any neural network that is proven to be a universal function approximator is sufficient.
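To make the construction used in the proof below tangible, here is a toy sketch (under the assumption of a simple one-hidden-layer network) that assembles f̄_ch(x) by evaluating a single chunked network f_ch over the discrete embedding set C and concatenating the outputs. It only illustrates the bookkeeping of the definition; it is not part of the proof.

```python
import numpy as np

def f_ch(x, c, W1, b1, W2, b2):
    """A single network evaluated on the concatenated input (x, c)."""
    h = np.tanh(W1 @ np.concatenate([x, c]) + b1)
    return W2 @ h + b2  # output in R^r

def f_bar_ch(x, C, params):
    """f_bar_ch(x) = [f_ch(x, c_1), ..., f_ch(x, c_NC)] in R^(r * NC)."""
    return np.concatenate([f_ch(x, c, *params) for c in C])

rng = np.random.default_rng(0)
m, s, r, NC, hidden = 3, 2, 4, 5, 16      # toy dimensions
params = (rng.normal(size=(hidden, m + s)), rng.normal(size=hidden),
          rng.normal(size=(r, hidden)), rng.normal(size=r))
C = [rng.normal(size=s) for _ in range(NC)]   # the discrete embedding set
y = f_bar_ch(rng.normal(size=m), C, params)   # y lives in R^(r*NC) = R^20
```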
Proof. Given any ε > 0, we assume the existence of a neural network f_h : R^m → R^n that approximates the function f on K:

|f_h(x) − f(x)| < ε/2, ∀x ∈ K. (10)

We will show in the following that we can always find a chunked neural network f_ch : R^m × C → R^r approximating the neural network f_h on K, and conclude with the triangle inequality

|f̄_ch(x) − f(x)| ≤ |f̄_ch(x) − f_h(x)| + |f_h(x) − f(x)| < ε, ∀x ∈ K. (11)

Indeed, given the neural network f_h such that (10) holds true, we construct

f̂_h(x, c) = f_h^{c_i}(x) if c = c_i, and 0 otherwise, (12)

by splitting the full neural network f_h(x) = [f_h^{c_1}(x), f_h^{c_2}(x), . . . , f_h^{c_{N_C}}(x)], with f̂_h : R^m × C → R^r.

Note that f̂_h is continuous on R^m × C with the product topology composed of the topology on R^m induced by the metric |· − ·| : R^m × R^m → R and the discrete topology on C. Now we can make use of the UAT again: given the compact K ⊂ R^m, the discrete set C = {c_1, . . . , c_{N_C}} and any ε/(2N_C) > 0, there exists a neural network function f_ch : R^m × R^s → R^r such that

|f_ch(x, c) − f̂_h(x, c)| < ε/(2N_C), ∀x ∈ K, ∀c ∈ C. (13)

It follows that

Σ_i |f_ch(x, c_i) − f̂_h(x, c_i)| < Σ_i ε/(2N_C) = ε/2, ∀x ∈ K, (14)

which is equivalent to

|[f_ch(x, c_1), . . . , f_ch(x, c_{N_C})] − [f̂_h(x, c_1), . . . , f̂_h(x, c_{N_C})]| = |f̄_ch(x) − f_h(x)| < ε/2, ∀x ∈ K. (15)

We have shown (11), which concludes the proof.

Note that we did not specify the number of chunks N_C, the chunk output dimension r, or the dimension s of the embeddings c_i. Despite this theoretical result, we emphasize that we are not aware of a constructive procedure to define a chunked hypernetwork that comes with a useful bound on the achievable performance and/or compression rate. We evaluate such aspects empirically in our experimental section.

F QUALITATIVE ANALYSES OF HYPERNETWORK-PROTECTED REPLAY MODELS

Figure A8: Image samples from hypernetwork-protected replay models. The left column of each subfigure displays images directly after training the replay model on the corresponding class, compared to the right column(s), where samples are obtained after training on eights and nines, i.e., all classes. (a) Image samples from a class-incrementally trained VAE. Here, the exact same training configuration used to obtain results for split MNIST with the HNET+R setup is applied, see Appendix C. (b) Image samples from a class-incrementally trained GAN. For the training configurations, see Appendix B. In both cases the weights of the generative part, i.e., the decoder or the generator, are produced and protected by a hypernetwork." } ]
2,020
CONTINUAL LEARNING WITH HYPERNETWORKS
SP:139a4db0a387a52eebf873a8f37f974492aa0d2f
[ "This paper introduces IMPACT which is a distributed RL algorithm that shortens training time of RL systems while maintaining/ improving the sample efficiency. It is built on top of the famous PPO algorithm (https://arxiv.org/abs/1707.06347). The authors break down the novel component of their model into three categories: target network, circular buffer, and importance sampling. They evaluate the effectiveness of each component through different experiments. ", "Reinforcement learning (RL) training speed is broadly evaluated on two dimensions: sample efficiency (the number of environment interactions required) and wall-clock time. Improved wall-clock training time has been achieved through distributed actors and learners, but often at the expense of sample efficiency. IMPACT repurposes successful concepts from deep RL - the target network, importance sampling and a replay buffer to demonstrate improvements on both axes in on three continuous environments and three games from the Atari Learning Environment." ]
The practical usage of reinforcement learning agents is often bottlenecked by the duration of training time. To accelerate training, practitioners often turn to distributed reinforcement learning architectures to parallelize and accelerate the training process. However, modern methods for scalable reinforcement learning (RL) often trade off between the throughput of samples that an RL agent can learn from (sample throughput) and the quality of learning from each sample (sample efficiency). In these scalable RL architectures, as one increases sample throughput (i.e., increasing parallelization in IMPALA (Espeholt et al., 2018)), sample efficiency drops significantly. To address this, we propose a new distributed reinforcement learning algorithm, IMPACT. IMPACT extends IMPALA with three changes: a target network for stabilizing the surrogate objective, a circular buffer, and truncated importance sampling. In discrete action-space environments, we show that IMPACT attains higher reward and, simultaneously, achieves up to a 30% decrease in training wall-time compared to IMPALA. For continuous control environments, IMPACT trains faster than existing scalable agents while preserving the sample efficiency of synchronous PPO.
[ { "affiliations": [], "name": "Michael Luo" }, { "affiliations": [], "name": "Jiahao Yao" } ]
[ { "authors": [ "Joshua Achiam" ], "title": "Openai Spinning Up. https://spinningup.openai.com/en/latest/ spinningup/bench.html, November 2018", "venue": null, "year": 2018 }, { "authors": [ "Itai Caspi", "Gal Leibovich", "Gal Novik", "Shadi Endrawis" ], "title": "Reinforcement Learning Coach, December 2017", "venue": "URL https://doi.org/10.5281/zenodo.1134899", "year": 2017 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Volodymir Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning" ], "title": "IMPALA: Scalable Distributed Deep-RL with Importance Weighted Acto-Learner Architectures", "venue": "arXiv preprint arXiv:1802.01561,", "year": 2018 }, { "authors": [ "Linxi Fan", "Yuke Zhu", "Jiren Zhu", "Zihua Liu", "Orien Zeng", "Anchit Gupta", "Joan Creus-Costa", "Silvio Savarese", "Li Fei-Fei" ], "title": "SURREAL: Open-Source Reinforcement Learning Framework and Robot Manipulation Benchmark", "venue": "In Conference on Robot Learning,", "year": 2018 }, { "authors": [ "Shixiang Gu", "Timothy Lillicrap", "Zoubin Ghahramani", "Richard E Turner", "Sergey Levine" ], "title": "Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic", "venue": "arXiv preprint arXiv:1611.02247,", "year": 2016 }, { "authors": [ "Seungyul Han", "Youngchul Sung" ], "title": "AMBER: Adaptive Multi-Batch Experience Replay for Continuous Action Control", "venue": "arXiv preprint arXiv:1710.04423,", "year": 2017 }, { "authors": [ "Seungyul Han", "Youngchul Sung" ], "title": "Dimension-Wise Importance Sampling Weight Clipping for Sample-Efficient Reinforcement Learning", "venue": "arXiv preprint arXiv:1905.02363,", "year": 2019 }, { "authors": [ "Dan Horgan", "John Quan", "David Budden", "Gabriel Barth-Maron", "Matteo Hessel", "Hado Van Hasselt", "David Silver" ], "title": "Distributed Prioritized Experience Replay", "venue": "arXiv preprint arXiv:1803.00933,", "year": 2018 }, { "authors": [ "Tang Jie", "Pieter Abbeel" ], "title": "On a Connection between Importance Sampling and the Likelihood Ratio Policy Gradient", "venue": null, "year": 2010 }, { "authors": [ "Eric Liang", "Richard Liaw", "Robert Nishihara", "Philipp Moritz", "Roy Fox", "Ken Goldberg", "Joseph E. Gonzalez", "Michael I. 
Jordan", "Ion Stoica" ], "title": "RLlib: Abstractions for Distributed Reinforcement Learning", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous Control with Deep Reinforcement Learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous Methods for Deep Reinforcement Learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Arun Nair", "Praveen Srinivasan", "Sam Blackwell", "Cagdas Alcicek", "Rory Fearon", "Alessandro De Maria", "Vedavyas Panneershelvam", "Mustafa Suleyman", "Charles Beattie", "Stig Petersen" ], "title": "Massively Parallel Methods for Deep Reinforcement Learning", "venue": "arXiv preprint arXiv:1507.04296,", "year": 2015 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust Region Policy Optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "High-Dimensional Continuous Control using Generalized Advantage Estimation", "venue": "arXiv preprint arXiv:1506.02438,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal Policy Optimization Algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Richard S Sutton", "David A McAllester", "Satinder P Singh", "Yishay Mansour" ], "title": "Policy Gradient Methods for Reinforcement Learning with Function Approximation", "venue": "In Advances in neural information processing systems,", "year": 2000 } ]
[ { "heading": "1 INTRODUCTION", "text": "Proximal Policy Optimization (Schulman et al., 2017) is one of the most sample-efficient on-policy algorithms. However, it relies on a synchronous architecture for collecting experiences, which is closely tied to its trust region optimization objective. Other architectures such as IMPALA can achieve much higher throughputs due to the asynchronous collection of samples from workers. Yet, IMPALA suffers from reduced sample efficiency since it cannot safely take multiple SGD steps per batch as PPO can. The new agent, Importance Weighted Asynchronous Architectures with Clipped Target Networks (IMPACT), mitigates this inherent mismatch. Not only is the algorithm highly sample efficient, it can learn quickly, training 30 percent faster than IMPALA. At the same time, we propose a novel method to stabilize agents in distributed asynchronous setups and, through our ablation studies, show how the agent can learn in both a time and sample efficient manner.\nIn our paper, we show that the algorithm IMPACT realizes greater gains by striking the balance between high sample throughput and sample efficiency. In our experiments, we demonstrate in the experiments that IMPACT exceeds state-of-the-art agents in training time (with same hardware) while maintaining similar sample efficiency with PPO’s. The contributions of this paper are as follows:\n1. We show that when collecting experiences asynchronously, introducing a target network allows for a stabilized surrogate objective and multiple SGD steps per batch (Section 3.1).\n2. We show that using a circular buffer for storing asynchronously collected experiences allows for smooth trade-off between real-time performance and sample efficiency (Section 3.2).\n3. We show that IMPACT, when evaluated using identical hardware and neural network models, improves both in real-time and timestep efficiency over both synchronous PPO and IMPALA (Section 4)." }, { "heading": "2 BACKGROUND", "text": "Reinforcement Learning assumes a Markov Decision Process (MDP) setup defined by the tuple (S,A, p, γ, r) where S and A represent the state and action space, γ ∈ [0, 1] is the discount factor, and p : S ×A× S → R and R : S ×A→ R are the transition dynamics and reward function that models an environment.\nLet π(at|st) : S ×A→ [0, 1] denote a stochastic policy mapping that returns an action distribution given state st ∈ S. Rolling out policy π(at|st) in the environment is equivalent to sampling a trajectory τ ∼ P(τ ), where τ := (s0, a0, ...., aT−1, sT , aT ). We can compactly define state and state-action marginals of the trajectory distribution pπ(st) and pπ(st, at) induced by the policy π(at|st).The goal for reinforcement learning aims to maximize the following objective: J(θ) = E(st,at)∼pπ [ ∑T t=0 γ tR(st, at)].\nWhen θ parameterizes π(at|st), the policy is updated according to the Policy Gradient Theorem (Sutton et al., 2000):\n∇θJ(θ) = E(st,at)∼pπ(·) [ ∇θ log πθ(at|st)Âπθ (st, at) ] ,\nwhere Âπθ (st, at) is an estimator of the advantage function. The advantage estimator is usually defined as the 1-step TD error, Âπθ (st, at) = r(st, at) + γV̂ (st+1) − V̂ (st), where V̂ (st) is an estimation of the value function. Policy gradients, however, suffer from high variance and large update-step sizes, oftentimes leading to sudden drops in performance." 
}, { "heading": "2.1 DISTRIBUTED PPO", "text": "Per iteration, Proximal Policy Optimization (PPO) optimizes policy πθ from target πθold via the following objective function\nL(θ) = Epπθold\n[ min ( rt(θ)Ât, clip (rt(θ), 1− , 1 + ) Ât )] ,\nwhere rt(θ) = πθ(at|st) πθold (at|st) and is the clipping hyperparameter. In addition, many PPO implementations use GAE-λ as a low bias, low variance advantage estimator for Ât (Schulman et al., 2015b). PPO’s surrogate objective contains the importance sampling ratio rt(θ), which can potentially explode if πθold is too far from πθ. (Han & Sung, 2017). PPO’s surrogate loss mitigates this with the clipping function, which ensures that the agent makes reasonable steps. Alternatively, PPO can also be seen as an adaptive trust region introduced in TRPO (Schulman et al., 2015a).\nIn Figure 1a, distributed PPO agents implement a synchronous data-gathering scheme. Before data collection, workers are updated to πold and aggregate worker batches to training batch Dtrain. The learner performs many mini-batch gradient steps on Dtrain. Once the learner is done, learner weights are broadcast to all workers, who start sampling again." }, { "heading": "2.2 IMPORTANCE WEIGHTED ACTOR-LEARNER ARCHITECTURES", "text": "In Figure 1b, IMPALA decouples acting and learning by having the learner threads send actions, observations, and values while the master thread computes and applies the gradients from a queue of learners experience (Espeholt et al., 2018). This maximizes GPU utilization and allows for increased sample throughput, leading to high training speeds on easier environments such as Pong. As the number of learners grows, worker policies begin to diverge from the learner policy, resulting in stale policy gradients. To correct this, the IMPALA paper utilizes V-trace to correct the distributional shift:\nvst = Vφ (st) + t+n−1∑ i=t γi−t i−1∏ j=t cj ρi (ri+1 + γVφ (si+1)− Vφ (si)) where, Vφ is the value network, πθ is the policy network of the master thread, µθ′ is the policy\nnetwork of the learner thread, and cj = min ( c̄,\nπθ(aj |sj) µθ′ (aj |sj)\n) and ρi = min ( ρ̄, πθ(ai|si)µθ′ (ai|si) ) are clipped\nIS ratios.\nAlgorithm 1 IMPACT Input: Batch size M , number of workers W , circular buffer size N , replay coefficient K, target\nupdate frequency ttarget, weight broadcast frequency tfrequency, learning rates α and β 1: Randomly initialize network weights (θ, w) 2: Initialize target network (θ′, w′)← (θ, w) 3: Create W workers and duplicate (θ, w) to each worker 4: Initialize circular buffer C(N,K) 5: for t = 1, .., T do 6: Obtain batch B of size M traversed k times from C(N,K) 7: If k = 0, evaluate B on target θ′, append target output to B 8: Compute policy and value network gradients\n∇θJ(θ) = 1\nM ∑ (i,j)∈B ∇θπθ(sj |aj) max(πtarget(sj |aj), βπworkeri(sj |aj)) ÂV -GAE − η∇θKL (πtarget , πθ)\n∇wL(w) = 1\nM ∑ j (Vw(sj)− V̂V -GAE(sj))∇wVw(sj)\n9: Update policy and value network weights θ ← θ + αt∇θJ(θ),w ← w − βt∇wL(w) 10: If k = K, discard batch B from C(N,K) 11: If t ≡ 0 (mod ttarget), update target network (θ′, w′)← (θ, w) 12: If t ≡ 0 (mod tfrequency), broadcast weights to workers 13: end for\nWorker-i Input: Worker sample batch size S\n1: repeat 2: Bi = ∅ 3: for t = 1, ..., S do 4: Store (st, at, rt, st+1) ran by θi in batch Bi 5: end for 6: Send Bi to C(N,K) 7: If broadcasted weights exist, set θi ← θ 8: until learner finishes" }, { "heading": "3 IMPACT ALGORITHM", "text": "Like IMPALA, IMPACT separates sampling workers from learner workers. 
" }, { "heading": "3 IMPACT ALGORITHM", "text": "Like IMPALA, IMPACT separates sampling workers from learner workers. Algorithm 1 and Figure 1c describe the main training loop and architecture of IMPACT. In the beginning, each worker copies weights from the master network. Then, each worker uses its own policy to collect trajectories and sends the data (s_t, a_t, r_t) to the circular buffer. Simultaneously, workers also asynchronously pull policy weights from the master learner. In the meantime, the target network occasionally syncs with the master learner every t_target iterations. The master learner then repeatedly draws experience from the circular buffer. Each sample is weighted by the importance ratio π_θ/π_worker_i, as well as clipped with the target network ratio π_worker_i/π_target. The target network is used to provide a stable trust region (Figure 2), allowing multiple steps per batch (i.e., like PPO) even in the asynchronous setting (i.e., with the IMPALA architecture). In the next section, we describe the design of this improved objective." }, { "heading": "3.1 MAXIMAL TARGET-WORKER CLIPPING", "text": "PPO gathers experience from the previous iteration's policy π_θold, and the current policy trains by importance sampling off-policy experience with respect to π_θ. In the asynchronous setting, worker i's policy, denoted π_worker_i, generates experience for the policy network π_θ. The probability that batch B comes from worker i can be parameterized as a categorical distribution i ∼ D(α_1, . . . , α_n). We include this by adding an extra expectation to the importance-sampled policy gradient objective (IS-PG) (Jie & Abbeel, 2010):

J_IS(θ) = E_{i∼D(α)}[E_{(s_t,a_t)∼π_worker_i}[(π_θ / π_worker_i) Â_t]].

Since each worker contains a different policy, the agent introduces a target network for stability (Figure 2). Off-policy agents such as DDPG and DQN update target networks with a moving average. For IMPACT, we periodically update the target network with the master network. However, training with the importance-weighted ratio π_θ/π_target can lead to numerical instability, as shown in Figure 3. To prevent this, we clip the importance sampling ratio from the worker policy, π_worker_i, to the target policy, π_target:

J_AIS(θ) = E_{i∼D(α)}[E_{(s_t,a_t)∼π_worker_i}[min(π_worker_i / π_target, ρ) (π_θ / π_worker_i) Â_t]]
         = E_{i∼D(α)}[E_{(s_t,a_t)∼π_worker_i}[(π_θ / max(π_target, β π_worker_i)) Â_t]],

where β = 1/ρ. In the experiments, we set ρ as a hyperparameter with ρ ≥ 1 and β ≤ 1. To see why clipping is necessary: when the master network's action distribution changes significantly over a few training iterations, worker i's policy, π_worker_i, samples data outside that of the target policy, π_target, leading to large likelihood ratios π_worker_i/π_target. The clipping function min(π_worker_i/π_target, ρ) pulls back large IS ratios to ρ.

Figure 3 (caption, partially recovered): (a) compares training with the ratios R1 = π_θ/π_target, R2 = π_θ/π_worker_i, and R3 = π_θ/max(π_target, β π_worker_i). In (b), we show the target network update frequency is robust to a range of choices: we try target network update frequencies t_target equal to multiples (ranging from 1/16 to 16) of n = N · K, the product of the size of the circular buffer and the replay times for each batch in the buffer.

Figure 10 in Appendix E provides additional intuition behind the target clipping objective. We show that the target network clipping is a lower bound of the IS-PG objective.

For ρ > 1, the clipped target ratio is larger and serves to augment the advantage estimator Â_t. This incentivizes the agent toward good actions while avoiding bad actions. Thus, higher values of ρ encourage the agent to learn faster at the cost of instability.
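A minimal sketch of the clipped target-worker objective J_AIS follows; the log-probability inputs and the function name are assumptions for exposition, not the original implementation.

```python
import torch

def impact_surrogate(logp_learner, logp_worker, logp_target, advantages,
                     rho=2.0):
    """Sketch of J_AIS: the learner ratio pi_theta / pi_worker is rescaled
    by min(pi_worker / pi_target, rho), which is equivalent to dividing by
    max(pi_target, beta * pi_worker) with beta = 1/rho. All inputs are
    per-sample log-probabilities (shape (batch,))."""
    beta = 1.0 / rho
    denom = torch.max(logp_target.exp(), beta * logp_worker.exp())
    ratio = logp_learner.exp() / denom
    return (ratio * advantages).mean()  # maximize; negate to use as a loss
```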
We use GAE-λ with V-trace (Han & Sung, 2019). The V-trace GAE-λ modifies the advantage function by adding clipped importance sampling terms to the summation of TD errors:

Â_V-GAE = Σ_{i=t}^{t+n−1} (λγ)^{i−t} (Π_{j=t}^{i−1} c_j) δ_i V,

where c_j = min(c̄, π_target(a_j|s_j) / π_worker_i(a_j|s_j)) (we use the convention Π_{j=t}^{t−1} c_j = 1) and δ_i V is the importance-sampled 1-step TD error introduced in V-trace." }, { "heading": "3.2 CIRCULAR BUFFER", "text": "IMPACT uses a circular buffer (Figure 4) to emulate the mini-batch SGD used by standard PPO. The circular buffer stores N batches that can be traversed at most K times. Upon being traversed K times, a batch is discarded and replaced by a new worker batch.

For motivation, the circular buffer and the target network are analogous to mini-batching from π_old experience in PPO. When the target network's update frequency is n = N·K, the circular buffer is equivalent to distributed PPO's training batch where the learner samples N mini-batches for K SGD iterations.

This is in contrast to standard replay buffers, such as in ACER and Ape-X, where transitions (s_t, a_t, r_t, s_{t+1}) are either uniformly sampled or sampled based on priority, and, when the buffer is full, the oldest transitions are discarded (Wang et al., 2016; Horgan et al., 2018).

Figure 4 illustrates an empirical example where tuning K can increase training sample efficiency and decrease training wall-clock time." }, { "heading": "4 EVALUATION", "text": "In our evaluation we seek to answer the following questions:

1. How does the target-clipping objective affect the performance of the agents compared to prior work? (Section 4.1)

2. How does the IMPACT circular buffer affect sample efficiency and training wall-clock time? (Section 4.2)

3. How does IMPACT compare to PPO and IMPALA baselines in terms of sample and real-time performance? (Section 4.3)

4. How does IMPACT scale with respect to the number of workers? (Section 4.4)" }, { "heading": "4.1 TARGET CLIPPING PERFORMANCE", "text": "We investigate the performance of the clipped-target objective relative to prior work, which includes PPO and IS-PG based objectives. Specifically, we consider the following ratios:

R1 = π_θ / π_target,  R2 = π_θ / π_worker_i,  R3 = π_θ / max(π_target, β π_worker_i).

For all three experiments, we truncate all three ratios with PPO's clipping function c(R) = clip(R, 1−ε, 1+ε) and train in an asynchronous setting. Figure 4(a) reveals two important takeaways: first, R1 suffers from sudden drops in performance midway through training. Next, R2 trains stably but does not achieve good performance.

We theorize that R1 fails due to the target and worker network mismatch. During periods of training where the master learner undergoes drastic changes, worker action outputs vastly differ from the learner outputs, resulting in small action probabilities. This creates large ratios in training and destabilizes training. We hypothesize that R2 fails due to different workers pushing and pulling the learner in multiple directions. The learner moves forward with the most recent worker's suggestions without developing a proper trust region, resulting in many workers' suggestions conflicting with each other.

The loss function R3 shows that clipping is necessary and can help facilitate training. By clipping the target-worker ratio, we make sure that the ratio does not explode and destabilize training. Furthermore, we prevent workers from making mutually destructive suggestions by having a target network provide singular guidance."
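A minimal sketch of the circular buffer from Section 3.2 is shown below; the class interface and names are illustrative assumptions rather than the exact implementation.

```python
import collections

class CircularBuffer:
    """Sketch of the size-N circular buffer: it holds worker batches, each
    of which may be read at most K times before being evicted and replaced
    by a fresh batch (the deque itself evicts the oldest batch when full)."""

    def __init__(self, N, K):
        self.N, self.K = N, K
        self.slots = collections.deque(maxlen=N)  # [batch, reads_left] pairs

    def add(self, batch):
        self.slots.append([batch, self.K])  # evicts the oldest batch if full

    def sample(self):
        # Raises IndexError when empty; a real learner would block instead.
        batch = self.slots[0][0]
        self.slots[0][1] -= 1
        if self.slots[0][1] == 0:           # traversed K times -> discard
            self.slots.popleft()
        return batch
```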
}, { "heading": "4.1.1 TARGET NETWORK UPDATE FREQUENCY", "text": "In Section 3.2, an analogy was drawn between PPO’s mini-batching mechanism and the circular buffer. Our primary benchmark for target update frequency is n = N ·K, where N is circular buffer size and K is maximum replay coefficient. This is the case when PPO is equivalent to IMPACT.\nIn Figure 4(b), we test the frequency of updates with varying orders of magnitudes of n. In general, we find that agent performance is robust to vastly differing frequencies. However, when n = 1 ∼ 4,\nthe agent does not learn. Based on empirical results, we theorize that the agent is able to train as long as a stable trust region can be formed. On the other hand, if update frequency is too low, the agent is stranded for many iterations in the same trust region, which impairs learning speed." }, { "heading": "4.2 TIME AND SAMPLE EFFICIENCY WITH CIRCULAR BUFFER", "text": "Counter to intuition, the tradeoff between time and sample efficiency when K increases is not necessarily true. In Figure 4b and 4c, we show that IMPACT realizes greater gains by striking the balance between high sample throughput and sample efficiency. When K = 2, IMPACT performs the best in both time and sample efficiency. Our results reveal that wall-clock time and sample efficiency can be optimized based on tuning values of K in the circular buffer." }, { "heading": "4.3 COMPARISON WITH BASELINES", "text": "We investigate how IMPACT attains greater performance in wall clock-time and sample efficiency compared with PPO and IMPALA across six different continuous control and discrete action tasks.\nWe tested the agent on three continuous environments (Figure 5): HalfCheetah, Hopper, and Humanoid on 16 CPUs and 1 GPU. The policy networks consist of two fully-connected layers of 256 units with nonlinear activation tanh. The critic network shares the same architecture as the policy network. For consistentency, same network architectures were employed across PPO, IMPALA, and IMPACT.\nFor the discrete environments (Figure 6), Pong, SpaceInvaders, and Breakout were chosen as common benchmarks used in popular distributed RL libraries (Caspi et al., 2017; Liang et al., 2018). Additional experiments for discrete environments are in the Appendix. These experiments were ran on 32 CPUs and 1 GPU. The policy network consists of three 4x4 and one 11x11 conv layer, with nonlinear activation ReLU. The critic network shares weights with the policy network. The input of the network is a stack of four 42x42 down-sampled images of the Atari environment. The hyper-parameters for continuous and discrete environments are listed in the Appendix B table 1 and 2 respectively.\nFigures 5 and 6 show the total average return on evaluation rollouts for IMPACT, IMPALA and PPO. We train each algorithm with three different random seeds on each environment for a total time\nof three hours. According to the experiments, IMPACT is able to train much faster than PPO and IMPALA in both discrete and continuous domains, while preserving same or better sample efficiency than PPO.\nOur results reveal that continuous control tasks for IMPACT are sensitive to the tuple (N,K) for the circular buffer. N = 16 and K = 20 is a robust choice for continuous control. Although higher K inhibits workers’ sample throughput, increased sample efficiency from replaying experiences results in an overall reduction in training wall-clock time and higher reward. For discrete tasks, N = 1 and K = 2 works best. 
Empirically, agents learn faster from new experience than from replaying old experience, showing how exploration is crucial to achieving high asymptotic performance in discrete environments." }, { "heading": "4.4 IMPACT SCALABILITY", "text": "Figure 7 shows how IMPACT's performance scales relative to the number of workers. More workers mean increased sample throughput, which in turn increases training throughput (the rate at which the learner consumes batches). With the learner consuming more worker data per second, IMPACT can attain better performance in less time. However, as the number of workers increases, the observed increases in performance begin to decline." }, { "heading": "5 RELATED WORK", "text": "Distributed RL architectures are often used to accelerate training. Gorila (Nair et al., 2015) and A3C (Mnih et al., 2016) use workers to compute gradients to be sent to the learner. A2C (Mnih et al., 2016) and IMPALA (Espeholt et al., 2018) send experience tuples to the learner. Distributed replay buffers, introduced in ACER (Wang et al., 2016) and Ape-X (Horgan et al., 2018), collect worker-collected experience and define an overarching heuristic for learner batch selection. IMPACT is the first to fully incorporate the sample-efficiency benefits of PPO in an asynchronous setting.

Surreal PPO (Fan et al., 2018) also studies training with PPO in the asynchronous setting, but does not consider adaptation of the surrogate objective nor IS-correction. Their use of a target network for broadcasting weights to workers is also entirely different from IMPACT's. Consequently, IMPACT is able to achieve better results in both real-time and sample efficiency.

Off-policy methods, including DDPG and Q-Prop, utilize target networks to stabilize learning the Q function (Lillicrap et al., 2015; Gu et al., 2016). This use of a target network is related to but different from IMPACT's, which uses the network to define a stable trust region for the PPO surrogate objective." }, { "heading": "6 CONCLUSION", "text": "In conclusion, we introduce IMPACT, which extends PPO with a stabilized surrogate objective for asynchronous optimization, enabling greater real-time performance without sacrificing timestep efficiency. We show the importance of the IMPACT objective for stable training, and show that it can outperform tuned PPO and IMPALA baselines in both real-time and timestep metrics." }, { "heading": "A ADDITIONAL EXPERIMENTS", "text": "" }, { "heading": "B HYPER PARAMETERS FOR ALL ENVIRONMENTS", "text": "B.1 DISCRETE ENVIRONMENTS

B.2 CONTINUOUS ENVIRONMENTS

B.3 HYPERPARAMETER BUDGET

Listed below is the grid search we used for each algorithm to obtain optimal hyperparameters. Optimal values were found via grid searching over each hyperparameter separately. We found that IMPACT's optimal hyperparameter values tend to hover close to either IMPALA's or PPO's, which greatly mitigated IMPACT's budget.

B.3.1 DISCRETE ENVIRONMENT SEARCH

1 For HalfCheetah-v2, IMPACT and PPO Num SGD Iterations (K) is 32. 2 For HalfCheetah-v2, IMPACT Value Function Coeff is 0.5. 3 IMPALA was difficult to fine-tune due to unstable runs.

B.4 CONTINUOUS ENVIRONMENT SEARCH

C IMPALA TO IMPACT

In Figure 9, we gradually add components to IMPALA until the agent is equivalent to IMPACT. Starting from IMPALA, we gradually add PPO's objective function, the circular replay buffer, and target-worker clipping. In particular, IMPALA with PPO's objective function and the circular replay buffer is equivalent to an asynchronous variant of PPO (APPO). 
APPO fails to perform as well as synchronous distributed PPO, since PPO is an on-policy algorithm.

D IMPALA IN CONTINUOUS ENVIRONMENTS

In Figure 6, IMPALA performs substantially worse than the other agents in continuous environments. We postulate that IMPALA suffers from low asymptotic performance here since its objective is an importance-sampled version of the Vanilla Policy Gradient (VPG) objective, which is known to suffer from high variance and large update-step sizes. We found that for VPG, higher learning rates encourage faster learning in the beginning, but performance drops to negative return later in training.

In Appendix B, for IMPALA, we heavily tuned the learning rate, finding that small learning rates stabilize learning at the cost of low asymptotic performance. Prior work also reveals that agents using VPG fail to attain good performance in non-trivial continuous tasks (Achiam, 2018). Our results with IMPALA reach similar performance compared to other VPG-based algorithms. The closest neighbor to IMPALA, A3C, uses workers to compute gradients from the VPG objective to send to the learner thread. A3C performs well on InvertedPendulum yet flounders in continuous environments (Tassa et al., 2018)." }, { "heading": "E THE INTUITION OF THE OBJECTIVE", "text": "The following ratios represent the objective functions for the different ablation studies. In the plots (Figure 10), we set the advantage function to one, i.e., Â_t = 1.

• IS ratio: (π_θ / π_worker_i) Â_t
• IMPACT target: min(π_worker_i / π_target, ρ) (π_θ / π_worker_i) Â_t
• PPO ε-clip: min((π_θ / π_worker_i) Â_t, clip(π_θ / π_worker_i, 1−ε, 1+ε) Â_t)
• IMPACT target ε-clip: min(min(π_worker_i / π_target, ρ) (π_θ / π_worker_i) Â_t, clip(min(π_worker_i / π_target, ρ) (π_θ / π_worker_i), 1−ε, 1+ε) Â_t)

According to Figure 10, the IS ratio is large when π_worker_i assigns low probability. The IMPACT target ε-clip is a lower bound of the PPO ε-clip. In a distributed asynchronous setting, the trust region suffers from larger variance stemming from off-policy data. The IMPACT target ε-clip ratio mitigates this by encouraging conservative and reasonable policy-gradient steps." } ]
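A small numerical sketch of the four objectives listed in Appendix E above, evaluated with Â_t = 1 on assumed scalar action probabilities (the inputs are illustrative, not taken from the paper's experiments):

```python
import numpy as np

def objective_ratios(p_learner, p_worker, p_target, rho=2.0, eps=0.2, A=1.0):
    """Sketch of the four Appendix-E objectives with advantage A. Inputs are
    action probabilities under the learner, worker and target policies."""
    clip = lambda r: np.clip(r, 1 - eps, 1 + eps)
    is_ratio = p_learner / p_worker * A
    target = min(p_worker / p_target, rho) * p_learner / p_worker * A
    ppo = min(p_learner / p_worker * A, clip(p_learner / p_worker) * A)
    target_clip = min(target,
                      clip(min(p_worker / p_target, rho)
                           * p_learner / p_worker) * A)
    return is_ratio, target, ppo, target_clip

# Worker assigns low probability -> the raw IS ratio blows up, the clipped
# variants stay bounded.
print(objective_ratios(p_learner=0.6, p_worker=0.05, p_target=0.4))
```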
2,020
IMPACT: IMPORTANCE WEIGHTED ASYNCHRONOUS ARCHITECTURES WITH CLIPPED TARGET NETWORKS
SP:a4b0890cdeb53d7ea32798703955b14baeb60715
[ "The paper focuses on alleviating the problem of \"catastrophic forgetting\", exhibited by neural networks learned with gradient-based algorithms over long sequence of tasks. In such learning scenarios, tuning of parameters over the new tasks lead to degradation of performance over the old tasks as the parameters important for the latter are overwritten. The gradient-based algorithms are unable to distinguish between the important and the not-so-important parameters of the old tasks. Hence, one direction of works, including the proposed one, aim at identifying the most important parameters for all the old tasks and discourage modifications on those parameters during the training of the new tasks. ", "This paper proposes a method for tackling catastrophic forgetting. Similar to previous methods such as EWC (Kirkpatrick et al., 2017), they penalize parameter updates that align with the Fisher information matrix of the previous tasks. This will prevent the model from changing the previously useful parameters. They try to match the result of previous fisher-based methods but at a lower computational cost. They propose using a low-rank approximation to the Hessian using Hessian-vector-product with two types of vectors: the momentum velocity vector and the largest eigen-vector of the hessian. Then they build a diagonal approximation to the Hessian." ]
Learning neural networks with gradient descent over a long sequence of tasks is problematic, as their fine-tuning to new tasks overwrites the network weights that are important for previous tasks. This leads to a poor performance on old tasks – a phenomenon framed as catastrophic forgetting. While early approaches use task rehearsal and growing networks, which both limit the scalability of the task sequence, orthogonal approaches build on regularization. Based on the Fisher information matrix (FIM), changes to parameters that are relevant to old tasks are penalized, which forces the task to be mapped into the available remaining capacity of the network. This requires calculating the Hessian around a mode, which makes learning tractable. In this paper, we introduce Hessian-free curvature estimates as an alternative method to calculating the Hessian explicitly. In contrast to previous work, we exploit the fact that most regions in the loss surface are flat and hence only calculate a Hessian-vector product around the surface that is relevant for the current task. Our experiments show that on a variety of well-known task sequences we either significantly outperform or are on par with previous work.
[ { "affiliations": [], "name": "CATASTROPHIC FORGETTING" } ]
[ { "authors": [ "Rahaf Aljundi", "Francesca Babiloni", "Mohamed Elhoseiny", "Marcus Rohrbach", "Tinne Tuytelaars" ], "title": "Memory aware synapses: Learning what (not) to forget", "venue": "In Proc. European Conf. Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Jonathan Baxter" ], "title": "A model of inductive bias learning", "venue": "J. Artificial Intelligence Research,", "year": 2000 }, { "authors": [ "Matthieu Bray", "Esther Koller-Meier", "Pascal Müller", "Luc Van Gool", "Nicol N Schraudolph" ], "title": "3d hand tracking by rapid stochastic gradient descent using a skinning model", "venue": "In Proc. 1st European Conf. Visual Media Production (CVMP),", "year": 2004 }, { "authors": [ "Matthieu Bray", "Esther Koller-Meier", "Nicol N Schraudolph", "Luc Van Gool" ], "title": "Stochastic metadescent for tracking articulated structures", "venue": "In Proc. Intl. Conf. Computer Vision and Pattern Recognition (CVPR),", "year": 2004 }, { "authors": [ "Dumitru Erhan", "Pierre-Antoine Manzagol", "Yoshua Bengio", "Samy Bengio", "Pascal Vincent" ], "title": "The difficulty of training deep architectures and the effect of unsupervised pre-training", "venue": "In Artificial Intelligence and Statistics,", "year": 2009 }, { "authors": [ "Sebastian Farquhar", "Yarin Gal" ], "title": "Towards robust evaluations of continual learning", "venue": "arXiv preprint arXiv:1805.09733,", "year": 2019 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proc. 34th Intl. Conf. Machine Learning,", "year": 2017 }, { "authors": [ "Behrooz Ghorbani", "Shankar Krishnan", "Ying Xiao" ], "title": "An investigation into neural net optimization via hessian eigenvalue density", "venue": "In Proc. 36th Intl. Conf. Machine Learning,", "year": 2019 }, { "authors": [ "Ian J. Goodfellow", "Mehdi Mirza", "Da Xiao", "Aaron Courville", "Yoshua Bengio" ], "title": "An empirical investigation of catastrophic forgetting in gradient-based neural networks", "venue": "In Proc. 6th Intl. Conf. Learning Representations (ICLR),", "year": 2013 }, { "authors": [ "Matteo Hessel", "Hado van Hasselt", "Joseph Modayil", "David Silver" ], "title": "On inductive biases in deep reinforcement learning", "venue": "arXiv preprint arXiv:1907.02908,", "year": 2019 }, { "authors": [ "Max Jaderberg", "Volodymyr Mnih", "Wojciech Marian Czarnecki", "Tom Schaul", "Joel Z. Leibo", "David Silver", "Koray Kavukcuoglu" ], "title": "Reinforcement learning with unsupervised auxiliary tasks", "venue": "In Proc. 5th Intl. Conf. Learning Representations,", "year": 2017 }, { "authors": [ "Nitin Kamra", "Umang Gupta", "Yan Liu" ], "title": "Deep generative dual memory network for continual learning", "venue": "arXiv preprint arXiv:1710.10368,", "year": 2017 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proc. National Academy of Sciences,", "year": 2017 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Reply to huszár: The elastic weight consolidation penalty is empirically valid", "venue": "Proc. 
National Academy of Sciences,", "year": 2018 }, { "authors": [ "Sang-Woo Lee", "Jin-Hwa Kim", "Jaehyun Jun", "Jung-Woo Ha", "Byoung-Tak Zhang" ], "title": "Overcoming catastrophic forgetting by incremental moment matching", "venue": "In Proc. Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "David Lopez-Paz" ], "title": "Gradient episodic memory for continual learning", "venue": "In Proc. Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "David JC MacKay" ], "title": "A practical bayesian framework for backpropagation networks", "venue": "Neural computation,", "year": 1992 }, { "authors": [ "James Martens" ], "title": "Deep learning via hessian-free optimization", "venue": "In Proc. Intl. Conf. Machine Learning (ICML),", "year": 2010 }, { "authors": [ "Michael McCloskey", "Neal J Cohen" ], "title": "Catastrophic interference in connectionist networks: The sequential learning problem", "venue": "In Psychology of learning and motivation,", "year": 1989 }, { "authors": [ "TM Mitchell" ], "title": "The need for biases in learning generalizations (rutgers computer science", "venue": "tech. rept. cbm-tr-117). Rutgers University,", "year": 1980 }, { "authors": [ "Manfred Opper", "Ole Winther" ], "title": "A bayesian approach to on-line learning", "venue": "On-line learning in neural networks,", "year": 1998 }, { "authors": [ "Jose L Part", "Oliver Lemon" ], "title": "Incremental on-line learning of object classes using a combination of self-organizing incremental neural networks and deep convolutional neural networks", "venue": "In Proc. Workshop Bio-inspired Social Robot Learning in Home Scenarios,", "year": 2016 }, { "authors": [ "Barak A Pearlmutter" ], "title": "Fast exact multiplication by the hessian", "venue": "Neural computation,", "year": 1994 }, { "authors": [ "Hippolyt Ritter", "Aleksandar Botev", "David Barber" ], "title": "Online structured laplace approximations for overcoming catastrophic forgetting", "venue": "In Proc. Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Nicol N Schraudolph" ], "title": "Fast curvature matrix-vector products for second-order gradient descent", "venue": "Neural computation,", "year": 2002 }, { "authors": [ "Hanul Shin", "Jung Kwon Lee", "Jaehong Kim", "Jiwon Kim" ], "title": "Continual learning with deep generative replay", "venue": "In Proc. Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "James Hardy Wilkinson" ], "title": "The algebraic eigenvalue problem, volume 662", "venue": "Oxford Clarendon,", "year": 1965 }, { "authors": [ "Tianjun Xiao", "Jiaxing Zhang", "Kuiyuan Yang", "Yuxin Peng", "Zheng Zhang" ], "title": "Error-driven incremental learning in deep convolutional neural network for large-scale image classification", "venue": "In Proc. 22nd ACM Intl. Conf. Multimedia,", "year": 2014 }, { "authors": [ "Jaehong Yoon", "Eunho Yang", "Jeongtae Lee", "Sung Ju Hwang" ], "title": "Lifelong learning with dynamically expandable networks", "venue": "In Proc. 6th Intl. Conf. Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Friedemann Zenke", "Ben Poole", "Surya Ganguli" ], "title": "Continual learning through synaptic intelligence", "venue": "In Proc. 34th Intl. Conf. Machine Learning (ICML),", "year": 2017 } ]
[ { "heading": null, "text": "Learning neural networks with gradient descent over a long sequence of tasks is problematic as their fine-tuning to new tasks overwrites the network weights that are important for previous tasks. This leads to a poor performance on old tasks – a phenomenon framed as catastrophic forgetting. While early approaches use task rehearsal and growing networks that both limit the scalability of the task sequence orthogonal approaches build on regularization. Based on the Fisher information matrix (FIM) changes to parameters that are relevant to old tasks are penalized, which forces the task to be mapped into the available remaining capacity of the network. This requires to calculate the Hessian around a mode, which makes learning tractable. In this paper, we introduce Hessian-free curvature estimates as an alternative method to actually calculating the Hessian. In contrast to previous work, we exploit the fact that most regions in the loss surface are flat and hence only calculate a Hessian-vector-product around the surface that is relevant for the current task. Our experiments show that on a variety of well-known task sequences we either significantly outperform or are en par with previous work." }, { "heading": "1 INTRODUCTION", "text": "The main goal of machine learning is the ability to generalize from the given training data to unseen examples. However, in practice the achievable degree of generalization is limited. While in the ideal case an end-to-end system learns complex functions from minimum input, it is often necessary to introduce a certain amount of prior knowledge. Such prior knowledge operates as an inductive bias and therefore has a constraining effect on the hypothesis space, i.e., the set of all possible functions that can be learned by the learning algorithm (Mitchell, 1980). While this sounds counter-intuitive such a reduction of the hypothesis space may lead to better generalization properties in practice (Mitchell, 1980). Hence, instead of eliminating the bias to increase generalization (as suggested by Hessel et al. (2019)), a promising direction of research tries to identify and introduce the right form of it.\nWe can achieve this by limiting the functions that can be expressed by the learning algorithm or by introducing bias to the learning algorithm itself. Simple examples include the choice for linear activations to only allow approximations of linear functions or to add a regularization term to the objective function. Similar to this, we can also improve generalization by training on different tasks (Baxter, 2000) from a task family at the same time or by introducing auxiliary tasks (Jaderberg et al., 2017). This is commonly known as multitask learning and has shown to not only improve generalization properties but also to be more sample-efficient (Baxter, 2000). Due to the limited availability of data for training we need a well-tuned inductive bias. Hence, such choices are crucial for the final real-world performance of any machine learning algorithm.\nWhile multitask learning is a great tool to improve generalization and to reduce the amount of samples that are necessary to learn a family of tasks it is still limited in its scalability. Both the amount of tasks that can be learned and the amount of data required to learn them are strongly limiting factors. Consider, for instance, a reinforcement learning setup where an agent learns different tasks from interacting with in an environment. 
In practice we are limited in storing the data for all relevant tasks required to train a model on all tasks jointly. However, learning those tasks sequentially is also not an option, as gradient descent and its variants (which are the dominant learning approaches for neural networks) do not consider the importance of individual parameters for early tasks.

This destructive learning is commonly termed catastrophic forgetting (McCloskey & Cohen, 1989). While in the context of fine-tuning and pre-training (Erhan et al., 2009) this does not pose a problem (as the goal is not to reuse the previous parameter state, but rather to optimize the learning process for some target task), it becomes important in multitask problems where we wish to maximize generalization and sample-efficiency. It is also critical in the continual learning framework, where the parameters of a neural network are optimized over multiple datasets (representing different tasks) provided sequentially, which are not available at a later time. The goal is hence to retain all (or most) of the important parameters for previous tasks and to be able to build on this knowledge for an arbitrary number of future tasks. Thus, the scalability of learning would only be limited by the capacity of the neural network but not by the properties of the training method.

The Bayesian framework (Kirkpatrick et al., 2017; Ritter et al., 2018) is a promising approach to address catastrophic forgetting. The information about former tasks is condensed in a prior, which not only preserves the knowledge about tasks but also introduces an inductive bias based on the learned tasks. Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017) is a simple yet efficient way to reduce catastrophic forgetting. EWC approximates the prior with a Gaussian centered around the optimized network parameters for previous tasks, where the diagonal precision is given by the diagonal approximation of the Fisher Information Matrix (FIM). This approach has two significant downsides: i) each new task adds a new regularization term that penalizes changes of parameters that are relevant to previous tasks; and ii) the diagonal approximation of the FIM assumes independent network parameters, which leads to information loss with a growing number of tasks. Ritter et al. (2018) extend EWC but still approximate the prior from previous tasks using a Gaussian. They devise a block-diagonal approximation for the prior from the older tasks by defining a quadratic approximation whose solution requires calculating the Hessian. The Hessian is in turn approximated by the block-diagonal Kronecker-factored approximation.

In this work we propose an alternative way of calculating the Hessian, based on well-established Hessian-free (Schraudolph, 2002; Pearlmutter, 1994) methods to estimate curvature information of the network parameters. In contrast to Ritter et al. (2018), we exploit the fact that most regions in the loss surface are flat (Ghorbani et al., 2019). This allows us to use only a small subset of the Hessian, as it holds enough relevant information. We then use a Hessian-vector product to sample from this subset. This way, we can incorporate the importance of individual weights and include dependencies between the network parameters when we train the network over a long sequence of tasks. 
We evaluate our algorithm on permuted MNIST (Kirkpatrick et al., 2017), disjoint MNIST (Ritter et al., 2018) and single-headed disjoint MNIST (Farquhar & Gal, 2019), and compare with state-of-the-art approaches. Our results show that we consistently outperform EWC across all tasks and that we are on par with Ritter et al. (2018) on the disjoint tasks, while our method has significantly lower space complexity compared to both EWC and the Kronecker-factored approximation.

The remainder of this paper is structured as follows. Section 2 provides background on continual learning, EWC, and the Kronecker-factored Laplace approximation. Section 3 describes our method in detail. Section 4 shows the efficiency of our approach and compares it against the state of the art on a variety of well-known task sequences. Section 5 discusses related work. Section 6 concludes." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 CONTINUAL LEARNING AND CATASTROPHIC FORGETTING", "text": "In the continual learning framework the parameters θ ∈ R^n of a neural network are optimized over multiple datasets D_1, . . . , D_t, . . . , D_T. These individual datasets become available to the training algorithm one after another and usually cannot be revisited at a later time. The goal is to achieve a high accuracy/performance on the current task (represented by the current dataset D_t) while still preserving all (or most) of the performance on the previously visited tasks. However, this is usually challenging for neural network models, as commonly used gradient-based optimization methods cannot distinguish between important and unimportant parameters for previous tasks. As a consequence, parameters that are relevant for previous tasks are modified (heavily), which leads to performance degradation when the network is used on any of those previous tasks (Rusu et al., 2016).

Hence, to address catastrophic forgetting in neural networks we need to retain the parameters that are important for previous tasks while still allowing the network to learn new tasks. However, at the same time we also want the space complexity of the network to be independent of the number of tasks that were observed so far (and that are about to come). This means that learning a new task while retaining high performance on all prior tasks should be possible without adding new parameters or regularization terms for each new task, at least as long as sufficient capacity is available. As a plus we want to foster some degree of parameter sharing to enable positive transfer effects, e.g., improved sample-efficiency due to the fact that past experience can be reused." }, { "heading": "2.2 ELASTIC WEIGHT CONSOLIDATION (EWC)", "text": "EWC (Kirkpatrick et al., 2017) is a simple yet efficient approach that meets most of the above-mentioned requirements. The key idea is to add a penalty when parameters that are important for previous tasks are about to be changed, while parameters that are less relevant for previous tasks do not receive a penalty. EWC uses a quadratic penalty term that is derived from a Bayesian formulation of the problem (where all the information of all previous tasks is condensed in the prior) as follows:

p(θ|D_{1:t+1}) = p(D_{t+1}|θ) p(θ|D_{1:t}) / p(D_{t+1}), (1)

where p(θ|D_{1:t+1}) and p(θ|D_{1:t}) are the posterior and prior distributions over the parameters θ of the network and D_1, . . . , D_t, D_{t+1} are the datasets corresponding to the respective tasks. If we want to learn a new task we update the posterior by conditioning it on the newly available data D_{t+1}. 
However, we have to address two problems that stem from Equation 1. First, maintaining the full posterior over all previous datasets is usually intractable (Ritter et al., 2018; Opper & Winther, 1998) and we instead need to approximate it. Second, without storing the information from all previous tasks there is no easy way to update the posterior.\nThe first problem can be addressed by approximating the posterior with a Gaussian (MacKay, 1992): p(θ|D1:t) ∼ N(µt, Σt). (2)\nWith two tasks A and B and their datasets DA and DB, for the posterior p(θ|DA) the mean µA is given by the solution for the previous task θ*_A, and the precision Σ_A^{-1}, i.e., the inverse of the covariance, by the diagonal of the Fisher information matrix (FIM) F. Learning tasks A and B consecutively then results in the following objective function:\nL(θ) = L_B(θ) + (λ/2)(θ − θ*_A)^T F (θ − θ*_A), (3)\nwhere L_B(θ) is the loss depending on the current data DB, and λ is a hyperparameter that controls the influence of the regularization term. At this point we only need to store the previous weights and the diagonal approximation of the FIM for the previous task. For another task C we store a separate FIM for that new task together with the solution θ*_B for task B, and add another regularization term:\nL(θ) = L_C(θ) + (λ/2)(θ − θ*_A)^T F_A (θ − θ*_A) + (λ/2)(θ − θ*_B)^T F_B (θ − θ*_B). (4)" }, { "heading": "2.3 KRONECKER-FACTORED LAPLACE APPROXIMATION", "text": "The diagonal approximation of the FIM assumes the parameters to be independent, which is rarely the case in practice. Ritter et al. (2018) address this shortcoming by adopting the Bayesian online learning approach (Opper & Winther, 1998). As the prior p(θ|D1:t) preserves all the information about the previous tasks, recursively using the previous posterior as the next prior makes it possible to find a MAP estimate θ* = arg max_θ p(θ|D1, . . . ,Dt+1) sequentially. Because the posterior conditioned on all previous tasks is intractable, a parameterization of the posterior p(θ|Dt+1, w(t)) with parameters w(t) is introduced. Updating this parametric approximate posterior requires two steps:\n1. Update Step: in an update step the old approximate posterior p(θ|w(t)) is used to perform an update using Bayes' rule (see Ritter et al. (2018) for a detailed analysis):\np(θ|Dt+1, w(t)) = p(Dt+1|θ) p(θ|w(t)) / ∫ dθ′ p(Dt+1|θ′) p(θ′|w(t)). (5)\n2. Projection Step: in a projection step the new posterior p(θ|Dt+1, w(t)) is projected onto the same parametric family as p(θ|w(t)) (as they are usually not from the same parametric family):\nq(θ|w(t+1)) ≈ p(θ|Dt+1, w(t)). (6)\nSimilar to EWC, the update step can be approximated by a Gaussian approximate posterior:\nL(θ) = L_{t+1}(θ) + (1/2)(θ − µt)^T Σt^{-1} (θ − µt). (7)\nAs before, the mean µt is given by the solution for the previous task θ*_t. Accordingly, the parameters w(t) are given by w(t) = {µt, Σt^{-1}}. The core improvement that this framework offers is encapsulated in the projection step: instead of adding a new regularization term for each new task, Σt^{-1} is instead projected to Σ_{t+1}^{-1}, which then maintains information about all tasks up to task t + 1. Ritter et al.
(2018) realize this by computing the Hessian around the most recent solution θ*_{t+1}, and adding it to the Hessians from all previous solutions:\nΣ_{t+1}^{-1} = H_{t+1}(θ*_{t+1}) + Σ_t^{-1}, where H_{t+1}(θ*_{t+1}) = −∂² log p(Dt+1|θ) / ∂θ∂θ^T |_{θ=θ*_{t+1}}. (8)\nThis way information about previous tasks can be preserved while still limiting the storage requirements to a constant number of parameters. However, in practice this approach needs to store a set of parameters per task." }, { "heading": "3 HESSIAN-FREE CURVATURE ESTIMATION", "text": "Previous approaches identify the most important parameters for each previous task and then prevent the modification of those parameters during the training of a new task. EWC uses the diagonal of the FIM, while Ritter et al. (2018) use a Hessian approximated using the block-diagonal Kronecker-factored approximation.\nWe address the same problem but approach it differently. We build upon the intuition of meta-learning in general and the model-agnostic meta-learning (MAML) algorithm (Finn et al., 2017) in particular. MAML identifies model parameters that (upon modification) lead to faster learning for all tasks in a given task distribution. By defining a meta-learning objective and using available data for all tasks in the task distribution, it learns network weights that will lead to faster learning and generalization in new tasks when used as a starting point for the optimization.\nIn our case, apart from the fact that we assume no access to samples from previous tasks, we invert the intuition behind MAML: we identify model parameters that are sensitive to changes in each task, but instead of tuning these parameters to be a good starting point for the fine-tuning of all tasks, we penalize large changes to them, as such changes will deteriorate the performance of previous tasks.\nIn order to identify the important network parameters, i.e., parameters that upon being changed lead to a big change in the loss, we also use the Hessian matrix, but in contrast to the Kronecker-factored Laplace approximation we exploit the fact that most regions of the loss surface are flat (Ghorbani et al., 2019). This allows us to use only a small subset of the Hessian, as this subset already holds enough relevant information. We then use a Hessian-vector-product to sample from this subset.\nIn essence, we need to estimate directions with high curvature, as at those points we find the important weights of the network. However, any computation involving the exact Hessian is infeasible in practice for larger networks. Hence, it is key to find a good approximation of the Hessian while still preserving enough curvature information to determine which parameters are crucial for the previous tasks. Fortunately, as most regions in the loss surface are flat, it is sufficient to only extract information about the few regions that exhibit high curvature. Thus, instead of computing the full Hessian we compute a Hessian-vector-product, which is similar to sampling the curvature in the direction of a given vector. There are two important questions to answer here: (i) how to efficiently calculate the Hessian-vector product, and (ii) how to choose a suitable vector/direction.\nAn efficient Hessian-vector-product calculation was initially presented in Pearlmutter (1994) and has subsequently been used for several Hessian-free (also called truncated-Newton) optimization methods (Schraudolph, 2002; Martens, 2010). The key idea is that the Hessian is not calculated explicitly.
Instead, for a given vector v the Hessian-vector-product Hv is directly computed using finite differences (Martens, 2010) at the cost of a forward and a backward pass through the network (e.g., using algorithms such as back-propagation). The Hessian-vector-product is then calculated by (see Pearlmutter (1994) for the implementation details):\nHv = lim_{ε→0} (∇f(θ + εv) − ∇f(θ)) / ε = ∂/∂ε ∇f(θ + εv) |_{ε=0}. (9)\nGiven that the Hessian-vector-product can be computed as described above, the second question is how to choose the vector v that defines the direction in which we sample the curvature. Inspired by Stochastic Meta-Descent (Bray et al., 2004a;b), which uses the combination of the momentum and a Hessian-vector-product to estimate gradient directions with low curvature, our first choice for the vector v is the momentum. In our case the momentum is calculated using the exponentially weighted moving average of the past gradients:\nv_{t+1} = αv_t + (1 − α)∇f(θ), (10)\nwhere α controls the discount of older observations. The momentum is a sensible choice for the vector as it holds information about the parameters that have been changed the most during training. The assumption is then that exactly these parameters will be among the most important ones for the most recent task. As such, if the parameters for the previous task θ*_{t−1} are at an optimum, any change to important parameters results in a performance drop.\nAn alternative to the momentum is the eigenvector corresponding to the largest eigenvalue. This eigenvector represents the direction of highest curvature, and therefore by definition includes the most important parameters for the most recent task. A simple way to compute this eigenvector is the power method (Wilkinson, 1965), which entails computing a Hessian-vector-product.\nBoth versions result in a vector which maintains critical information about second-order interactions. From this vector we construct a positive semidefinite matrix by placing its absolute values as the entries of a diagonal matrix. Let h_t be the resulting vector of the Hessian-vector-product Hv for task t; then our curvature estimate C_t is given as:\nC_t = diag(|h_{t,1}|, . . . , |h_{t,n}|), (11)\nwith n the number of network parameters.\nThe projection step is then defined as:\nΣ_t^{-1} = C_t + Σ_{t−1}^{-1}, (12)\nand the final objective function for a new task t + 1 as:\nL(θ) = L_{t+1}(θ) + (λ/2)(θ − θ*_t)^T Σ_t^{-1} (θ − θ*_t). (13)\nSimilar to Kirkpatrick et al. (2017) and Ritter et al. (2018), we add a hyperparameter λ to control the influence of the regularization term on the overall loss, i.e., it controls how to weigh the importance of the previous tasks against the most recent task.\nOne of the main advantages of our approach is its low storage requirements. Following the analysis in Ritter et al. (2018), the Kronecker-factored approximation approach requires that all Hessians for previous tasks are kept in memory, and the same holds for EWC, as the diagonal approximations of the FIM for all previous tasks are required to learn each new task. Instead, our approach only needs to store two vectors with the same size as the network parameters, independently of the length of the task sequence.
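To make the estimate above concrete, the following is a minimal NumPy sketch of the finite-difference Hessian-vector-product (Eq. 9) and the resulting diagonal curvature estimate (Eqs. 10-12). The callable `grad_f` (the gradient of the task loss) is an assumption, and the names are hypothetical rather than taken from our implementation:

```python
import numpy as np

def hvp(grad_f, theta, v, eps=1e-3):
    """Finite-difference Hessian-vector-product (Eq. 9):
    Hv ~= (grad f(theta + eps*v) - grad f(theta)) / eps."""
    return (grad_f(theta + eps * v) - grad_f(theta)) / eps

def momentum_vector(grads, alpha=0.9):
    """Exponentially weighted moving average of past gradients (Eq. 10)."""
    v = np.zeros_like(grads[0])
    for g in grads:
        v = alpha * v + (1 - alpha) * g
    return v

def curvature_estimate(grad_f, theta, v):
    """Diagonal curvature estimate C_t (Eq. 11), stored as a vector."""
    return np.abs(hvp(grad_f, theta, v))

# Projection step (Eq. 12): the running precision simply accumulates, so
# only two parameter-sized vectors are ever stored:
# precision = curvature_estimate(grad_f, theta_star, v) + precision_prev
```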
" }, { "heading": "4 EXPERIMENTS", "text": "In our experiments, we compare both of our Hessian-free curvature estimations (eigenvector and momentum) to closely related methods, i.e., EWC (Kirkpatrick et al., 2017) and Kronecker-factored approximation (Ritter et al., 2018). For both EWC and Kronecker-factored approximation we adapt the implementation from https://github.com/hannakb/KFA. We release the source code of our methods upon publication." }, { "heading": "4.1 PERMUTED MNIST", "text": "For our first evaluation, we utilize the widely used permutedMNIST dataset as presented in Goodfellow et al. (2013) and used in Kirkpatrick et al. (2017) and Ritter et al. (2018). The dataset contains 28×28 grey-scale images, whose pixels are permuted randomly in order to generate new tasks. Each permutation is a truly new task, since it is unrecognizable from its original.\nFor the evaluation, we perform a hyperparameter search with the following range of parameters: i) network structure: either 1 layer with 200 hidden units or 2 layers with 100 hidden units each; ii) λ ∈ [1, 2, 3, 10, 20, 30, 100, 300]. We use the ADAM optimizer with a learning rate of 0.001, a momentum of 0.5, and a batch size of 64 over 10 epochs.\nFigure 1 shows the mean average accuracy over all 50 tasks with the best hyperparameters discovered for each method. While Kronecker-factored approximation achieves 83.82%, Hessian-free curvature estimation achieves 62.58% and Hessian-free curvature estimation with the largest eigenvector achieves 61.63%, leading to better results compared to EWC (51.62%) for the last 15 tasks.\nEven though Kronecker-factored approximation achieves better performance compared to our approach, according to Farquhar & Gal (2019) other tasks can be more representative for evaluating continual learning approaches. In fact, Farquhar & Gal (2019) suggest using a specific version of disjointMNIST, which we evaluate below." }, { "heading": "4.2 DISJOINTMNIST", "text": "For an evaluation according to DisjointMNIST (Ritter et al., 2018) we split MNIST into two tasks: (1) digits '0' to '4' and (2) digits '5' to '9'. For this experiment we use a network with a ten-way classifier, which makes the problem considerably more challenging than using a separate five-way classifier per task. Hence, here the classifier learns a strong (bad) prior for the (respective) unseen classes in the datasets. It is more difficult as training on the second split can easily overwrite the parameters of the ten-way classifier for the classes of the first split. We use a simple dense feed-forward network architecture with 2 layers and 100 hidden units in each layer as well as a batch size of 250 as reported in Ritter et al. (2018). We use 10 epochs and the same Adam parameters as in the PermutedMNIST experiment. This allows a comparison of our results against Kronecker-factored approximation and EWC.\nFollowing the same evaluation procedure as Ritter et al. (2018), Figure 2a illustrates the result of a hyperparameter search over λ ∈ [10^0, 10^1, . . . , 10^7] for EWC, Kronecker-factored approximation,
Surprisingly, our approach is even comparable to the Kronecker-factored approximation (which reaches 94.93%) although our method uses considerably less storage memory to store information on the importance of parameters. The use of the largest eigenvector on the other hand performs poorly compared to the other methods with 72.69% for λ = 106." }, { "heading": "4.3 SINGLE-HEADED SPLIT MNIST", "text": "For the Single-Headed-Split-MNIST task (Farquhar & Gal, 2019) the available digits are split into five groups (i.e., tasks) of two classes each. The classifier (as for the PermutedMNIST) uses ten outputs, i.e., one for each digit, and the network is trained on each task one after another. In contrast to some other work (Zenke et al., 2017) all the tasks share the classifier head instead of having multiple task-specific outputs. Hence, the predictions are made for all possible outputs, not only for the outputs of classes that belong the most recent task.\nWe use the same network as in the previous experiments (i.e., 2 layers of 100 hidden units each) and a batch of 64. Figure 2b shows the results after a hyperparameter search over λ. As in the previous experiments we can observe that both of our Hessian-free curvature estimations consistently outperform EWC (Hessian-free with momentum achieves 57.54% and the eigenvector approach 55.36% while EWC reaches 46.73%) and that the momentum-based variant even comes again close to the Kronecker-factored approximation (which is at 57.2% at the end)." }, { "heading": "5 RELATED WORK", "text": "Related work around the field of catastrophic forgetting is mainly driven by regularization methods, rehearsal methods, and dynamic architecture methods.\nRegularization Methods. Elastic Weight Consolidation (Kirkpatrick et al., 2017) measures the distance between the network weights for the current task and the weight state of previous tasks, and applies a quadratic penalty weighted by a diagonal approximation of the Fisher information matrix to ensure that the new weights are not too far from the old weights. EWC only penalizes important parameters while the parameters that have no influence on the performance of previous tasks are allowed to change freely. Similar approaches have been proposed by Aljundi et al. (2018) and Lee et al. (2017). The main difference is how the importance of parameters for previous tasks are approximated. However, all these approaches have limited performance as they do not consider interactions between the parameters. Instead of using the diagonal of the Fisher information matrix (Ritter et al.,\n2018) apply a Kronecker-factored approximation of the Hessian. This leads to strong improvements over EWC. This approach is most similar to ours, as it attempts to capture second-order parameter interactions to regularize parameter change. The main difference to our method is the usage of the Kronecker factorization to store the Hessian in a compact way while we exploit the fact that most regions in the loss surface are flat (Ghorbani et al., 2019). This allows us to use only a small subset of the Hessian as it holds enough relevant information. We then use a Hessian-vector-product to sample from this subset.\nRehearsal Methods. Rehearsal methods attempt to reduce catastrophic forgetting by replaying examples of previous tasks when learning a new task. A first approach here is to not only learn the actual task at hand but also the distribution of the training data. 
When a new task is learned, artificial samples from this learned distribution are added to the current set of training data. Typically this is done by adding a Variational Autoencoder (Kamra et al., 2017). Recent approaches (Shin et al., 2017) also employ generative adversarial networks with promising results. A second, more direct approach preserves a subset of the training data for each task in an episodic memory and reuses it to constrain the learning process of future tasks (Lopez-Paz et al., 2017). However, while being effective in reducing catastrophic forgetting in general, both approaches have shortcomings, as the inherent problem of catastrophic forgetting is simply shifted to a scalability problem. In generative approaches, samples for all previous tasks must be replayed each time to preserve old parameter states, which becomes problematic as the number of tasks increases. Similarly for the direct approach, even if only a small subset of examples for each task is preserved, we can still end up with a large dataset as the number of tasks increases.\nDynamic Architecture Methods. Another way to address catastrophic forgetting is to incrementally increase the capacity of the architecture. Approaches vary mainly in whether new capacity is added for each new task by default, or whether this is determined by a metric. Progressive Neural Networks (Rusu et al., 2016) add a new network for each new task, and each new network is connected via lateral connections to the old ones to allow for transfer from previous tasks to the current one. This avoids catastrophic forgetting by design, but as each new task requires a new network this approach does not scale well with the number of tasks. In contrast to Progressive Neural Networks, other approaches only add capacity when it is necessary. Part & Lemon (2016) present an approach based on Self-Organizing Maps, which employs a similarity metric to determine whether a new node should be added to the network. Similar to this, Xiao et al. (2014) start out with a classifier with one super class and add new parameters based on an error signal. Depending on the error made by the current model, only the final layer is extended by another output dimension, or a whole new sub-network is added as a subclass. Yoon et al. (2018) use the combination of sparsity and breadth-first search to determine which parameters should be retrained for the current task. If the features learned so far are not able to represent the new task, more capacity is added dynamically (as in Xiao et al. (2014)). While these methods suffer significantly less from scalability issues, their main disadvantage lies in the fact that they have very stringent architectural constraints, which cannot be easily transferred to arbitrary existing models." }, { "heading": "6 CONCLUSION", "text": "This paper addressed catastrophic forgetting within a continual learning framework where the ultimate goal lies in the identification of the network weights that are important to previously learned tasks. While previous work in this direction is either limited in the achievable accuracy (as it only considers the diagonal of the Fisher Information Matrix) or limited in the number of tasks (as it needs to store information that grows linearly with the number of tasks), we set out to provide a first approach that uses second-order parameter dependencies with constant space complexity.
We exploit the fact that most regions in the loss surface are flat, which allows us to use only a small subset of the Hessian as it holds enough relevant information. We then use a Hessian-vector-product to sample from this subset. This way, we can incorporate the importance of individual weights and include dependencies between the parameters when we train the network over a long task sequence.\nWe evaluated our algorithm on three widely used benchmarks and compared it with the state of the art. Our results show that we consistently outperform EWC across all benchmarks and that we are better than or at least on par with the Kronecker-factored approximation, while our method at the same time requires significantly less memory." } ]
2019
null
SP:1b41929d9d98d6c198b688533d8066bd53ad085c
[ "This paper analyzes and extends a recently proposed goodness-of-fit test based on typicality [Nalisnick et al., ArXiv 2019]. Firstly, the authors give bounds on the type-II error of this test, showing it can be characterized as a function of KLD[q || p_true] where p is the true data generating process and q is an alternative generative process. The paper then shifts to the main contribution: an in-depth study of a Gaussian mixture simulation along with accompanying theoretical results. The simulation shows that maximum likelihood estimation (MLE)---due to it optimizing KLD[p_true || p_model]---does not penalize the model for placing probability in places not occupied by p_true. This means that while samples from p_true should fall within the model’s typical set, the model typical set may be broader than p_true’s. Table 1 makes this clear by showing that only 30-40% of samples from the model fall within the typical set of p_true. Yet >93% of samples from p_true fall within the models’ typical sets. The paper then makes the observation that the models do not have high overlap in their typical sets, and thus p_true’s typical set could be well approximated by the intersection of the various models’ typical sets. Applying this procedure to the Gaussian mixture simulation, the authors observe that ~95% of samples drawn from the intersection of the ensemble fall within p_true’s typical set. Moreover, ~97% of samples from p_true are in the ensemble (intersection) typical set. The paper closes by proving that the diversity of the ensemble controls the overlap in their typical sets, and hence increasing diversity should only improve the approximation of p_true’s typical set. ", "I machine learning, we often have training data representative of an underlying distribution, and we want to test whether additional data come from the same distribution as the training data (e.g. for outlier/anomaly detection, or model checking). One way to do this is to learn a model of the underlying distribution, and test whether the additional data fall within the typical set of the model. This paper points out that the typical set of the model may be very different from the typical set of the underlying distribution if the model is learned by maximum likelihood, in which case a test of typicality with respect to the model would be a poor test of typicality with respect to the underlying distribution. The paper shows theoretically that the intersection of the typical sets of an ensemble of models lies within the typical set of the underlying distribution, provided that (a) each model is a good enough approximation to the underlying distribution, and (b) the models are all sufficiently different from each other. Based on that, the paper argues that a better test of typicality would be to test whether the additional data fall within the intersection of the typical sets of the ensemble of models." ]
Good methods of performing anomaly detection on high-dimensional data sets are needed, since algorithms which are trained on data are only expected to perform well on data that is similar to the training data. There are theoretical results on the ability to detect if a population of data is likely to come from a known base distribution, which is known as the goodness-of-fit problem, but those results require knowing a model of the base distribution. The ability to correctly reject anomalous data hinges on the accuracy of the model of the base distribution. For high dimensional data, learning an accurate-enough model of the base distribution such that anomaly detection works reliably is very challenging, as many researchers have noted in recent years. Existing methods for the goodness-of-fit problem do not account for the fact that a model of the base distribution is learned. To address that gap, we offer a theoretically motivated approach to account for the density learning procedure. In particular, we propose training an ensemble of density models, considering data to be anomalous if the data is anomalous with respect to any member of the ensemble. We provide a theoretical justification for this approach, proving first that a test on typicality is a valid approach to the goodness-of-fit problem, and then proving that for a correctly constructed ensemble of models, the intersection of typical sets of the models lies in the interior of the typical set of the base distribution. We present our method in the context of an example on synthetic data in which the effects we consider can easily be seen.
[ { "affiliations": [], "name": "TESTING TYPICALITY" } ]
[ { "authors": [ "Sivaraman Balakrishnan", "Larry Wasserman" ], "title": "Hypothesis testing for densities and highdimensional multinomials: Sharp local minimax rates", "venue": "The Annals of Statistics,", "year": 2019 }, { "authors": [ "Andrew R Barron" ], "title": "Uniformly powerful goodness of fit tests", "venue": "The Annals of Statistics,", "year": 1989 }, { "authors": [ "Raghavendra Chalapathy", "Sanjay Chawla" ], "title": "Deep learning for anomaly detection: A survey", "venue": "arXiv preprint arXiv:1901.03407,", "year": 2019 }, { "authors": [ "Xi Chen", "Nikhil Mishra", "Mostafa Rohaninejad", "Pieter Abbeel" ], "title": "Pixelsnail: An improved autoregressive generative model", "venue": "arXiv preprint arXiv:1712.09763,", "year": 2017 }, { "authors": [ "Yen-Chi Chen" ], "title": "A tutorial on kernel density estimation and recent advances", "venue": "Biostatistics & Epidemiology,", "year": 2017 }, { "authors": [ "Hyunsun Choi", "Eric Jang" ], "title": "Generative ensembles for robust anomaly detection", "venue": "arXiv preprint arXiv:1810.01392,", "year": 2018 }, { "authors": [ "Thomas M Cover", "Joy A Thomas" ], "title": "Elements of information theory", "venue": null, "year": 2012 }, { "authors": [ "Ivo Danihelka", "Balaji Lakshminarayanan", "Benigno Uria", "Daan Wierstra", "Peter Dayan" ], "title": "Comparison of maximum likelihood and gan-based training of real nvps", "venue": "arXiv preprint arXiv:1705.05263,", "year": 2017 }, { "authors": [ "Nicola De Cao", "Ivan Titov", "Wilker Aziz" ], "title": "Block neural autoregressive flow", "venue": "arXiv preprint arXiv:1904.04676,", "year": 2019 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "Nice: Non-linear independent components estimation", "venue": "arXiv preprint arXiv:1410.8516,", "year": 2014 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Will Grathwohl", "Ricky TQ Chen", "Jesse Betterncourt", "Ilya Sutskever", "David Duvenaud" ], "title": "Ffjord: Free-form continuous dynamics for scalable reversible generative models", "venue": "arXiv preprint arXiv:1810.01367,", "year": 2018 }, { "authors": [ "Aditya Grover", "Manik Dhar", "Stefano Ermon" ], "title": "Flow-gan: Combining maximum likelihood and adversarial learning in generative models", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "venue": "arXiv preprint arXiv:1610.02136,", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Thomas G Dietterich" ], "title": "Deep anomaly detection with outlier exposure", "venue": "arXiv preprint arXiv:1812.04606,", "year": 2018 }, { "authors": [ "Jonathan Ho", "Xi Chen", "Aravind Srinivas", "Yan Duan", "Pieter Abbeel" ], "title": "Flow++: Improving flowbased generative models with variational dequantization and architecture design", "venue": null, "year": 1902 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ke Li", 
"Jitendra Malik" ], "title": "Implicit maximum likelihood estimation", "venue": "arXiv preprint arXiv:1809.09087,", "year": 2018 }, { "authors": [ "Shiyu Liang", "Yixuan Li", "R Srikant" ], "title": "Enhancing the reliability of out-of-distribution image detection in neural networks", "venue": "arXiv preprint arXiv:1706.02690,", "year": 2017 }, { "authors": [ "Rowan McAllister", "Gregory Kahn", "Jeff Clune", "Sergey Levine" ], "title": "Robustness to out-of-distribution inputs via task-aware generative uncertainty", "venue": "In International Conference on Robotics and Automation", "year": 2019 }, { "authors": [ "Eric Nalisnick", "Akihiro Matsukawa", "Yee Whye Teh", "Dilan Gorur", "Balaji Lakshminarayanan" ], "title": "Do deep generative models know what they don’t know", "venue": "arXiv preprint arXiv:1810.09136,", "year": 2018 }, { "authors": [ "Eric Nalisnick", "Akihiro Matsukawa", "Yee Whye Teh", "Balaji Lakshminarayanan" ], "title": "Detecting out-of-distribution inputs to deep generative models using a test for typicality", "venue": null, "year": 1906 }, { "authors": [ "Aaron van den Oord", "Nal Kalchbrenner", "Koray Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "arXiv preprint arXiv:1601.06759,", "year": 2016 }, { "authors": [ "Ali Razavi", "Aaron van den Oord", "Oriol Vinyals" ], "title": "Generating diverse high-fidelity images with vq-vae-2", "venue": "arXiv preprint arXiv:1906.00446,", "year": 2019 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "arXiv preprint arXiv:1401.4082,", "year": 2014 }, { "authors": [ "Oren Rippel", "Ryan Prescott Adams" ], "title": "High-dimensional probability estimation with deep density models", "venue": "arXiv preprint arXiv:1302.5125,", "year": 2013 }, { "authors": [ "Tim Salimans", "Andrej Karpathy", "Xi Chen", "Diederik P Kingma" ], "title": "Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications", "venue": "arXiv preprint arXiv:1701.05517,", "year": 2017 }, { "authors": [ "Thomas Schlegl", "Philipp Seeböck", "Sebastian M Waldstein", "Ursula Schmidt-Erfurth", "Georg Langs" ], "title": "Unsupervised anomaly detection with generative adversarial networks to guide marker discovery", "venue": "In International Conference on Information Processing in Medical Imaging,", "year": 2017 }, { "authors": [ "Yang Song", "Taesup Kim", "Sebastian Nowozin", "Stefano Ermon", "Nate Kushman" ], "title": "Pixeldefend: Leveraging generative models to understand and defend against adversarial examples", "venue": "arXiv preprint arXiv:1710.10766,", "year": 2017 }, { "authors": [ "Lucas Theis", "Aäron van den Oord", "Matthias Bethge" ], "title": "A note on the evaluation of generative models", "venue": "arXiv preprint arXiv:1511.01844,", "year": 2015 }, { "authors": [ "Aaron van den Oord", "Oriol Vinyals" ], "title": "Neural discrete representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Sumio Watanabe" ], "title": "Asymptotic equivalence of bayes cross validation and widely applicable information criterion in singular learning theory", "venue": "Journal of Machine Learning Research,", "year": 2010 } ]
[ { "heading": null, "text": "Good methods of performing anomaly detection on high-dimensional data sets are needed, since algorithms which are trained on data are only expected to perform well on data that is similar to the training data. There are theoretical results on the ability to detect if a population of data is likely to come from a known base distribution, which is known as the goodness-of-fit problem, but those results require knowing a model of the base distribution. The ability to correctly reject anomalous data hinges on the accuracy of the model of the base distribution. For high dimensional data, learning an accurate-enough model of the base distribution such that anomaly detection works reliably is very challenging, as many researchers have noted in recent years. Existing methods for the goodness-of-fit problem do not account for the fact that a model of the base distribution is learned. To address that gap, we offer a theoretically motivated approach to account for the density learning procedure. In particular, we propose training an ensemble of density models, considering data to be anomalous if the data is anomalous with respect to any member of the ensemble. We provide a theoretical justification for this approach, proving first that a test on typicality is a valid approach to the goodness-of-fit problem, and then proving that for a correctly constructed ensemble of models, the intersection of typical sets of the models lies in the interior of the typical set of the base distribution. We present our method in the context of an example on synthetic data in which the effects we consider can easily be seen." }, { "heading": "1 INTRODUCTION", "text": "Machine learning models are inherently non-robust to distributional shift, and at no fault of the model necessarily. There is no reason to expect that models should perform well on data that is dissimilar to the data on which they were trained. Interestingly, despite the fact that researchers and practitioners have been able to train models that perform exceptionally well on a variety of challenging tasks, we are still bad at reliably predicting when those models will fail. This implies that not only will the models have undefined behavior on out-of-distribution data, we are unable to detect when the models are presented with out-of-distribution data. This poses a conundrum, since we wish to deploy our high-performing models, yet often we can’t since they can potentially have unpredictable behavior at unpredictable instances.\nSince training a model that is robust to all possible distributional shifts the model might encounter is potentially impossible, a more modest approach might be to come up with ways for detecting out-of-distribution data. These detections could then act as an indication that the model might be incorrect. This ability to predict incorrectness can go a long way in making systems more reliable.\nThe problem of detecting out-of-distribution data has a long history, and is formally known as the goodness-of-fit problem. Statisticians have proved bounds on the probability of detecting populations of out-of-distribution data, such as in Barron (1989) and Balakrishnan et al. (2019). These type of bounds show that certain tests can be performed which are capable of discerning (with non-trivial probability) that populations of data sampled from distributions at least some positive distance away from the base distribution are anomalous. 
However, in order to perform the proposed tests, an explicit form of the probability density function (or probability mass function) describing the base distribution is needed. For most real-world data sets, this density is not known, and must be estimated. While there has been a lot of analysis on the ability to detect anomalous data, those analyses typically do not account for the fact that the base density for which the tests are designed is learned.\nEmpirically, however, many researchers have noted that detecting anomalous data in high dimensions using learned densities is hard, even when using modern, powerful density estimators. For example, the researchers in Nalisnick et al. (2018) and Choi & Jang (2018) both claim that state-of-the-art learned densities are not suitable for anomaly detection since they assign higher probability to some out-of-distribution data than to the data on which the models were trained. The authors in Nalisnick et al. (2019) realized that such a phenomenon is actually not cause for alarm, and is even expected with high-dimensional distributions. Therefore they propose performing an anomaly detection test based on the typicality of data under the learned density instead of the likelihood of data under the learned density. However, in that work, the authors noted that such a test still performed poorly in some cases.\nIn this article, we investigate why goodness-of-fit tests are challenging when using learned densities. In particular, we analyze how the typical sets of learned distributions relate to the typical set of the ground-truth distribution. In order to do this, we work with synthetic distributions in which the probability density function is known exactly. The contributions we make are summarized as the following:\n• We prove error rate bounds on the goodness-of-fit test based on testing for typicality with respect to the base distribution.\n• We prove theorems stating that the typical sets of any two distributions having sufficiently high KL divergence must have low intersection, and that distributions having low KL divergence must have non-zero intersection.\n• We use these theorems to motivate our proposed method for conservative goodness-of-fit testing. Specifically, we propose training an ensemble of models, and show that by taking the intersection of their typical sets, we can approximately recover the typical set of the ground-truth distribution. We show that such an ensemble can exist and give sufficient conditions for its existence.\n• We demonstrate on synthetic data sets that the typical set of standard learned distributions and the ground-truth distribution often have low intersection, even when the class of densities from which we approximate the ground truth contains the ground truth. We validate that our proposed method addresses this issue." }, { "heading": "2 PRELIMINARIES", "text": "The setting we are concerned with is the following. Consider a continuous random variable x ∈ X with probability density function p(x). Let x^n := (x1, x2, ..., xn) denote a collection of n such random variables. Let h_p denote the differential entropy of p(x). For ε > 0 and a positive integer n, the typical set with respect to p(x) is defined as the following:\nT_ε^(n) := { x^n ∈ X^n : |−(1/n) log p(x^n) − h_p| < ε }. (1)\nIn words, this is the set of sequences of samples whose average negative log-probability is close to the differential entropy. This set has high probability under p(x), implying that most sequences of data points sampled from p(x) are contained in this set (Cover & Thomas, 2012).
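As a concrete reading of (1), a sequence is ε-typical when its average negative log-density is within ε of the differential entropy. Below is a minimal NumPy sketch; `log_p` is an assumed callable for the model's log-density, and the entropy is estimated by Monte Carlo:

```python
import numpy as np

def entropy_mc(log_p, sampler, m=100_000):
    """Monte Carlo estimate of the differential entropy h_p = E_p[-log p(x)]."""
    return -np.mean([log_p(sampler()) for _ in range(m)])

def is_typical(xs, log_p, h_p, eps):
    """Membership test for the epsilon-typical set of Eq. (1)."""
    avg_nll = -np.mean([log_p(x) for x in xs])
    return abs(avg_nll - h_p) < eps
```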
" }, { "heading": "2.1 HYPOTHESIS TESTING", "text": "We are interested in determining the plausibility that some collection of samples x̃^n ∈ X^n was sampled i.i.d. according to p(x). In hypothesis testing terminology, we are interested in determining which of two hypotheses is more probable:\n• The null hypothesis, H0 := x̃^n ∼ p(x)\n• The alternative hypothesis, H1 := x̃^n ∼ p̂(x),\nwhere p̂(x) is any distribution such that DKL(p‖p̂) ≥ d. If we define An ⊂ X^n to be the acceptance region for the null hypothesis, i.e., the set of all sequences x^n such that H0 is deemed more probable, then we can define the probabilities of error as the following:\nα_n = p^n(A_n^c), β_n = max_{p̃ : D(p‖p̃) ≥ d} p̃^n(A_n). (2)\nTypically we desire tests that minimize β_n for a fixed α_n, or tests that minimize some linear combination of the two.\nThe authors in Nalisnick et al. (2019) propose a test procedure which accepts the null hypothesis iff x̃^n ∈ T_ε^(n). In other words, a collection of points is determined to be anomalous with respect to p(x) iff x̃^n ∉ T_ε^(n). In that work, they show that this test performs about as well as other tests such as the Student's t-test, the Kolmogorov-Smirnov test, the Maximum Mean Discrepancy test, and the Kernelized Stein Discrepancy test. We note that in those comparisons, all tests are performed with respect to a learned density as a proxy for the ground-truth density.\nIn this work, we consider the same typicality-based test for accepting or rejecting the null hypothesis. We focus on this test since it allows for analysis into what happens when the proxy learned density for which the typical set is constructed differs from the ground-truth distribution p(x). In order to validate this approach, we first prove that for a fixed α_n, the typical set test achieves an error rate β_n that is at worst given by:\nTheorem 1. Let the region of acceptance for the typicality test be defined by the set T_ε^(n)(p(x)). For n sufficiently large, α_n < ε, and\nβ_n < max_{p̃ : D(p‖p̃) ≥ d} e^{−n(DKL(p̃‖p) + h_p̃ − h_p − 3ε)} + 3ε.\nProof of Theorem 1. See Section A.1.\nThis theorem states that for a fixed rate of correctly accepting H0, the test for typicality will fail to detect anomalous data at a rate no greater than the bound given." }, { "heading": "2.2 DENSITY LEARNING AS OPTIMIZATION", "text": "In order to perform the test on typicality to determine whether a sequence of data points is anomalous or not, a model of the probability density function p(x) is needed. For many practical applications, we are only ever presented with a data set of N samples from p(x), which we denote X̄ = (x̄1, x̄2, ..., x̄N). However, an estimate of the ground-truth density can be learned. We denote a parameterized distribution by q(x;θ), which can be learned by minimizing the following objective:\nmin_θ DKL(p(x)‖q(x;θ)). (3)\nThe KL divergence is defined by DKL(p(x)‖q(x;θ)) := E_p[log(p(x)/q(x;θ))]. By expanding terms and eliminating constants, it is clear that an equivalent optimization problem is\nmax_θ E_p[log q(x;θ)] ≈ max_θ (1/N) ∑_{i=1}^N log q(x̄_i;θ). (4)\nThe optimal θ in this problem is that which maximizes the log-likelihood on the data set X̄. This is why this optimization is commonly referred to as maximum-likelihood estimation. Importantly, the dependence on the unknown true distribution p(x) was removed through the use of the law of large numbers.
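As a sketch of optimizing (4), plain gradient ascent on the average log-likelihood; `log_q_grad` (the gradient of log q(x;θ) with respect to θ) is an assumed callable standing in for any differentiable density model:

```python
import numpy as np

def mle_fit(data, theta0, log_q_grad, lr=1e-2, steps=1000):
    """Maximize (1/N) * sum_i log q(x_i; theta) by gradient ascent (Eq. 4)."""
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        grad = np.mean([log_q_grad(x, theta) for x in data], axis=0)
        theta += lr * grad  # ascent on the average log-likelihood
    return theta
```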
Since the KL-divergence is equal to zero if and only if p(x) = q(x;θ) ∀x ∈ X, this objective is well-motivated if we wish that q(x;θ) ≈ p(x). However, there are many other statistical divergences between distributions that would also be suitable as objectives in the optimizations above. In fact, many of these other divergences might be preferred, especially in the context of anomaly detection. Among other reasons, this is because when the parameterized class of distributions q(x;θ) is highly expressive, the optimizations are non-convex, and local optima with sub-optimal properties may be found. This can be true even if there exists some θ* such that q(x;θ*) = p(x).\nFor example, by looking at the form of the KL-divergence, there is no direct penalty for q(x;θ) in assigning a high density to points far away from any of the x̄_i's. This can lead to locally optimal learned densities which have high probability density in regions of X where p(x) does not. This can potentially lead to situations in which a collection of points is anomalous with respect to p(x), but in-distribution with respect to the sub-optimally learned q(x;θ).\nThere exist other divergences which do not suffer from this effect, but it is not obvious how to optimize with respect to them. For example, the reverse KL-divergence, which switches the places of p(x) and q(x;θ) in the standard KL-divergence, has an alternative effect: q(x;θ) is directly penalized for assigning high density anywhere that does not also have high density under p(x). Unfortunately, this requires direct knowledge of p(x) to evaluate the objective, as do all divergences other than the forward KL, to the best of our knowledge. This makes the forward KL-divergence special in that it is the only divergence which can be directly optimized for.\nThese implications are important when considering how learning a density affects the hypothesis testing problem." }, { "heading": "2.3 MODERN DENSITY PARAMETERIZATIONS", "text": "There have been many advancements in the parameterization of density models in recent years. Broadly speaking, there are two main classes of parameterizations: latent-variable models and invertible flow models. Latent-variable models, such as Variational Auto-encoders (Rezende et al., 2014; Kingma & Welling, 2013), refer to models which map a lower-dimensional, latent random variable through a probabilistic mapping into data space. Because the image of any non-surjective function necessarily has measure zero, these types of models cannot have deterministic mappings, or else the model cannot constitute a valid probability distribution. The probabilistic mapping corrects for this, and the ability to represent data in a lower-dimensional latent space is a nice property of such models.\nThe other major class of parameterizations comprises those which map a probability density function through a bijective, continuously differentiable mapping, which we refer to as change-of-variable models. These models include autoregressive models, such as PixelCNN (Oord et al., 2016; Salimans et al., 2017), and invertible flow models, such as NICE (Dinh et al., 2014), RealNVP (Dinh et al., 2016), and GLOW (Kingma & Dhariwal, 2018). Often autoregressive models are considered a separate class of model from invertible flow models, but we lump them together as they are effectively different implementations of the same concept. All change-of-variable models operate on the ability to express a probability density function through a change of variables. Specifically, if m(x;θ) : X → Z is a continuously differentiable bijection, and r(z;θ) is a probability density function corresponding to the random variable z ∈ Z, then together m and r implicitly define a probability density function over the random variable x ∈ X:\nq(x;θ) = r(m(x;θ);θ) |det(∂m(x;θ)/∂x)|. (5)\nSo long as the determinant in (5) is easy to evaluate, the probability density q(x;θ) can be evaluated and the parameters θ can easily be optimized over.
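As a minimal sketch of Eq. (5), here for a one-dimensional affine bijection m(x) = (x − b)/s with a standard-normal base density (a toy stand-in for NICE/RealNVP-style flows, not the models themselves):

```python
import numpy as np

def log_q(x, s, b):
    """Change-of-variable log-density (Eq. 5) for z = m(x) = (x - b) / s:
    log q(x) = log r(m(x)) + log |dm/dx|, with r a standard normal."""
    z = (x - b) / s
    log_r = -0.5 * (z ** 2 + np.log(2.0 * np.pi))
    log_det = -np.log(np.abs(s))  # dm/dx = 1/s
    return log_r + log_det
```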
In both of these broad classes of parameterizations, there have been many clever implementations of these concepts, creating highly expressive density models. In this work, our considerations are tangential and complementary to the advancements in these parameterizations. This is because, regardless of the parameterization used, all of these models rely on optimizing the forward KL-divergence in order to learn their parameters. The effects we study in this paper occur even when the parameterization includes the ground-truth density we wish to learn, indicating that advancements in the expressivity of the models are unlikely to fix the undesired effects." }, { "heading": "3 CASE STUDY ON SYNTHETIC DATA", "text": "To understand why hypothesis testing is so difficult when using learned densities, we investigate this occurrence by learning synthetic probability distributions for which the ground truth p(x) is known. In particular, we consider the problem of learning the density of a high-dimensional mixture of Gaussians.\nLet M denote the number of mixture components in the base distribution. Then we define the base distribution as the following:\np(x) := ∑_{m=1}^M π_m p_m(x; µ_m, Σ_m), (6)\nwhere p_m(x; µ_m, Σ_m) is the probability density function associated with a Gaussian distribution with mean µ_m and covariance Σ_m. For this investigation, we define the space X := R^100, so that effects dependent on the dimensionality of X are evident. Here we choose the parameters µ_m and Σ_m such that the µ_m are sampled from a uniform distribution over a hyper-ball with radius r = 3.0, and the matrices Σ_m are diagonal matrices with the negative logarithm of the diagonal elements each sampled from a uniform distribution between 1.0 and 3.0. We fix π_m = 1/M. In all experiments we choose the number of components to be M = 20.\nSome random projections of samples from one such p(x) can be seen in Figure 1. Note that although in each projection the probability mass of the different mixture components seems to be overlapping, due to the dimensionality of the space each mode is separated by a relatively large distance. In the example shown in Figure 1, the minimum distance between the means of the mixture components is 3.62. Furthermore, the maximum probability of any mode with respect to any of the other mixture components is roughly 1e-20. This means that the M modes in the mixture distribution are clearly distinct and separated by regions of near-zero probability.\nWe attempt to learn p(x) by optimizing (4), for a q(x;θ) parameterized exactly as p(x). In other words, the parameters θ are the set {π_1, µ_1, Σ_1, π_2, µ_2, Σ_2, ..., π_M, µ_M, Σ_M}, where the π_m > 0 are constrained to sum to 1, and the Σ_m are constrained to be positive definite and diagonal.\nWe perform five experiments, each time randomly initializing the parameters θ exactly as was done for p(x). We use the indices k ∈ {1, ...,K = 5} to index the five experiments, and refer to the k-th learned density as q_k(x;θ_k).
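A minimal NumPy sketch of the setup just described (d = 100, M = 20, means uniform in a radius-3 ball, diagonal covariances with −log diagonal entries uniform in [1, 3]); function names and seeds are hypothetical:

```python
import numpy as np

def make_mixture(d=100, M=20, radius=3.0, rng=None):
    """Construct the synthetic base distribution of Eq. (6)."""
    rng = rng or np.random.default_rng(0)
    # Means: uniform over a hyper-ball of the given radius.
    dirs = rng.normal(size=(M, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    mus = dirs * radius * rng.uniform(size=(M, 1)) ** (1.0 / d)
    # Diagonal covariances: -log of each diagonal entry ~ Uniform(1, 3).
    variances = np.exp(-rng.uniform(1.0, 3.0, size=(M, d)))
    pis = np.full(M, 1.0 / M)
    return pis, mus, variances

def sample_mixture(pis, mus, variances, n, rng=None):
    """Draw n samples: pick a component, then add diagonal Gaussian noise."""
    rng = rng or np.random.default_rng(1)
    ks = rng.choice(len(pis), size=n, p=pis)
    return mus[ks] + np.sqrt(variances[ks]) * rng.normal(size=(n, mus.shape[1]))
```

The learned models q_k are initialized by calling the same constructor with different seeds, matching the randomized setup described above.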
The training curve for learning these densities can be seen in Figure 2, which shows the average test and train errors across experiments, and random projections of the resulting samples overlaid on samples of the base distribution can be seen in Figure 5. Examining these figures, we see that the learning procedure converged, yet the samples generated from the learned distributions clearly differ from the ground-truth samples. By examining the relative probabilities of each mixture component for the learned densities (Figure 3), we see that all of the learned densities attempt to represent the M = 20 modes of the base distribution using only about 12 of their 20 modes.\nRecalling that the base distribution p(x) has distinctly separated modes, the fact that the learned distributions represent the data using fewer modes directly implies that the models necessarily assign high density to regions in between modes of the ground-truth distribution. This behavior is very troublesome for the purposes of anomaly detection, since these regions of high density in between modes of the true distribution are effectively included in the typical set of the learned density. This means that data points lying in these regions, which should be considered anomalous with respect to the true distribution, are not detectable by hypothesis tests which test for typicality (or likelihood, for that matter).\nTo validate this claim, we generate histograms of the ground-truth negative log-likelihoods of samples from the base distribution and each of the learned distributions, which can be seen in Figure 4. We also list in Table 1 the percentages of samples from each learned density that lie in the typical sets of the ground-truth distribution and every other learned distribution. These statistics give a sense of the intersection of the typical sets of the learned and ground-truth densities. In particular, despite almost all of the ground-truth samples lying in the typical set of each learned distribution, only about 40% of the samples from each learned distribution lie in the typical set of the true distribution. In the context of anomaly detection, this means that if we used a test for typicality using one of these learned densities, about 60% of the region of acceptance would include points that are not actually typical with respect to the base distribution.\nThe mismatch in typical sets corroborates what was stated in Section 2.2, which was that the forward KL-divergence objective can lead to assigning density to regions far from any data points. Here, the learned densities are optimized to explain the data; the way in which they do so results in inadvertently explaining other regions of the data space as well.\nThe example presented here is admittedly a simple one, yet it demonstrates that coming up with more clever parameterizations will not necessarily result in a better learning process. In this example, the learned densities are expressive enough to learn the true distribution, but get stuck in poor local minima due to bad initializations. This is the case even though we initialized the learned distributions according to the same distribution over parameters from which the true distribution was initialized. For real data sets, it is unlikely that the parameterized space of densities we learn over can ever represent the density we are trying to estimate, and even if it could, finding a proper initialization might be exceptionally difficult. Therefore, the effects illustrated in this example are expected to occur in more complicated (and interesting) domains, and should be accounted for in those domains as well.
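The Table-1-style overlap statistics can be estimated by sampling; a minimal sketch reusing the `is_typical` helper sketched in Section 2 (batch construction here is an assumption, not the paper's exact procedure):

```python
import numpy as np

def frac_typical(samples, log_p, h_p, eps, n):
    """Fraction of length-n batches from `samples` that fall in the
    epsilon-typical set of p (the statistic reported in Table 1)."""
    batches = [samples[i:i + n] for i in range(0, len(samples) - n + 1, n)]
    hits = sum(is_typical(b, log_p, h_p, eps) for b in batches)
    return hits / len(batches)

# e.g. frac_typical(samples_from_q_k, log_p_true, h_p_true, eps, n)
# measures how much of q_k's typical mass lies inside p's typical set.
```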
In order to address this issue of over-assignment of density, we turn to ensembles of learned distributions. As can be seen in Table 1, we find that due to the random initializations of each of the learned distributions, the regions in which they over-assign density have low intersection. In the following section, we demonstrate how this property can be leveraged to account for the mismatch in typical sets between learned and target densities." }, { "heading": "4 TYPICAL SETS OF ENSEMBLES OF LEARNED DENSITIES", "text": "We propose an alternative test to discern the two hypotheses described in Section 2.1. First we define what we call the multi-typical set with respect to an ensemble of distributions q_k(x), k ∈ {1, ...,K}. For a given ε > 0 and positive integer n, this set is defined as the following:\nT_ε^(n)({q_1(x), ..., q_K(x)}) := { x^n ∈ X^n : max_{k∈{1,...,K}} |−(1/n) log q_k(x^n) − h_k| < ε }. (7)\nHere, h_k refers to the differential entropy of the k-th density in the ensemble, q_k(x).\nThe method we propose is the following:\n1. Train an ensemble of parameterized densities, {q_1(x;θ_1), ..., q_K(x;θ_K)}, using random initializations and the maximum-likelihood objective (4).\n2. When discerning between H0 and H1 (Section 2.1), we choose H0 iff x̃^n ∈ T_ε^(n)({q_1(x;θ_1), ..., q_K(x;θ_K)}).\nBefore justifying this test theoretically, we first demonstrate its utility on the mixture of Gaussians example in Section 3. By using rejection sampling, we generate samples that lie in the multi-typical set with respect to the 5 learned densities. Specifically, we generate 1000 samples from each model, and then reject any of the resulting 5000 samples that do not lie in the typical set of every model in the ensemble. The result of this is that 39.9% of the total samples make it through the rejection process, and 94.8% of the remaining samples lie within the typical set of the ground-truth distribution. Furthermore, 96.6% of the ground-truth samples are considered typical with respect to the multi-typical set (7). We emphasize that this sampling procedure is a proxy for estimating the interior of the multi-typical set. The fact that about as many of these samples lie in the ground-truth typical set as samples from the ground-truth distribution itself demonstrates that this set works as a good indicator of typicality with respect to the ground-truth distribution.\nTherefore, by leveraging an ensemble of learned densities, we have shown that the shortcoming of any individual learned density model can be overcome, at least in this motivating example.\nThe high-level idea for why this method works is that by sufficiently minimizing the forward KL-divergence between the ground-truth distribution and each of the learned distributions, there is guaranteed to be some intersection of the typical sets of the models in the ensemble, and at least part of that intersection must lie in the typical set of the ground-truth distribution. Furthermore, we prove that if the models in the ensemble are sufficiently different, such that the KL-divergence between each pair of models is large enough, then the intersection between their typical sets must be small.
To formalize this argument, we state and prove the following theorems:\nTheorem 2. For continuous random variable x ∈ X, consider some distribution p(x) with differential entropy hp, and any collection of distributions qk(x), k ∈ {1, ..., K}, all having DKL(p‖qk) = dk < ∞. Let Tp := T_ε^(n)(p) denote the typical set of p, and similarly Tqk := T_ε^(n)(qk) the typical set of qk for each k ∈ {1, ..., K}.\nFor n sufficiently large, if the following hold,\n• (1/(4ε)) log( (1 − 2ε) / ((K−1)/K + ε) ) > n\n• dk < (1/n) log( [ (1 − 2ε) e^{−4nε} − (K−1)/K ] / ε ) + 3ε\nthen Vol( Tp ∩ ⋂_k Tqk ) > 0.\nTheorem 3. For continuous random variable x ∈ X, consider two distributions, qa(x) and qb(x). Denote by ha and hb the differential entropies of these distributions, respectively. Similarly, let Ta := T_ε^(n)(qa) and Tb := T_ε^(n)(qb) represent the typical sets of qa and qb. If for some 0 < r ≤ 1, and n sufficiently large,\nDKL(qa‖qb) > hb − ha − 3ε + (1/n) log( 1 / ( r(1 − ε) e^{−2nε} − 2ε ) ),\nthen Vol( Ta ∩ Tb ) / Vol( Ta ) < r.\nProofs of both theorems are given in Section A.1. The result of Theorem 2 is that we can be sure that if every model in an ensemble of learned densities is successful in minimizing the KL-divergence with respect to the ground-truth density, then at least part of the multi-typical set must lie in the interior of the typical set T_ε^(n)(p(x)). The result of Theorem 3 is that sufficiently different models in an ensemble must not have a large intersection. The condition should hold for an r such that the ratio of volumes in Theorem 3 is approximately the same for every pair of models in the ensemble. If the conditions listed in both theorems hold, then the multi-typical set must lie almost entirely in T_ε^(n)(p(x)). The result of this is that the multi-typical set, if constructed according to the conditions given, can be used as a conservative estimate of the typical set T_ε^(n)(p(x)). Here we use the word conservative, since the conditions are only sufficient for the multi-typical set to act as an under-approximation of T_ε^(n)(p(x)).\nThe point of proving these theorems is to demonstrate that, in theory, an ensemble of learned models can approximate the typical set of the ground-truth distribution. The bounds given are sufficient conditions, but in practice we find that it is much easier to find an ensemble of models such that the multi-typical set approximates the ground-truth typical set than the bounds require. Therefore, these theorems should be taken as proof that such a procedure is well motivated, and not necessarily as a guide for choosing the values of ε and n in practice." }, { "heading": "5 RELATED WORK", "text": "There are many works interested in goodness-of-fit testing for modern high-dimensional data distributions. The works most similar to ours are Nalisnick et al. (2019) and Choi & Jang (2018). The authors in Nalisnick et al. (2019) use a test for typicality, showing empirically that it can perform about as well as other traditional goodness-of-fit tests.\nThe authors in Choi & Jang (2018) leverage an ensemble of learned density models to obtain better tests for anomaly detection than other tests that use a single learned distribution.
Specifically, they leverage the Watanabe-Akaike Information Criterion (Watanabe, 2010) to produce a score which averages the log probabilities across models of the ensemble and subtracts the variance. Without the variance term, taking the average log probability is equivalent to taking the geometric mean of the probability density functions in the ensemble, which acts as a sort of soft-min over the ensemble. Hence, in a way, that method can be thought of as generating a density that is the pointwise least probable density in the ensemble, whereas the method we propose can be thought of as taking the pointwise least typical density in the ensemble. We elaborate more on this in Section A.2.\nWe also point out that the authors in Choi & Jang (2018) propose a baseline test that resembles a test for typicality. They measure the distance of latent variables in a normalizing flow model from the origin, assuming that the distribution defined over the latent space is an isotropic Gaussian. This measure only corresponds to measuring typicality if the bijection is volume preserving.\nThere are many other methods for performing anomaly detection which do not rely on traditional hypothesis testing theory. For example, the works in Hendrycks & Gimpel (2016); Hendrycks et al. (2018); Liang et al. (2017) propose using the outputs of trained neural networks themselves to discern when the networks are presented with out-of-distribution data. By examining the soft-max probabilities at the penultimate layer of these networks, they can distinguish between in- and out-of-distribution inputs with relatively good success. The work in Schlegl et al. (2017) similarly uses the output of the final layers of the discriminator in a GAN to detect anomalous data points. The authors in McAllister et al. (2019) use a VAE to make data points more likely under a learned distribution before feeding them to a trained model. An issue with all of these methods is that they rely on checking whether low-dimensional, learned representations of the data are anomalous or not. This has the downside that, by construction, anomalous aspects of the data can be missed if the low-dimensional representation is invariant to those aspects. Nevertheless, some of these methods have proven to work well in certain domains. A survey of other methods for leveraging deep learning for anomaly detection can be found in Chalapathy & Chawla (2019).\nSince density estimation is an integral part of the hypothesis tests considered here, it is also important to consider the many methods, old and new, for learning probability densities. Kernel Density Estimation (KDE) methods are a traditional way of estimating densities, although they do not scale to high-dimensional, large data sets. An overview of KDE methods can be found in Chen (2017). Some more modern approaches to modeling densities were described in Section 2.3. In addition to the parameterizations of densities discussed there, many extensions have been developed with appealing properties, such as Chen et al. (2017), Ho et al. (2019), Grathwohl et al. (2018), De Cao et al. (2019), van den Oord et al. (2017), and Razavi et al. (2019), to list a few. There are also other objectives that are considered when learning densities, such as training a density in an adversarial manner as in Grover et al. (2018) and Danihelka et al. (2017). Alternatively, Li & Malik (2018) define implicit maximum likelihood estimation, which can be thought of as approximately minimizing the reverse KL-divergence.
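As a concrete point of comparison with the WAIC-style score of Choi & Jang (2018) discussed above, the sketch below places that score next to our multi-typicality statistic. It assumes log_probs is a K × batch array of per-model log-densities and that the entropies h_k have been estimated beforehand; all names are illustrative assumptions.

```python
import numpy as np

def waic_score(log_probs):
    """Choi & Jang (2018): mean log-density minus across-model variance,
    a soft-min over the ensemble densities (higher = more in-distribution)."""
    return log_probs.mean(axis=0) - log_probs.var(axis=0)

def multi_typicality_stat(log_probs, entropies):
    """Our statistic: the *least typical* model decides,
    max_k | -log q_k(x) - h_k |; small values mean typical under every model."""
    h = np.asarray(entropies)[:, None]
    return np.abs(-log_probs - h).max(axis=0)
```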
}, { "heading": "6 CONCLUSION", "text": "We have presented a case study investigating why learned density models can perform poorly when used in goodness-of-fit tests. We argue that the hypothesis test used to test for anomalous data must be considered together with the procedure for learning a density which is used by the test. Along these lines, we propose training an ensemble of learned densities and jointly testing for typicality with respect to each of these densities. We proved that the intersection of typical sets of such an ensemble can lie in the interior of the typical set of the ground-truth data distribution if the ensemble is constructed correctly. We demonstrated on a simple and realistic example that in practice, this procedure outperforms using typicality tests on a single learned density.\nInvestigations left for future work include extending error rate bounds to the case of the multitypicality test proposed here, and further evaluating the empirical performance of our proposed test on different domains." }, { "heading": "A APPENDIX", "text": "A.1 PROOF OF THEOREMS\nTo prove Theorem 1, we make use of the following lemma.\nLemma 1. Consider a continuous random variable x ∈ X , and two distributions qa(x) and qb(x). Denote the differential entropy of qa(x) and qb(x) as ha and hb, respectively. Let DKL(qa‖qb) = d. Denote the typical sets of these distributions as Ta := T (n) (qa(x)) and Tb := T (n) (qb(x)). Finally, let Tab := T (n) (DKL(qa‖qb) represent the relative entropy typical set (Cover & Thomas (2012), Section 11.8). Then, for n sufficiently large,\nqa(Tb) < e −n(d+ha−hb−3 ) + 3 .\nProof of Lemma 1. We prove this lemma through contradiction. Assume\nqa(Tb) > e −n(d+ha−hb−3 ) + 3 .\nThen by Theorem 8.2.2 in Cover & Thomas (2012) and the union bound, we have\nqa(Ta ∩ Tb) > e−n(d+ha−hb−3 ) + 2 ,\nand, qa(Ta ∩ Tab) > 1− 2 . This implies that\ne−n(d+ha−hb−3 ) < qa(Ta ∩ Tb ∩ Tab) < e−n(ha− )V ol(Ta ∩ Tb ∩ Tab),\nimplying e−n(d−hb−2 ) < V ol(Ta ∩ Tb ∩ Tab).\nWe also have that\nqb(Tab) > qb(Ta ∩ Tb ∩ Tab) > V ol(Ta ∩ Tb ∩ Tab) · min x∈Ta∩Tb∩Tab qb(x)\n> e−n(d−hb−2 )e−n(hb+ )\n= e−n(d− )\nHowever, we also have that qb(Tab) < e−n(d− ) by Lemma 11.8.1 in Cover & Thomas (2012). This implies that\ne−n(d− ) < e−n(d− ),\nwhich forms a contradiction. Therefore, our original assumption must be false, and therefore\nqa(Tb) < e −n(d+ha−hb−3 ) + 3 .\nProof of Theorem 1. The proof relies on the preceding lemma. Letting Tb in Lemma 1 correspond to our acceptance region for H0, T (n) (p(x)), then Lemma 1 states that for any distribution p̃, the probability\np̃(T (n) (p(x))) < e −n(DKL(p̃‖p)+hp̃−hp−3 ) + 3 .\nTherefore, by optimizing over all distributions p̃(x) such that DKL(p‖p̃) < d, the result follows immediately.\nTo prove Theorem 2, we make use of the following lemmas.\nLemma 2. For n sufficiently large,\nqk(Tp) > (1− 2 )e−n(dk+ )\nProof of Lemma 2. See Cover & Thomas (2012), Lemma 11.8.1.\nLemma 3. For n sufficiently large, V ol ( Tp ∩ Tqk ) ≥ (1− 2 )en(h(p)−3 ) − en(h(p)+dk−2 ).\nProof of Lemma 3. For n sufficiently large, qk(Tqk) > 1− , as shown in Cover & Thomas (2012), Theorem 8.2.2. By the union bound and Lemma 2,\nqk(T c qk ∪ T cp) < + 1− (1− 2 )e−n(dk+ )\n(1− 2 )e−n(d+ ) − < qk(Tqk ∩ Tp) ≤ ∫\nx∈Tqk∩Tp p(xn)e−n(dk− )dxn\n≤ ∫\nx∈Tqk∩Tp e−n(h(p)− )e−n(dk− )dxn\n= e−n(h(p)+dk−2 )V ol ( Tqk ∩ Tp ) (1− 2 )en(h(p)−3 ) − en(h(p)+dk−2 ) < V ol ( Tqk ∩ Tp )\nLemma 4. 
Lemma 4. For sets Si ⊂ X, i ∈ {0, ..., K}, if\nVol(Si ∩ S0) / Vol(S0) > (K − 1)/K, ∀i ∈ {1, ..., K},\nthen Vol(⋂_{i=0}^K Si) > 0.\nProof of Lemma 4. Immediate consequence of the union bound.\nProof of Theorem 2. For n sufficiently large, Vol(T_ε(p)) < e^{n(h(p) + ε)} (Cover & Thomas (2012), Theorem 8.2.2). If for all k ∈ {1, ..., K},\nVol(T_ε(qk) ∩ T_ε(p)) / Vol(T_ε(p)) > (K − 1)/K,\nthen by Lemma 4 the result holds. For k ∈ {1, ..., K},\nVol(T_ε(qk) ∩ T_ε(p)) / Vol(T_ε(p)) > [ (1 − 2ε) e^{n(h(p) − 3ε)} − ε e^{n(h(p) + dk − 2ε)} ] / e^{n(h(p) + ε)} = (1 − 2ε) e^{−4nε} − ε e^{n(dk − 3ε)}.\nTherefore if\n(K − 1)/K < (1 − 2ε) e^{−4nε} − ε e^{n(dk − 3ε)},\nor equivalently\ndk < (1/n) log( [ (1 − 2ε) e^{−4nε} − (K − 1)/K ] / ε ) + 3ε,\nthen the result follows immediately. Note that this condition is only valid if the argument of the logarithm is positive, which is only possible if the entire right-hand side is positive. A sufficient condition for the RHS to be positive is\n(1/(4ε)) log( (1 − 2ε) / ((K − 1)/K + ε) ) > n.\nThese conditions are the two conditions in Theorem 2.\nProof of Theorem 3. Similar to Lemma 1, we prove this by contradiction. We make use of bounds that hold for n sufficiently large, which we assume from here on to avoid repeatedly stating so. Let Tab := T_ε^(n)(DKL(qa‖qb)) represent the relative entropy typical set (Cover & Thomas (2012), Section 11.8).\nAssume that, in fact,\nVol(Ta ∩ Tb) / Vol(Ta) > r.\nThen, by Theorem 8.2.2 in Cover & Thomas (2012),\nVol(Ta ∩ Tb) > r Vol(Ta) > r(1 − ε) e^{n(ha − ε)}.\nHowever, by the definition of Ta,\nVol(Ta ∩ Tb) < qa(Ta ∩ Tb) / min_{x^n ∈ Ta∩Tb} qa(x^n) < qa(Ta ∩ Tb) / e^{−n(ha + ε)}.\nThis then implies that qa(Ta ∩ Tb) > r(1 − ε) e^{−2nε}.\nNow, using the union bound and Theorem 11.8.2 in Cover & Thomas (2012), we have that\nqa(Ta ∩ Tab) > 1 − 2ε.\nTherefore, again by the union bound,\nr(1 − ε) e^{−2nε} − 2ε < qa(Ta ∩ Tb ∩ Tab) = ∫_{x^n ∈ Ta∩Tb∩Tab} qa(x^n) dx^n < e^{−n(ha − ε)} Vol(Ta ∩ Tb ∩ Tab),\nwhich implies Vol(Ta ∩ Tb ∩ Tab) > ( r(1 − ε) e^{−2nε} − 2ε ) e^{n(ha − ε)}.\nFinally, we have that\nqb(Tab) ≥ qb(Ta ∩ Tb ∩ Tab) ≥ Vol(Ta ∩ Tb ∩ Tab) · min_{x^n ∈ Ta∩Tb∩Tab} qb(x^n) = Vol(Ta ∩ Tb ∩ Tab) e^{−n(hb + ε)} > ( r(1 − ε) e^{−2nε} − 2ε ) e^{−n(hb − ha + 2ε)}.\nFrom Theorem 11.8.2 referenced above, we also have that qb(Tab) < e^{−n(DKL(qa‖qb) − ε)}, which implies the following:\n( r(1 − ε) e^{−2nε} − 2ε ) e^{−n(hb − ha + 2ε)} < e^{−n(DKL(qa‖qb) − ε)},\nlog( r(1 − ε) e^{−2nε} − 2ε ) − n(hb − ha + 2ε) < −n( DKL(qa‖qb) − ε ).\nRearranging, the assumption is therefore inconsistent with the condition\nDKL(qa‖qb) > hb − ha − 3ε + (1/n) log( 1 / ( r(1 − ε) e^{−2nε} − 2ε ) ).\nIf this condition holds, as the hypothesis of the theorem guarantees, then there is a contradiction, and our original assumption that\nVol(Ta ∩ Tb) / Vol(Ta) > r\nmust be false.\nA.2 EXPRESSING THE MULTI-TYPICAL SET AS THE TYPICAL SET OF SOME DISTRIBUTION\nRecall the definition of the multi-typical set for a given ensemble of probability distributions:\nT_ε^(n)({q1(x), ..., qK(x)}) := { x^n ∈ X^n : max_{k∈{1,...,K}} | −log qk(x^n) − hk | < ε }. (8)\nAn interesting question to ask is whether there exists a probability distribution whose typical set corresponds to the multi-typical set defined here. Here we show that we can approximately recover an un-normalized version of such a distribution.\nDefine q̃k(x) := e^{hk} qk(x). Then we have\nT_ε^(n)({q1(x), ..., qK(x)}) = { x^n ∈ X^n : max_{k∈{1,...,K}} | log q̃k(x^n) | < ε }. (9)\nDefining m̃(x) := q̃k*(x), with k* = argmax_i | log q̃i(x) | (the model whose log value is farthest from the origin), we have\nT_ε^(n)({q1(x), ..., qK(x)}) = { x^n ∈ X^n : | log m̃(x^n) | < ε }. (10)\nNow, define Z = ∫_{x∈X} m̃(x) dx, and m(x) = m̃(x)/Z, such that m(x) is a valid probability distribution. Then if Em[−log m(x)] = log(Z), the distribution m(x) has a typical set identical to the set T_ε^(n)({q1(x), ..., qK(x)}).
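Before discussing when the normalization property above holds, here is a minimal sketch of evaluating log m̃(x) for an ensemble; it reuses the illustrative fit_ensemble/entropy_mc helpers from the earlier sketch, and the names are again assumptions rather than a fixed implementation.

```python
import numpy as np

def log_m_tilde(x, models, entropies):
    """Unnormalized log m~(x): pointwise, select the q~_k = e^{h_k} q_k
    whose log value is farthest from zero (the least typical model)."""
    x = np.atleast_2d(x)
    log_q_tilde = np.stack([m.score_samples(x) + h
                            for m, h in zip(models, entropies)])  # (K, batch)
    k_star = np.abs(log_q_tilde).argmax(axis=0)
    return log_q_tilde[k_star, np.arange(x.shape[0])]
```

A point x then lies in the multi-typical set exactly when |log_m_tilde(x, models, entropies)| < ε, matching Equation (10).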
The property that Em[−log m(x)] = log(Z) will not hold exactly in practice, but oftentimes it is approximately true. The reason is that each of the functions log q̃k(x) has zero mean under its own density qk (since E_{qk}[−log qk(x)] = hk). Pointwise, the function m̃(x) takes as its value the q̃k whose log value is farthest from the origin. If the q̃k are such that, on average, equal mass is kept on either side of the origin (in log-space), then the property will hold.\nThe result of such a procedure is an approximation of the density whose typical set is identical to the multi-typical set. Since computing the normalization constant Z is intractable in practice, we can learn another density using m̃(x) as the target instead of p(x). The reason we might wish to do this is that, by having an explicit form of m̃(x), we can learn a density by optimizing divergences other than the forward KL-divergence, such as the reverse KL-divergence. Note that\nDKL(q(x)‖m̃(x)) = DKL(q(x)‖Z m(x)) (11) = DKL(q(x)‖m(x)) − log Z. (12)\nTherefore, knowledge of the normalization constant is not needed in this context, since the constant term does not affect the optimization procedure. Similar results can be shown for other divergences.\nA.3 VISUALIZATIONS OF SAMPLES FROM THE MIXTURE OF GAUSSIANS EXPERIMENT" } ]
2019
null
SP:8cc8c5179965778ba6b0c6e9a38eeac3d903f579
[ "This paper proposes a modification to standard Projected Gradient Descent to improve transferability of adversarial examples, when the source model is a ResNet-like model containing skip connections. The method, Skip Gradient Method (SGM) modifies the backwards pass to scale down the gradient computed in each residual branch of the model, before these gradients are combined with the gradient from the skip connection. This thus upweights the gradients from the skip connections as opposed to residual modules. The paper demonstrates significant improvements in the single-model black-box transfer setting, against a variety of undefended and defended models.", "The paper is about adversarial attacks and highlights a security weakness of skip connections in ResNet-like CNNs, namely: skip connections make it easier to obtain adversarial examples. This observation leads to new approach to adversarial attacks, named Skip Gradient Method (SGM), which weights the residual gradient w.r.t the skip connection gradient. The approach is validated on a variety of image classification attack scenarios (e. g. white-box and transfer attacks) using two families of source models (ResNet and DenseNet). The results show the superiority of SGM when comparing to other adversarial attack scenarios." ]
Skip connections are an essential component of current state-of-the-art deep neural networks (DNNs) such as ResNet, WideResNet, DenseNet, and ResNeXt. Despite their huge success in building deeper and more powerful DNNs, we identify a surprising security weakness of skip connections in this paper. Use of skip connections allows easier generation of highly transferable adversarial examples. Specifically, in ResNet-like (with skip connections) neural networks, gradients can backpropagate through either skip connections or residual modules. We find that using more gradients from the skip connections rather than the residual modules according to a decay factor, allows one to craft adversarial examples with high transferability. Our method is termed Skip Gradient Method (SGM). We conduct comprehensive transfer attacks against state-of-the-art DNNs including ResNets, DenseNets, Inceptions, Inception-ResNet, Squeeze-and-Excitation Network (SENet) and robustly trained DNNs. We show that employing SGM on the gradient flow can greatly improve the transferability of crafted attacks in almost all cases. Furthermore, SGM can be easily combined with existing black-box attack techniques, and obtain high improvements over state-of-the-art transferability methods. Our findings not only motivate new research into the architectural vulnerability of DNNs, but also open up further challenges for the design of secure DNN architectures.
[ { "affiliations": [], "name": "Dongxian Wu" }, { "affiliations": [], "name": "Yisen Wang" }, { "affiliations": [], "name": "Shu-Tao Xia" }, { "affiliations": [], "name": "James Bailey" }, { "affiliations": [], "name": "Xingjun Ma" } ]
[ { "authors": [ "Yang Bai", "Yan Feng", "Yisen Wang", "Tao Dai", "Shu-Tao Xia", "Yong Jiang" ], "title": "Hilbert-based generative defense for adversarial examples", "venue": null, "year": 2019 }, { "authors": [ "Arjun Nitin Bhagoji", "Warren He", "Bo Li", "Dawn Song" ], "title": "Practical black-box attacks on deep neural networks using efficient query mechanisms", "venue": null, "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In S&P,", "year": 2017 }, { "authors": [ "Pin-Yu Chen", "Huan Zhang", "Yash Sharma", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models", "venue": "AISec,", "year": 2017 }, { "authors": [ "Pin-Yu Chen", "Yash Sharma", "Huan Zhang", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "Ead: elastic-net attacks to deep neural networks via adversarial examples", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In CVPR,", "year": 2009 }, { "authors": [ "Gavin Weiguang Ding", "Luyu Wang", "Xiaomeng Jin" ], "title": "AdverTorch v0.1: An adversarial robustness toolbox based on pytorch", "venue": "arXiv preprint arXiv:1902.07623,", "year": 2019 }, { "authors": [ "Yinpeng Dong", "Fangzhou Liao", "Tianyu Pang", "Hang Su", "Jun Zhu", "Xiaolin Hu", "Jianguo Li" ], "title": "Boosting adversarial attacks with momentum", "venue": null, "year": 2018 }, { "authors": [ "Yinpeng Dong", "Tianyu Pang", "Hang Su", "Jun Zhu" ], "title": "Evading defenses to transferable adversarial examples by translation-invariant attacks", "venue": null, "year": 2019 }, { "authors": [ "Ivan Evtimov", "Kevin Eykholt", "Earlence Fernandes", "Tadayoshi Kohno", "Bo Li", "Atul Prakash", "Amir Rahmati", "Dawn Song" ], "title": "Robust physical-world attacks on deep learning models", "venue": null, "year": 2018 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": null, "year": 2017 }, { "authors": [ "Qian Huang", "Isay Katsman", "Horace He", "Zeqi Gu", "Serge Belongie", "Ser-Nam Lim" ], "title": "Enhancing adversarial example transferability with an intermediate level attack", "venue": null, "year": 2019 }, { "authors": [ "Andrew Ilyas", "Logan Engstrom", "Anish Athalye", "Jessy Lin" ], "title": "Black-box adversarial attacks with limited queries and information", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Nathan Inkawhich", "Wei Wen", "Hai Helen Li", "Yiran Chen" ], "title": "Feature space perturbations yield more transferable adversarial examples", "venue": null, "year": 2019 }, { "authors": [ "Linxi Jiang", "Xingjun Ma", "Shaoxiang Chen", "James Bailey", "Yu-Gang Jiang" ], "title": "Black-box adversarial attacks on video recognition models", 
"venue": "In ACM MM,", "year": 2019 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Yingwei Li", "Song Bai", "Yuyin Zhou", "Cihang Xie", "Zhishuai Zhang", "Alan Yuille" ], "title": "Learning transferable adversarial examples via ghost networks", "venue": "arXiv preprint arXiv:1812.03413,", "year": 2018 }, { "authors": [ "Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Delving into transferable adversarial examples and black-box attacks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Xingjun Ma", "Bo Li", "Yisen Wang", "Sarah M Erfani", "Sudanthi Wijewickrema", "Grant Schoenebeck", "Dawn Song", "Michael E Houle", "James Bailey" ], "title": "Characterizing adversarial subspaces using local intrinsic dimensionality", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Xingjun Ma", "Yuhao Niu", "Lin Gu", "Yisen Wang", "Yitian Zhao", "James Bailey", "Feng Lu" ], "title": "Understanding adversarial attacks on deep learning based medical image analysis systems", "venue": null, "year": 1907 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Apostolos Modas", "Seyed-Mohsen Moosavi-Dezfooli", "Pascal Frossard" ], "title": "Sparsefool: a few pixels make a big difference", "venue": null, "year": 2019 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Somesh Jha", "Matt Fredrikson", "Z Berkay Celik", "Ananthram Swami" ], "title": "The limitations of deep learning in adversarial settings", "venue": "EuroS&P,", "year": 2016 }, { "authors": [ "Mahmood Sharif", "Sruti Bhagavatula", "Lujo Bauer", "Michael K Reiter" ], "title": "Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition", "venue": "In CCS,", "year": 2016 }, { "authors": [ "Carl-Johann Simon-Gabriel", "Yann Ollivier", "Leon Bottou", "Bernhard Schölkopf", "David LopezPaz" ], "title": "First-order adversarial vulnerability of neural networks and input dimension", "venue": null, "year": 2019 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Jiawei Su", "Danilo Vasconcellos Vargas", "Kouichi Sakurai" ], "title": "One pixel attack for fooling deep neural networks", "venue": "In IEEE Transactions on Evolutionary Computation", "year": 2019 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In ICLR,", "year": 2013 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": null, "year": 2016 }, { "authors": [ "Christian Szegedy", "Sergey Ioffe", "Vincent Vanhoucke", "Alexander A Alemi" ], "title": "Inception-v4, inception-resnet and the impact of residual connections on learning", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "In ICLR,", "year": 2018 }, { "authors": [ 
"Andreas Veit", "Michael J Wilber", "Serge Belongie" ], "title": "Residual networks behave like ensembles of relatively shallow networks", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Yisen Wang", "Xingjun Ma", "James Bailey", "Jinfeng Yi", "Bowen Zhou", "Quanquan Gu" ], "title": "On the convergence and robustness of adversarial training", "venue": null, "year": 2019 }, { "authors": [ "Yisen Wang", "Difan Zou", "Jinfeng Yi", "James Bailey", "Xingjun Ma", "Quanquan Gu" ], "title": "Improving adversarial robustness requires revisiting misclassified examples", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Cihang Xie", "Zhishuai Zhang", "Yuyin Zhou", "Song Bai", "Jianyu Wang", "Zhou Ren", "Alan L Yuille" ], "title": "Improving transferability of adversarial examples with input diversity", "venue": null, "year": 2019 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": null, "year": 2017 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Dong" ], "title": "ResNet-v2-152 hold-out from group 1 or group", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "In deep neural networks (DNNs), a skip connection builds a short-cut from a shallow layer to a deep layer by connecting the input of a convolutional block (also known as the residual module) directly to its output. While different layers of a neural network learn different “levels” of features, skip connections can help preserve low-level features and avoid performance degradation when adding more layers. This has been shown to be crucial for building very deep and powerful DNNs such as ResNet (He et al., 2016a;b), WideResNet (Zagoruyko & Komodakis, 2016), DenseNet (Huang et al., 2017) and ResNeXt (Xie et al., 2017). In the meantime, despite their superior performance, DNNs have been found extremely vulnerable to adversarial examples (or attacks), which are input examples slightly perturbed with an intention to fool the network to make a wrong prediction (Szegedy et al., 2013; Goodfellow et al., 2014; Ma et al., 2018; Bai et al., 2019; Wang et al., 2019; 2020). Adversarial examples often appear imperceptible to human observers, and are transferable across different models (Liu et al., 2017). This has raised security concerns on the deployment of DNNs in security critical scenarios, such as face recognition (Sharif et al., 2016), autonomous driving (Evtimov et al., 2018), video analysis (Jiang et al., 2019) and medical diagnosis (Ma et al., 2019).\nAdversarial examples can be crafted following either a white-box setting (the adversary has full access to the target model) or a black-box setting (the adversary has no information of the target model). White-box methods such as Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014), Basic Iterative Method (BIM) (Kurakin et al., 2016), Projected Gradient Decent (PGD) (Madry et al., 2018) and Carlini and Wagner (CW) (Carlini & Wagner, 2017) often suffer from low transferability\n†Correspondence to: Yisen Wang (eewangyisen@gmail.com)\nin a black-box setting, thus posing only limited threats to DNN models which are usually kept secret in practice (Dong et al., 2018; Xie et al., 2019). Several techniques have been proposed to improve the transferability of black-box attacks crafted on a surrogate model, such as momentum boosting (Dong et al., 2018), diverse input (Xie et al., 2019) and translation invariance (Dong et al., 2019). Although these techniques are effective, they (as well as white-box methods) all treat the entire network (either the target model or the surrogate model) as a single component while ignore its inner architectural characteristics. The question of whether or not the DNN architecture itself can expose more transferability of adversarial attacks is an unexplored problem.\nIn this paper, we identify one such weakness about the skip connections used by many state-of-theart DNNs. We first conduct a toy experiment with the BIM attack and ResNet-18 on the ImageNet validation dataset (Deng et al., 2009) to investigate how skip connections affect the adversarial strength of attacks crafted on the network. At each of the last 3 skip connections and residual modules of ResNet-18, we illustrate the success rate of attacks crafted using gradients backpropagate through either the skip connection or the residual module in Figure 1. As can be observed, the success rate drops more drastically whenever using gradients from a residual module instead of the skip connection. This implies that gradients from the skip connections are more vulnerable (high success rate). 
In addition, we surprisingly find that skip connections expose more transferable information. For example, the black-box success rate is even improved from 52.52% to 62.10% when the attack skips the last two residual modules (following the path in green color).\nMotivated by the above observations, in this paper we propose the Skip Gradient Method (SGM) to generate adversarial examples using gradients more from the skip connections than from the residual modules. In particular, SGM utilizes a decay factor to reduce gradients from the residual modules. We find that this simple adjustment to the gradient flow can generate highly transferable adversarial examples, and the more skip connections in a network, the more transferable the crafted attacks. This is in sharp contrast to the design principles (e.g., “going deeper” with skip connections) underpinning many modern DNNs. In particular, our main contributions are:\n• We identify one surprising property of skip connections in ResNet-like neural networks, i.e., they allow the easy generation of highly transferable adversarial examples.\n• We propose the Skip Gradient Method (SGM) to craft adversarial examples using gradients more from the skip connections. Using a single decay factor on gradients, SGM is an appealingly simple and generic technique that can be used by any existing gradient-based attack method.\n• We provide comprehensive transfer attack experiments, from different source models against 10 state-of-the-art DNNs, showing that SGM can greatly improve the transferability of crafted adversarial examples. When combined with existing transfer techniques, SGM improves the state-of-the-art transferability benchmarks by a large margin." }, { "heading": "2 RELATED WORK", "text": "Existing adversarial attacks can be categorized into two groups: 1) white-box attacks and 2) black-box attacks. In the white-box setting, the adversary has full access to the parameters of the target model, while in the black-box setting, the target model is kept secret from the adversary." }, { "heading": "2.1 WHITE-BOX ATTACKS", "text": "Given a clean example x with class label y and a target DNN model f, the goal of an adversary is to find an adversarial example x_adv that fools the network into making an incorrect prediction (i.e., f(x_adv) ≠ y), while still remaining within the ε-ball centered at x (i.e., ‖x_adv − x‖∞ ≤ ε).\nFast Gradient Sign Method (FGSM) (Goodfellow et al., 2014). FGSM perturbs a clean example x for one step by the amount of ε along the gradient direction:\nx_adv = x + ε · sign(∇_x ℓ(f(x), y)). (1)\nThe Basic Iterative Method (BIM) (Kurakin et al., 2016) is an iterative version of FGSM that perturbs for T steps with step size ε/T.\nProjected Gradient Descent (PGD) (Madry et al., 2018). PGD perturbs a normal example x for T steps with a smaller step size. After each step of perturbation, PGD projects the adversarial example back onto the ε-ball of x if it goes beyond the ε-ball:\nx_adv^{t+1} = Π_ε( x_adv^t + α · sign(∇_x ℓ(f(x_adv^t), y)) ), (2)\nwhere Π_ε(·) is the projection operation. Different from BIM, PGD allows step size α > ε/T.\nThere are also other types of white-box attacks, including sparsity-based methods such as the Jacobian-based Saliency Map Attack (JSMA) (Papernot et al., 2016), sparse attack (Modas et al., 2019) and one-pixel attack (Su et al., 2019), and optimization-based methods such as Carlini and Wagner (CW) (Carlini & Wagner, 2017) and elastic-net (EAD) (Chen et al., 2018).
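As a concrete reference for Equations (1) and (2), here is a minimal PyTorch sketch of L∞ PGD. The model, loss, pixel range in [0, 1], and tensor shapes are assumptions for illustration, not tied to any particular released implementation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=16/255, alpha=2/255, steps=10):
    """L-inf PGD (Eq. 2): ascend the loss gradient, then project back
    onto the eps-ball around the clean input x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # projection Pi_eps
        x_adv = x_adv.clamp(0, 1)                  # keep a valid image
    return x_adv.detach()
```

FGSM (Eq. 1) corresponds to steps=1 with alpha=eps and no projection needed.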
}, { "heading": "2.2 BLACK-BOX ATTACKS", "text": "Black-box attacks can be generated by either attacking a surrogate model or using gradient estimation methods in combination with queries to the target model. Gradient estimation methods estimate the gradients of the target model using black-box optimization methods such as Finite Differences (FD) (Chen et al., 2017; Bhagoji et al., 2018) or Natural Evolution Strategies (NES) (Ilyas et al., 2018; Jiang et al., 2019). These methods all require a large number of queries to the target model, which not only reduces efficiency but also potentially exposes the attack. Alternatively, black-box adversarial examples can be crafted on a surrogate model then applied to attack the target model. Although the white-box methods can be directly applied on the surrogate model, they are far less effective in the black-box setting (Dong et al., 2018; Xie et al., 2019). Several transfer techniques have been proposed to improve the transferability of black-box attacks.\nMomentum Iterative boosting (MI) (Dong et al., 2018). MI incorporates a momentum term into the gradient to boost the transferability:\nxt+1adv = Π ( xtadv + α · sign(gt+1) ) , gt+1 = µ · gt + ∇x`(f(x t adv), y)\n‖∇x`(f(xtadv), y)‖1 , (3)\nwhere gt is the adversarial gradient at the t-th step, α = /T is the step size for a total of T steps, µ is a decay factor, and ‖ · ‖1 is the L1 norm. Diverse Input (DI) (Xie et al., 2019). DI proposes to craft adversarial exampels using gradient with respect to the randomly-transformed input example:\nxt+1adv = Π ( xtadv + α · sign(∇x`(f(H(xtadv; p)), y)) ) , (4)\nwhere H(xtadv; p) is a stochastic transformation function on x t adv for a given probability p.\nTranslation Invariant (TI) (Dong et al., 2019). TI targets to evade robustly trained DNNs by generating adversarial examples that are less sensitive to the discriminative regions of the surrogate model. More specifically, TI computes the gradients with respect to a set of translated versions of the original input:\nxt+1adv = Π ( xtadv + α · sign(W ∗ ∇x`(f(xtadv), y)) ) , (5)\nwhere W is a predefined kernel (e.g., uniform, linear, and Gaussian) matrix of size (2k + 1)(2k + 1) (k being the maximal number of pixels to shift). This kernel convolution is equivalent to the weighted sum of gradients over (2k + 1)2 number of shifted input examples.\nFurthermore, there are other studies focusing on intermediate feature representations. For example, Activation Attack (Inkawhich et al., 2019) drives the activation of a specified layer on a given image towards the layer of a target image, to yield a highly transferable targeted example. Intermediate Level Attack (Huang et al., 2019) attempts to fine-tune an existing adversarial example for greater black-box transferability by increasing its perturbation on a pre-specified layer of the source model.\nAlthough the above transfer techniques are effective, they (including white-box attacks) either 1) treat the network (either the surrogate model or the target model) as a single component or 2) only use the intermediate layer output of the network. In other words, they do not directly consider the effects of different DNN architectural characteristics. Li et al. (2018) investigated the use of skip connections and dropout layers for sampling networks, which generates a huge set of ghost networks to perform an ensemble attack. Here, we focus on the architectural property of skip connections from the gradient view without modifying or generating any extra networks." 
}, { "heading": "3 PROPOSED SKIP GRADIENT ATTACK", "text": "In this section, we first introduce the gradient decomposition of skip connection and residual module. Following that, we propose our Skip Gradient Method (SGM), then demonstrate the adversarial transferability property of skip connection via a case study." }, { "heading": "3.1 GRADIENT DECOMPOSITION WITH SKIP CONNECTIONS", "text": "In ResNet-like neural networks, a skip connection uses identity mapping to bypass residual layers, allowing data flow from a shallow layer directly to subsequent deep layers. Thus, we can decompose the network into a collection of paths of different lengths (Veit et al., 2016). We denote a skip connection together with its associated residual module as a building block (residual block) of a network. Considering three successive building blocks (eg. zi+1 = zi + fi+1(zi)) in a residual network from input z0 to output z3, the output z3 can be expanded as:\nz3 = z2 + f3(z2) = [z1 + f2(z1)] + f3(z1 + f2(z1)) = [z0 + f1(z0) + f2(z0 + f1(z0))] + f3 ( (z0 + f1(z0)) + f2(z0 + f1(z0)) ) .\n(6)\nAccording to the chain rule in calculus, the gradient of a loss function ` with respect to input z0 can then be decomposed as,\n∂`\n∂z0 =\n∂`\n∂z3 ∂z3 ∂z2 ∂z2 ∂z1 ∂z1 ∂z0 = ∂` ∂z3 (1 + ∂f3 ∂z2 )(1 + ∂f2 ∂z1 )(1 + ∂f1 ∂z0 ). (7)\nExtending this toy example to a network with L residual blocks, the gradient can be decomposed from L-th to the (l + 1)-th (0 ≤ l < L) residual block as,\n∂` ∂x = ∂`\n∂zL L−1∏ i=l (∂fi+1 ∂zi + 1 )∂zl ∂x . (8)\nThe example illustrated in Figure 1 is a the above decomposition of a ResNet-18 at the last 3 building blocks (l = L− 3)." }, { "heading": "3.2 SKIP GRADIENT METHOD (SGM)", "text": "In order to use more gradient from the skip connections, here, we introduce a decay parameter into the decomposed gradient to reduce the gradient from the residual modules. Following the decomposition in Equation (8), the “skipped” gradient is,\n∇x` = ∂`\n∂zL L−1∏ i=0 ( γ ∂fi+1 ∂zi + 1 )∂z0 ∂x , (9)\nwhere z0 = x is the input of the network, and γ ∈ (0, 1] is the decay parameter. Accordingly, given a clean example x and a DNN model f , an adversarial example can be crafted iteratively by,\nxt+1adv = Π ( xtadv + α · sign ( ∂` ∂zL L−1∏ i=0 (γ ∂fi+1 ∂zi + 1) ∂z0 ∂x )) . (10)\nSGM is a generic technique that can be easily implemented on any neural network that has skip connections. During the backpropagation process, SGM simply multiplies the decay parameter to the gradient whenever it passes a residual module. Therefore, SGM does not require any computation overhead, and works efficiently even on densely connected networks such as DenseNets. The reduction of residual gradients is accumulated along the backpropagation path, that is, the residual gradients at lower layers will be reduced more times than those at higher layers. This is because, compared to high-level features, low-level features have already been well preserved by skip connections (see feature decompositions in Equation (6))." }, { "heading": "3.3 ADVERSARIAL TRANSFERABILITY WITH SKIP CONNECTIONS: A CASE STUDY", "text": "To demonstrate the adversarial transferability of skip connections, we conduct a case study on 10- step PGD, and their corresponding SGM versions, to investigate the success rates of black-box attacks crafted with or without manipulating the skip connections. 
}, { "heading": "3.3 ADVERSARIAL TRANSFERABILITY WITH SKIP CONNECTIONS: A CASE STUDY", "text": "To demonstrate the adversarial transferability of skip connections, we conduct a case study on 10-step PGD and its corresponding SGM version, investigating the success rates of black-box attacks crafted with or without manipulating the skip connections. The black-box attacks are generated on 8 different source (surrogate) models, ResNet(RN)-18/34/50/101/152 and DenseNet(DN)-121/169/201, then applied to attack an Inception V3 target model. All models were trained on the ImageNet training set. We randomly select 5000 ImageNet validation images that are correctly classified by all source models, and craft untargeted attacks under maximum L∞ perturbation ε = 16, which is a typical black-box setting (Dong et al., 2018; Xie et al., 2019; Dong et al., 2019). The step size of PGD was set to α = 2, and the decay parameter of SGM was set to γ = 0.5.\nWe run the attack 5 times with different random seeds, and report the success rates (transferability) of the different methods in Table 1. As can be seen, when the skip connections are manipulated with our SGM, the transferability of PGD is greatly improved across all source models. On all source models except RN18, the improvements are more than 13%. Without SGM, the best transferability against the Inception V3 target model is 35.48%, achieved by PGD on DN201; this is improved further by our proposed SGM to 65.38% (> 29% gain). This not only highlights the surprising property of skip connections in terms of the generation of highly transferable attacks, but also indicates the significance of this property, as such a huge boost in transferability only takes a single decay factor.\nThe 8 source models can be interpreted as belonging to 3 ResNet families: 1) RN18/34 are ResNets with normal residual blocks, 2) RN50/101/152 are ResNets with “bottleneck” residual blocks, and 3) DN121/169/201 are densely connected ResNets. Another important observation is that when there are more skip connections in a network within the same ResNet family (e.g., RN34 > RN18, RN152 > RN101 > RN50, and DN201 > DN169 > DN121), or from ResNets to DenseNets (e.g., DN121/169/201 > RN18/34 and DN121/169/201 > RN50/101/152), the crafted adversarial examples become more transferable, especially when the skip connections are manipulated by our SGM. This raises questions about the design principle behind many state-of-the-art DNNs: “going deeper” with techniques like skip connections and 1×1 convolutions." }, { "heading": "4 COMPARISON TO EXISTING TRANSFER ATTACKS", "text": "In this section, we compare the transferability of adversarial examples crafted by our proposed SGM and by existing methods on ImageNet, against both unsecured and secured target models.\nBaselines. We compare SGM with FGSM, PGD, and 3 state-of-the-art transfer attacks: (1) Momentum Iterative (MI) (Dong et al., 2018), (2) Diverse Input (DI) (Xie et al., 2019), and (3) Translation Invariant (TI) (Dong et al., 2019). Note that the TI attack was originally proposed to attack secured models, although here we use TI against both unsecured and secured models. For TI and our SGM, we test both the one-step and the iterative versions; the other methods DI and MI only have iterative versions. The number of iterations is set to 10 and 20 for unsecured and secured target models respectively. For all iterative methods PGD, TI and our SGM, the step size is set to α = 2. For our proposed SGM, the decay parameter is set to γ = 0.2 (0.5) and γ = 0.5 (0.7) on ResNet and DenseNet source models in PGD (FGSM) respectively. For simplicity, we use SGM to denote FGSM+SGM in one-step attacks, and PGD+SGM in multi-step attacks. Other parameters of existing methods are configured as in their original papers.\nThreat Model. 
We adopt a black-box threat model in which adversarial examples are generated by attacking a source model and then applied to attack the target model. The target model is of a different architecture (indicated by the model name) from the source model, except when the source and target models are of the same architecture, in which case we directly use the source model as the target model (equivalent to a white-box setting). The attacks are crafted on 5000 randomly selected ImageNet validation images that are classified correctly by all source models, and are repeated 5 times with different random seeds. For all attack methods, we follow the standard setting (Dong et al., 2018; Xie et al., 2019) to craft untargeted attacks under maximum L∞ perturbation ε = 16 with respect to pixel values in [0, 255].\nTarget Models. We consider two types of target models: 1) unsecured models trained on the ImageNet training set using traditional training; and 2) secured models trained using adversarial training. As unsecured target models, we choose 7 state-of-the-art DNNs: VGG19 (with batch normalization) (Simonyan & Zisserman, 2015), ResNet-152 (RN152) (He et al., 2016a), DenseNet-201 (DN201), the 154-layer Squeeze-and-Excitation network (SE154) (Hu et al., 2018), Inception V3 (IncV3) (Szegedy et al., 2016), Inception V4 (IncV4) (Szegedy et al., 2017) and Inception-ResNet V2 (IncResV2) (Szegedy et al., 2017). As secured target models, we consider 3 robustly trained DNNs using ensemble adversarial training (Tramèr et al., 2018): IncV3ens3 (ensemble of 3 IncV3 networks), IncV3ens4 (ensemble of 4 IncV3 networks) and IncResV2ens3 (ensemble of 3 IncResV2 networks).\nSource Models. We choose 8 different source models from the ResNet family: ResNet(RN)-18/34/50/101/152 and DenseNet(DN)-121/169/201. Whenever the input size of the source model does not match that of the target model, we resize the crafted adversarial images to the input size of the target model. For VGG19, ResNet and DenseNet models, images are cropped and resized to 224×224, while for Inception/Inception-ResNet models, images are cropped and resized to 299×299." }, { "heading": "4.1 TRANSFERABILITY AGAINST UNSECURED MODELS", "text": "We first investigate the transferability of all attack methods against the 7 unsecured models, i.e., we look for the method that generates the most transferable attacks from one source model against all target models.\nOne-step Transferability. The one-step transferability is measured by the success rate of one-step attacks, as reported in Table 2. Here, we only show the results on two source models: 1) RN152, the best ResNet source model with the highest average success rate against all target models, and 2) DN201, the best DenseNet source model. Also note that, when the source and target models are the same, the result represents the white-box success rate. Overall, adversarial examples crafted on DN201 have significantly better transferability than those crafted on RN152, especially for our SGM method. This is because there are ∼30× more skip connections that can be manipulated by our SGM in DN201 compared to RN152. In comparison to both FGSM and TI, transferability is improved considerably by SGM in almost all test scenarios, except when transferring from RN152 to VGG19/IncV3/IncV4, where SGM is outperformed by TI. This implies that, when transferring across different architectures (e.g., ResNet → VGG/Inception), translation adaptation may help increase the transferability of one-step perturbations. 
However, this advantage of TI disappears when there are more skip connections, as is the case for the DN201 source model.\nMulti-step Transferability. We first provide a detailed study of the transferability of all attack methods from the 8 source models to 3 representative unsecured target models. We then compare the different attack methods on the two best source models against all unsecured target models: the best ResNet source model and the best DenseNet source model. The multi-step (i.e., 10-step) transferability from all source models to the three representative target models (VGG19, SE154 and IncV3) is illustrated in Figure 2. In all transfer scenarios, our proposed SGM outperforms existing methods consistently on almost all source models except RN18. Adversarial attacks crafted by SGM become more transferable when there are more skip connections in the source model (e.g., from RN18 to DN201). An interesting observation is that, when the target model is shallow such as VGG19 (left figure in Figure 2), shallow source models transfer better; however, when the target model is deep such as SE154 and IncV3 (middle and right figures in Figure 2), deeper source models tend to have better transferability. We suspect this is due to the architectural similarities shared by the target and source models. Note that against the VGG19 target model, the success rates of the baseline methods all drop significantly when the ResNet source models become more complex (from RN18 to RN152). The small variations at the RN50 and DN121 source models may be caused by the architectural differences between RN18/34, which consist of normal residual blocks, RN50/101/152, which consist of “bottleneck” residual blocks, and DN121/169/201, which have dense skip connections.\nResults for the best source models RN152 and DN201 against the unsecured target models are reported in Table 3. The proposed SGM attack outperforms existing methods by a large margin consistently against different target models. Particularly, for the transfer DN201 → SE154 (a recent state-of-the-art DNN with only 2.251% top-5 error on ImageNet), SGM achieves a success rate of 72.03%, which is > 7% and > 10% higher than MI and DI respectively.\nCombining with Existing Methods. We further demonstrate that the adversarial transferability of skip connections can be exploited in combination with existing techniques. The experiments are conducted on DN201 (the best source model in the above multi-step experiments); the TI attack is excluded as it was originally proposed against secured models and demonstrates limited improvement over PGD against unsecured models. The results are reported in Table 4*. The transferability of MI and DI is improved remarkably, by 11.98%∼21.98%, when combined with SGM. When combined with both MI and DI, SGM improves the state-of-the-art (MI+DI) transferability by a huge margin consistently against all target models. In particular, SGM pushes the new state-of-the-art to at least 80.52%, which previously was only 71%. This illustrates that skip connections can be easily manipulated to craft highly transferable attacks against many state-of-the-art DNN models.\n*For simplicity, we omit the standard deviations here as they are very small and hardly affect the results." }, { "heading": "4.2 TRANSFERABILITY AGAINST ROBUSTLY TRAINED MODELS", "text": "The success rates of our SGM and other baseline methods against the 3 secured target models are reported in Table 5. 
Overall, with translation adaptation specifically designed for evading adversarially trained models, TI achieves the best standalone transferability, while SGM is the second best, with higher success rates than PGD, MI or DI. When combined with TI, SGM also improves the TI attack by a considerable margin across all transfer scenarios. This indicates that, although manipulating the skip connections alone may not be sufficient to attack secured models, it can still make existing attacks more powerful. One interesting observation is that attacks crafted here on RN152 are more transferable than those crafted on DN201, which is the opposite of what we observed when attacking unsecured models." }, { "heading": "4.3 A CLOSER LOOK AT SGM", "text": "In this part, we conduct more experiments to investigate the gradient decay factor of our proposed SGM, and explore the potential use of SGM in ensemble-based attacks and white-box attacks.\nEffect of Residual Gradient Decay γ. We test the transferability of our proposed SGM with varying decay parameter γ ∈ [0.1, 1.0], where γ = 1.0 means no decay on the residual gradients. The attacks are crafted by 10-step SGM on 5000 random ImageNet validation images. The results against 3 target models (VGG19, SE154 and IncV3) are illustrated in Figure 3. As can be observed, the trends are very consistent across different target models. On DenseNet source models, decreasing the decay parameter (increasing the decay strength) tends to improve transferability until γ drops below a certain threshold, e.g., γ = 0.5. This is because the decay encourages the attack to focus on more transferable low-level information; however, the attack becomes less effective if all high-level class-relevant information is ignored. On ResNet source models, decreasing the decay parameter constantly improves transferability for γ ≥ 0.2. Compared to DenseNet source models, ResNets require more decay on the residual gradients: since skip connections reveal more transferable information of the source model, ResNets need a stronger penalty on the residual gradients to increase the relative importance of the skip gradients.\nAs for selecting γ when the target model is unknown, from Figure 3 and Appendix C we can see that the influence of γ is more related to the source model than to the target model; that is, given a source model, the best γ against different target models is generally the same. This makes the selection of γ quite straightforward: choose the best γ on the source model(s). For instance, in Figure 3, suppose the unknown target model is SE154 (middle figure); the adversary could tune γ on source model DN201 to attack VGG19 (left figure) and find the best γ = 0.5. The attacks crafted on DN201 with γ = 0.5 indeed achieve the best success rate against the SE154 target model (and other target models).\nEnsemble-based Attacks. It has been shown that attacking multiple source models simultaneously can improve the transferability of the crafted adversarial examples, and this is commonly adopted in practice. We follow the ensemble-based strategy (Liu et al., 2017) and craft attacks on an ensemble of RN34, RN152 and DN201. According to the discussion above, we select the best γ individually for each source model: we choose γ for sources RN34 and RN152 against target DN201, and γ for source DN201 against target RN152. 
The success rates (transferability) against the 7 unsecured models and 3 secured models are reported in Table 6 and Table 7 respectively. Similar to the results of single-source-model attacks, against unsecured target models SGM has standalone performance similar to DI and better than the other methods (except in the two “white-box” scenarios against RN152 and DN201). When combined with other methods, e.g., DI, it improves the success rates again by a large margin.\nAgainst secured models, SGM achieves the second best standalone transferability, with TI still the best. When combined with TI, SGM improves the success rate by ∼10% consistently against all secured target models. In particular, against IncV3ens3, TI+SGM achieves a higher success rate (87.65%) than that reported in Dong et al. (2019) (84.8%), even though only 3 source models are used here and the source models (RN34, RN152 and DN201) are all of a different architecture from the IncV3 target model (Dong et al. (2019) use 6 source models, including even the IncV3 model). From all aspects analyzed above, the existence of skip connections makes transferable attacks much easier to craft in practice.\nImproving Weak White-box Attacks. In addition to black-box transferability, we next show that SGM can also improve the weak (one-step) white-box attack FGSM. Note that the one-step version of SGM is equivalent to FGSM plus residual gradient decay. Our experiments are conducted on the 8 source models, and the white-box success rates under maximum L∞ perturbation ε = 8 (a typical white-box setting) are shown in Figure 4a. As can be observed, using SGM helps improve the adversarial strength (i.e., higher success rate). We then vary the maximum perturbation ε ∈ [1, 64], and show the results on ResNet and DenseNet models separately in Figure 4b and Figure 4c. Compared to FGSM, SGM always gives better adversarial strength, except when ε is extremely small (ε ≤ 2). When the perturbation space becomes infinitely small, the loss landscape within the space becomes flat and the gradient points in the optimal perturbation direction. However, when the perturbation space expands, the one-step gradient becomes less accurate due to changes in the loss landscape (the success rate decreases as ε increases from 4 to 16), and in this case, the skip gradient, which contains more low-level information, is more reliable than the residual gradient (the improvement is more significant for ε ∈ [4, 16]). Another interesting observation is that adversarial strength decreases as the model becomes more complex, from RN18 to RN152, or DN121 to DN201. This is likely because the loss landscape of complex models is steeper than that of shallow models, making the one-step gradient less reliable." }, { "heading": "5 CONCLUSION", "text": "In this paper, we have identified a surprising property of the skip connections used by many state-of-the-art ResNet-like neural networks, that is, they can be easily used to generate highly transferable adversarial examples. To demonstrate this architectural “weakness”, we proposed the Skip Gradient Method (SGM) to craft adversarial examples using more gradients from the skip connections than from the residual modules, via a decay factor on gradients. We conducted a series of transfer attack experiments with 8 source models and 10 target models, including 7 unsecured and 3 secured models, and showed that attacks crafted by SGM have significantly better transferability than those crafted by existing methods. 
When combined with existing techniques, SGM can also boost state-of-the-art transferability by a huge margin. We believe the high adversarial transferability enabled by skip connections is due to the fact that they expose extra low-level information which is more transferable across different DNNs. Our findings in this paper not only remind researchers in adversarial research to pay attention to the architectural vulnerability of DNNs, but also raise new challenges for secure DNN architecture design." }, { "heading": "ACKNOWLEDGEMENT", "text": "Shu-Tao Xia is supported in part by the National Key Research and Development Program of China under Grant 2018YFB1800204, the National Natural Science Foundation of China under Grant 61771273, the R&D Program of Shenzhen under Grant JCYJ20180508152204044, and the research fund of PCL Future Regional Network Facilities for Large-scale Experiments and Applications (PCL2018KP001)." }, { "heading": "B COMPARISON WITH PREVIOUSLY PUBLISHED RESULTS", "text": "In this section, we compare the experimental settings of previous works and ours, and discuss some small discrepancies between the baseline performance reported here and in previous works.

Tables 8 and 9 summarize these differences for single-source and ensemble-based attacks respectively. Out of all these works (Dong et al., 2018; 2019; Xie et al., 2019), the results reported in Xie et al. (2019) are the most complete. Our reported success rates of baseline attacks (e.g. MI and DI) match those reported in Xie et al. (2019), and are sometimes even higher. The slight discrepancy is caused by differences in experimental settings. Table 10 summarizes the different source models used by baseline attacks, and Table 11 summarizes the differences in dataset, number of test images, input image size, maximum L∞ perturbation ε, number of attack steps N and attack step size α. Compared to the 299 × 299 image size, here we use the more standard image size 224 × 224 on ImageNet. The use of a smaller input size may reduce the effectiveness of existing attacks (Simon-Gabriel et al., 2019).

In another work by Liu et al. (2017), an 81% success rate was reported for an optimization-based attack crafted on ResNet-152 against the target VGG16, which is higher than our 65.52% from ResNet-152 to VGG19. This is because they did not restrict the maximum perturbation ε. The root mean square deviation (RMSD) of their attacks is 22.83, which indicates that many pixels are perturbed by more than 16 pixel values. In our experiments, the RMSD is 6.29 for PGD, 7.71 for SGM, and 12.55 for MI. This appears to be another reason for the performance discrepancy. Note that the advantage of a bounded small perturbation is increased imperceptibility to human observers (see Figure 5).

For proper implementation, we use open-source code and pretrained models for our experiments, e.g., AdverTorch (Ding et al., 2019) for FGSM, PGD and MI, and source/target models from three GitHub repositories†‡§. We reproduced DI and TI in PyTorch.

†https://github.com/Cadene/pretrained-models.pytorch
‡https://github.com/tensorflow/models/tree/master/research/slim
§https://github.com/tensorflow/models/tree/master/research/adv_imagenet_models

C TRANSFERABILITY OF DECAY PARAMETER γ

In this section, we study the “transferability” of the decay parameter γ across different target models. RN152 and DN201 are used as the source models, and the target model is varied to observe trends. As indicated in Figure 6, all black-box target models share the same best selection of γ, which makes γ selection quite simple.
Even if the true target model is unknown, the adversary can tune γ for a ResNet-like source model against any other available model and still obtain the best selection." } ]
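A minimal sketch of this γ selection loop, building on the sgm_attack and ToyResidualBlock sketches above; the candidate grid and the idea of a single surrogate model are illustrative assumptions, not the paper's exact protocol.

```python
import torch

def transfer_success_rate(target_model, x_adv, y):
    """Fraction of adversarial examples that push the target model's prediction away from y."""
    with torch.no_grad():
        pred = target_model(x_adv).argmax(dim=1)
    return (pred != y).float().mean().item()

def select_gamma(source_ctor, surrogate_model, x, y, gammas=(0.2, 0.3, 0.5, 0.7, 1.0)):
    """Pick the decay factor whose SGM attacks transfer best to a surrogate model;
    source_ctor(gamma) builds the source model with residual-gradient decay gamma."""
    best_gamma, best_rate = None, -1.0
    for gamma in gammas:
        source = source_ctor(gamma)
        x_adv = sgm_attack(source, x, y)  # craft on the source model
        rate = transfer_success_rate(surrogate_model, x_adv, y)
        if rate > best_rate:
            best_gamma, best_rate = gamma, rate
    return best_gamma
```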
2,020
null
SP:b5d287a76a010838b352f4ec537fd83ef1f064cc
[ "The paper investigates the role of learning rate decay in neural network training. While there are prevalent ideas of how/why learning rate decay help both optimization and generalization of neural networks, this work proposes interpretation based on pattern complexity. The mechanism the paper proposes is that initial learning rate helps ignore noise in the beginning and decayed learning rate help to learn complex patterns. ", "This paper investigates the way decaying the learning rate helps the training of neural networks. First the paper discusses about other existing hypothesis such as the “Gradient Descent Hypothesis” by Lecun et al 1991 and SGD explanation by Kleinberg et al 2018. Then the paper tries to find contradicting examples against those two hypothesis with experiments. Then they propose their explanation which suggests that initially fitting noisy data and then decaying it helps it to learn more complex data. Then the paper tries to experimentally explain why the other explanations fail and theirs is better." ]
Learning rate decay (lrDecay) is a de facto technique for training modern neural networks. It starts with a large learning rate and then decays it multiple times. It is empirically observed to help both optimization and generalization. Common beliefs in how lrDecay works come from the optimization analysis of (Stochastic) Gradient Descent: 1) an initially large learning rate accelerates training or helps the network escape spurious local minima; 2) decaying the learning rate helps the network converge to a local minimum and avoid oscillation. Despite the popularity of these common beliefs, experiments suggest that they are insufficient in explaining the general effectiveness of lrDecay in training modern neural networks that are deep, wide, and nonconvex. We provide another novel explanation: an initially large learning rate suppresses the network from memorizing noisy data while decaying the learning rate improves the learning of complex patterns. The proposed explanation is validated on a carefully-constructed dataset with tractable pattern complexity. And its implication, that additional patterns learned in later stages of lrDecay are more complex and thus less transferable, is justified on real-world datasets. We believe that this alternative explanation will shed light on the design of better training strategies for modern neural networks.
[]
[ { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machinelearning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Yoshua Bengio" ], "title": "Practical Recommendations for Gradient-Based Training of Deep Architectures", "venue": null, "year": 2012 }, { "authors": [ "Nils Bjorck", "Carla P Gomes", "Bart Selman", "Kilian Q Weinberger" ], "title": "Understanding Batch Normalization", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "JMLR, 12(Jul):2121–2159,", "year": 2011 }, { "authors": [ "Mathias Eitz", "James Hays", "Marc Alexa" ], "title": "How do humans sketch objects", "venue": "ACM ToG,", "year": 2012 }, { "authors": [ "Gregory Griffin", "Alex Holub", "Pietro Perona" ], "title": "Caltech-256 object category dataset", "venue": null, "year": 2007 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Gao Huang", "Yixuan Li", "Geoff Pleiss", "Zhuang Liu", "John E. Hopcroft", "Kilian Q. Weinberger" ], "title": "Snapshot ensembles: Train 1, get m for free", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Kilian Q. Weinberger", "Laurens van der Maaten" ], "title": "Densely connected convolutional networks", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", "venue": "In ICML, pp", "year": 2015 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A Method for Stochastic Optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Bobby Kleinberg", "Yuanzhi Li", "Yang Yuan" ], "title": "An Alternative View: When Does SGD Escape Local Minima", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Jonas Kohler", "Hadi Daneshmand", "Aurelien Lucchi", "Thomas Hofmann", "Ming Zhou", "Klaus Neymeyr" ], "title": "Exponential convergence rates for Batch Normalization: The power of length-direction decoupling in non-convex optimization", "venue": "In AISTATS,", "year": 2019 }, { "authors": [ "Simon Kornblith", "Jonathon Shlens", "Quoc V. Le" ], "title": "Do better imagenet models transfer better", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Yann LeCun", "Ido Kanter", "Sara A. Solla" ], "title": "Second Order Properties of Error Surfaces: Learning Time and Generalization", "venue": "In NIPS,", "year": 1991 }, { "authors": [ "Yann A. LeCun", "Léon Bottou", "Genevieve B. 
Orr", "Klaus-Robert Müller" ], "title": "Efficient BackProp", "venue": null, "year": 1998 }, { "authors": [ "Yuanzhi Li", "Colin Wei", "Tengyu Ma" ], "title": "Towards Explaining the Regularization Effect of Initial Large Learning Rate in Training Neural Networks", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Zhuang Liu", "Mingjie Sun", "Tinghui Zhou", "Gao Huang", "Trevor Darrell" ], "title": "Rethinking the value of network pruning", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Gunnar Raetsch", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Liangchen Luo", "Yuanhao Xiong", "Yan Liu", "Xu Sun" ], "title": "Adaptive Gradient Methods with Dynamic Bound of Learning Rate", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Karttikeya Mangalam", "Vinay Uday Prabhu" ], "title": "Do deep neural networks learn shallow learnable examples first", "venue": "In ICML workshop,", "year": 2019 }, { "authors": [ "Maxime Oquab", "Leon Bottou", "Ivan Laptev", "Josef Sivic" ], "title": "Learning and Transferring Mid-Level Image Representations using Convolutional Neural Networks", "venue": "In CVPR,", "year": 2014 }, { "authors": [ "Razvan Pascanu", "Tomas Mikolov", "Yoshua Bengio" ], "title": "On the difficulty of training recurrent neural networks", "venue": "In ICML, pp", "year": 2013 }, { "authors": [ "A. Quattoni", "A. Torralba" ], "title": "Recognizing indoor scenes", "venue": "In CVPR,", "year": 2009 }, { "authors": [ "Maithra Raghu", "Chiyuan Zhang", "Jon Kleinberg", "Samy Bengio" ], "title": "Transfusion: Understanding Transfer Learning for Medical Imaging", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Sashank J. Reddi", "Satyen Kale", "Sanjiv Kumar" ], "title": "On the Convergence of Adam and Beyond", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": null, "year": 2015 }, { "authors": [ "Shibani Santurkar", "Dimitris Tsipras", "Andrew Ilyas", "Aleksander Madry" ], "title": "How Does Batch Normalization Help Optimization", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel", "Timothy Lillicrap", "Karen Simonyan", "Demis Hassabis" ], "title": "A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play", "venue": null, "year": 2018 }, { "authors": [ "Leslie N. 
Smith" ], "title": "Cyclical learning rates for training neural networks", "venue": "In WACV,", "year": 2017 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V Le" ], "title": "Sequence to Sequence Learning with Neural Networks", "venue": "In NIPS, pp", "year": 2014 }, { "authors": [ "Ashia C Wilson", "Rebecca Roelofs", "Mitchell Stern", "Nati Srebro", "Benjamin Recht" ], "title": "The Marginal Value of Adaptive Gradient Methods in Machine Learning", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Peng Xu", "Bryan He", "Christopher De Sa", "Ioannis Mitliagkas", "Chris Re" ], "title": "Accelerated Stochastic Power Iteration", "venue": "In AISTATS,", "year": 2018 }, { "authors": [ "Zhewei Yao", "Amir Gholami", "Qi Lei", "Kurt Keutzer", "Michael W Mahoney" ], "title": "Hessian-based Analysis of Large Batch Training and Robustness to Adversaries", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "Jason Yosinski", "Jeff Clune", "Yoshua Bengio", "Hod Lipson" ], "title": "How transferable are features in deep neural networks", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide Residual Networks", "venue": "In BMVC,", "year": 2016 }, { "authors": [ "Matthew D. Zeiler" ], "title": "ADADELTA: An Adaptive Learning Rate Method", "venue": "[cs],", "year": 2012 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In ICLR,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Modern neural networks are deep, wide, and nonconvex. They are powerful tools for representation learning and serve as core components of deep learning systems. They are top-performing models in language translation (Sutskever et al., 2014), visual recognition (He et al., 2016), and decision making (Silver et al., 2018). However, the understanding of modern neural networks is way behind their broad applications. A series of pioneering works (Zhang et al., 2017; Belkin et al., 2019; Locatello et al., 2019) reveal the difficulty of applying conventional machine learning wisdom to deep learning. A better understanding of deep learning is a major mission in the AI field.\nOne obstacle in the way of understanding deep learning is the existence of magic modules in modern neural networks and magic tricks to train them. Take batch normalization module (Ioffe & Szegedy, 2015) for example, its pervasiveness in both academia and industry is undoubted. The exact reason why it expedites training and helps generalization, however, remains mysterious and is actively studied in recent years (Bjorck et al., 2018; Santurkar et al., 2018; Kohler et al., 2019). Only when we clearly understand these magical practices can we promote the theoretical understanding of modern neural networks.\nLearning rate is “the single most important hyper-parameter” (Bengio, 2012) in training neural networks. Learning rate decay (lrDecay) is a de facto technique for training modern neural networks, where we adopt an initially large learning rate and then decay it by a certain factor after pre-defined epochs. Popular deep networks such as ResNet (He et al., 2016), DenseNet (Huang et al., 2017b) are all trained by Stochastic Gradient Descent (SGD) with lrDecay. Figure 1(a) is an example of lrDecay, with the learning rate decayed by 10 every 30 epochs. The training is divided into several stages by the moments of decay. These stages can be easily identified in learning curves (such as Figure 1(b)), where the performance boosts sharply shortly after the learning rate is decayed. The lrDecay enjoys great popularity due to its simplicity and general effectiveness.\nCommon beliefs in how lrDecay works are derived from the optimization analysis in (Stochastic) Gradient Descent (LeCun et al., 1991; Kleinberg et al., 2018). They attribute the effect of an initially\nlarge learning rate to escaping spurious local minima or accelerating training and attribute the effect of decaying the learning rate to avoiding oscillation around local minima. However, these common beliefs are insufficient to explain our empirical observations from a series of carefully-designed experiments in Section 4.\nIn this paper, we provide an alternative view: the magnitude of the learning rate is closely related to the complexity of learnable patterns. From this perspective, we propose a novel explanation for the efficacy of lrDecay: an initially large learning rate suppresses the memorization of noisy data while decaying the learning rate improves the learning of complex patterns. This is validated on a carefully-constructed dataset with tractable pattern complexity. The pattern complexity in realworld datasets is often intractable. We thus validate the explanation by testing its implication on real-world datasets. The implication that additional patterns learned in later stages of lrDecay are more complex and thus less transferable across different datasets, is also justified empirically. 
A comparison between the proposed explanation and the common beliefs is summarized in Table 1. Our explanation is supported by carefully-designed experiments and provides a new perspective on analyzing learning rate decay.\nThe contribution of this paper is two-fold:\n• We demonstrate by experiments that existing explanations of how lrDecay works are insufficient in explaining the training behaviors in modern neural networks. • We propose a novel explanation based on pattern complexity, which is validated on a dataset with tractable pattern complexity, and its implication is validated on real-world datasets.\nThe explanation also suggests that complex patterns are only learnable after learning rate decay. Thus, when the model learns all simple patterns, but the epoch to decay has not reached, immediately decaying the learning rate will not hurt the performance. This implication is validated in Section A.1." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 UNDERSTANDING THE BEHAVIOR OF SGD", "text": "Recently, researchers reveal the behavior of SGD from multiple perspectives (Li et al., 2019; Mangalam & Prabhu, 2019; Nakkiran et al., 2019). They respect the difference among data items rather\nthan treat them as identical samples from a distribution. They study the behavior of SGD in a given dataset. In Mangalam & Prabhu (2019), they show that deep models first learn easy examples classifiable by shallow methods. The mutual information between deep models and linear models is measured in Nakkiran et al. (2019), which suggests deep models first learn data explainable by linear models. Note that they are not relevant to learning rates. Li et al. (2019) analyze a toy problem to uncover the regularization effect of an initially large learning rate. Their theoretical explanation is, however, based on a specific two-layer neural network they design. Different from these works, Section 5 studies the behavior of SGD induced by lrDecay in a modern WideResNet (Zagoruyko & Komodakis, 2016), finding that learning rate decay improves learning of complex patterns. We formally define pattern complexity by expected class conditional entropy, while the measure of pattern complexity in Mangalam & Prabhu (2019); Nakkiran et al. (2019) relies on an auxiliary model." }, { "heading": "2.2 ADAPTIVE LEARNING RATE METHODS", "text": "Adaptive learning rate methods such as AdaGrad (Duchi et al., 2011), AdaDelta (Zeiler, 2012), and ADAM (Kingma & Ba, 2015) are sophisticated optimization algorithms for training modern neural networks. It remains an active research field to study their behaviors and underlying mechanisms (Reddi et al., 2018; Luo et al., 2019). However, we focus on learning rate decay in SGD rather than on the adaptive learning rate methods. On the one hand, SGD is the de facto training algorithm for popular models (He et al., 2016; Huang et al., 2017b) while lrDecay is not common in the adaptive methods; On the other hand, many adaptive methods are not as simple as SGD and even degenerate in convergence in some scenarios (Wilson et al., 2017; Liu et al., 2019). We choose to study SGD with lrDecay, without introducing adaptive learning rate to keep away from its confounding factors." }, { "heading": "2.3 OTHER LEARNING RATE STRATEGIES", "text": "Besides the commonly used lrDecay, there are other learning rate strategies. Smith (2017) proposes a cyclic strategy, claiming to dismiss the need for tuning learning rates. Warm restart of learning rate is explored in Loshchilov & Hutter (2017). 
They achieve better results when combined with Snapshot Ensemble (Huang et al., 2017a). These learning rate strategies often yield better results at the cost of additional hyperparameters that are not intuitive. Consequently, it is still the de facto practice to decay the learning rate after pre-defined epochs as in Figure 1(a). We restrict our analysis to lrDecay rather than to these fancier strategies because of its simplicity and general effectiveness." }, { "heading": "2.4 TRANSFERABILITY OF DEEP MODELS", "text": "Training a model on one dataset such that it can be transferred to other datasets has long been a goal of AI research. The exploration of model transferability has attracted extensive attention. In Oquab et al. (2014), deep features trained for classification are successfully transferred to improve object detection. Yosinski et al. (2014) study the transferability of different modules in pre-trained networks, indicating that higher layers are more task-specific and less transferable across datasets. By varying network architectures, Kornblith et al. (2019) show that architectures with better ImageNet accuracy generally transfer better. Raghu et al. (2019) explore transfer learning in the field of medical imaging to address domain-specific difficulties. Different from these works, which only consider the transferability of models after training, we investigate another dimension of model transferability in Section 6: the evolution of transferability during training with lrDecay." }, { "heading": "3 COMMON BELIEFS IN EXPLAINING LRDECAY", "text": "" }, { "heading": "3.1 GRADIENT DESCENT EXPLANATION", "text": "The practice of lrDecay in training neural networks dates back to LeCun et al. (1998). The most popular belief in the effect of lrDecay comes from the optimization analysis of Gradient Descent (GD) (LeCun et al., 1991). Although SGD is more practical in deep learning, researchers are usually satisfied with the analysis of GD, considering that SGD is a stochastic variant of GD.

Specifically, LeCun et al. (1991) analyze the property of a quadratic loss surface, which can be seen as a second-order approximation around a local minimum in nonconvex optimization. Learning rates are characterized by their relationship with the eigenvalues of the Hessian at a local minimum. Denote by η the learning rate, H the Hessian, λ an eigenvalue of H, and v an eigenvector of H associated with λ. The behavior of the network along the direction v can be characterized as (1 − ηλ)^k v, with k the iteration number. Convergence in the direction of v requires 0 < η < 2/λ, while η > 2/λ leads to divergence in the direction of v. If 0 < η < 2/λ holds for every eigenvalue of the Hessian, the network will converge quickly (Figure 2 left). If it holds for some directions but not for all directions, the network will diverge in some directions and thus jump into the neighborhood of another local minimum (Figure 2 middle). If the learning rate is too large, the network will not converge (Figure 2 right). In particular, when oscillation happens, it means the learning rate is too large and should be decayed. Hence the effect of lrDecay is to avoid oscillation and to obtain faster convergence. Note that LeCun et al. (1991) only analyze a simple one-layer network; the analysis may not hold for modern neural networks (see Section 4.1)."
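To make the eigen-direction analysis concrete, the following is a small self-contained NumPy illustration of our own (not from LeCun et al. (1991)): for gradient descent on a quadratic loss 0.5 θᵀHθ, each coordinate of θ in the eigenbasis scales by (1 − ηλ)^k, so a direction converges if and only if 0 < η < 2/λ.

```python
import numpy as np

# Quadratic loss L(theta) = 0.5 * theta^T H theta with a diagonal Hessian,
# so each coordinate is its own eigen-direction; the gradient is H @ theta.
eigvals = np.array([0.5, 1.0, 200.0])  # the third value mimics the ~200 top eigenvalue of Figure 5

def run_gd(eta, steps=100):
    theta = np.ones(3)
    for _ in range(steps):
        theta = theta - eta * (eigvals * theta)
    return theta  # each coordinate equals (1 - eta * lambda_i) ** steps

print(run_gd(eta=0.009))  # 0.009 < 2/200: |1 - eta*lambda| < 1 everywhere, all directions converge
print(run_gd(eta=0.011))  # 0.011 > 2/200: |1 - 0.011*200| = 1.2 > 1, the stiff direction diverges
```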
}, { "heading": "3.2 STOCHASTIC GRADIENT DESCENT EXPLANATION", "text": "Another common belief is the Stochastic Gradient Descent explanation, arguing that “with a high learning rate, the system is unable to settle down into deeper, but narrower parts of the loss function.” 1 Although it is common, this argument has not been formally analyzed until very recently.\nUnder some assumptions, Kleinberg et al. (2018) prove SGD is equivalent to the convolution of loss surface, with the learning rate serving as the conceptual kernel size of the convolution. With an appropriate learning rate, spurious local minima can be smoothed out, thus helping neural networks escape bad local minima. The decay of learning rate later helps the network converge around the minimum. Figure 3 is an intuitive one-dimensional example. The first plot shows that a large learning rate helps escape bad local minima in both sides. The lrDecay in subsequent plots increases the probability of reaching the global minimum. Although intuitive, the explanation requires some assumptions that may not hold for modern neural networks (see Section 4.2).\n1http://cs231n.github.io/neural-networks-3/#anneal" }, { "heading": "4 EXPERIMENTS AGAINST EXISTING EXPLANATIONS", "text": "Although the (Stochastic) Gradient Descent explanations in Section 3 account for the effect of lrDecay to some extent, in this section, we show by carefully-designed experiments that they are insufficient to explain the efficacy of lrDecay in modern neural networks. In all the experiments except for Section 6, we use a modern neural network named WideResNet (Zagoruyko & Komodakis, 2016). It is deep, wide, nonconvex, and suitable for datasets like CIFAR10 (Krizhevsky & Hinton, 2009)." }, { "heading": "4.1 EXPERIMENTS AGAINST THE GRADIENT DESCENT EXPLANATION", "text": "We train a WideResNet on CIFAR10 dataset with GD, decay the learning rate at different epochs, and report the training loss (optimization) as well as the test accuracy (generalization) in Figure 4. WideResNet and CIFAR10 are commonly used for studying deep learning (Zhang et al., 2017). CIFAR10 is not too large so that we can feed the whole dataset as a single batch using distributed training, computing the exact gradient rather than estimating it in mini-batches. Experiments show that lrDecay brings negligible benefit to either optimization or generalization. No matter when the learning rate is decayed, the final performances are almost the same. The instability in the beginning is related to the high loss wall described in Pascanu et al. (2013), which is not the focus of this paper.\nThe above observation contradicts directly with the GD explanation in Section 3.1. The contradiction arises from the fact that LeCun et al. (1991) only analyze simple linear networks, and no wonder the explanation fails in modern non-linear deep networks. Recent studies (Keskar et al., 2017; Yao et al., 2018) reveal that large-batch training of modern networks can lead to very sharp local minima. Gradient Descent (the extreme of large batch training) can lead to even sharper local minima. In Figure 5, we calculate the largest ten eigenvalues2 of the Hessian as well as the convergence interval (0 < η < 2/λ) for each eigenvalue for a trained WideResNet. The top eigenvalues reach the order of ≈ 200. By contrast, eigenvalues of simple networks in LeCun et al. (1991) often lie in [0, 10] (Figure 1 in their original paper). 
The spectrum of eigenvalues in modern networks is very different from that in simple networks analyzed by LeCun et al. (1991): the Hessian of modern networks has a much larger spectral norm.\nThe GD explanation in Section 3.1 attributes the effect of lrDecay to avoiding oscillation. Oscillation means there is a small divergence in some directions of the landscape so that the network bounces among nearby minima. However, the divergence factor 1 − ηλ for the largest eigenvalue (≈ 200) is too large even for a small growth of learning rate. Thus, the learning rate is either small enough to converge in a local minimum or large enough to diverge. It is hardly possible to observe the oscillation in learning curves (Figure 2 middle), and diverging learning curves (Figure 2 right) can be discarded during hyperparameter tuning. Therefore, only stable solutions are observable where η is small enough (Figure 2 left), leaving no necessity for learning rate decay. Indeed, when the\n2Thanks to the advances of Xu et al. (2018); Yao et al. (2018), we can compute the eigenvalues directly.\nlearning rate is increased mildly, we immediately observe diverging learning curves (Section A.2). In short, the GD explanation cannot explain the effect of lrDecay in training modern neural networks." }, { "heading": "4.2 EXPERIMENTS AGAINST THE STOCHASTIC GRADIENT DESCENT EXPLANATION", "text": "We follow the experiment setups in Section 4.1, but replace GD with SGD in Figure 7. According to the SGD explanation in Section 3.2, the effect of learning rate decay is to increase the probability of reaching a good minimum. If it is true, the model trained before decay can also reach minima, only by a smaller probability compared to the model after decay. In other words, the SGD explanation indicates the best performances before and after decay are the same. It predicts learning curves like Figure 6. However, Figure 7 does not comply with the SGD explanation: the best performances before and after lrDecay are different by a noticeable margin. Without lrDecay (the rightmost column in Figure 7), the performance plateaus and oscillates, with no chance reaching the performance of the other columns after decay. The performance boost after learning rate decay is widely observed (Figure 1(b) for example). However, possibly due to the violation of its assumptions (Kleinberg et al., 2018), the SGD explanation cannot explain the underlying effect of lrDecay." }, { "heading": "5 AN EXPLANATION FROM THE VIEW OF PATTERN COMPLEXITY", "text": "Section 4 uncovers the insufficiency of common beliefs in explaining lrDecay. We thus set off to find a better explanation. Mangalam & Prabhu (2019); Nakkiran et al. (2019) reveal that SGD (without learning rate decay) learns from easy to complex. As learning rates often change from large to small in typical learning rate strategies, we hypothesize that the complexity of learned patterns is related to the magnitude of learning rates. Based on this, we provide a novel explanation from the view of pattern complexity: the effect of learning rate decay is to improve the learning of complex patterns while the effect of an initially large learning rate is to avoid memorization of noisy\ndata. To justify this explanation, we carefully construct a dataset with tractable pattern complexity, and record model accuracies in simple and complex patterns separately with and without lrDecay." 
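Section 5.1 below formalizes pattern complexity as the expected class-conditional entropy C = E_y H(P(x|y)). For a discrete dataset this quantity can be computed directly; here is a self-contained sketch, where estimating the distributions by empirical counts is our assumption for illustration (the paper does not prescribe an estimator).

```python
import math
from collections import Counter, defaultdict

def dataset_complexity(xs, ys):
    """Expected class-conditional entropy C = E_y H(P(x|y)),
    with P(y) and P(x|y) estimated by empirical counts."""
    by_class = defaultdict(list)
    for x, y in zip(xs, ys):
        by_class[y].append(x)
    n = len(xs)
    c = 0.0
    for items in by_class.values():
        p_y = len(items) / n
        counts = Counter(items)
        h = -sum((m / len(items)) * math.log2(m / len(items)) for m in counts.values())
        c += p_y * h
    return c

# One pattern per class (simple) vs. two sub-patterns per class (more complex):
print(dataset_complexity(["a", "a", "b", "b"], [0, 0, 1, 1]))      # 0.0 bits
print(dataset_complexity(["a1", "a2", "b1", "b2"], [0, 0, 1, 1]))  # 1.0 bit
```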
}, { "heading": "5.1 PATTERN SEPARATION 10 (PS10) DATASET WITH TRACTABLE PATTERN COMPLEXITY", "text": "The explanation we propose involves pattern complexity, which is generally conceptual and sometimes measured with the help of a simple auxiliary model as in Mangalam & Prabhu (2019); Nakkiran et al. (2019). Here we try to formalize the idea of pattern complexity: the complexity of a dataset is defined as the expected class conditional entropy: C({(xi, yi)}ni=1) = EyH(P (x|y)), where H denotes the entropy functional. The complexity of patterns depends on the complexity of the dataset they belong to. Higher C means larger complexity because there are averagely more patterns in each class to be recognized (consider an animal dataset with 10 subspecies in each species vs. an animal dataset with 100 subspecies in each species).\nEquipped with the formal definition of complexity, we construct a Pattern Separation 10 (PS10) dataset with ten categories and explicitly separated simple patterns and complex patterns. We first generate a simple sub-dataset together with a complex sub-dataset in R3. As shown in Figure 8(a) and Figure 8(b), patterns are visualized as colors because they lie in R3. The category label can be identified by either simple patterns or complex patterns. We then merge the two sub-datasets into one dataset. The merging method in Figure 8(c) is specifically designed such that the simple subset and complex subset are fed into different channels of the WideResNet. This mimics the intuition of patterns as the eye pattern and the nose pattern have different locations in an image of human face. To be compatible with the sliding window fashion of convolutional computation, we make patterns the same across spatial dimensions of height and weight to have the same image size as CIFAR10." }, { "heading": "5.2 THE EFFECT OF DECAY: IMPROVE LEARNING OF MORE COMPLEX PATTERNS", "text": "To reveal the effect of decaying the learning rate, we compare experiments with and without lrDecay. For those without lrDecay, we set the learning rates equal to the learning rate of each stage in lrDecay. We measure not only the total accuracy but also the accuracies on simple and complex patterns separately. These accuracies are plotted in Figure 9.\nThe first plot in Figure 9 clearly shows the model first learns simple patterns quickly. The boost in total accuracy mainly comes from the accuracy gain on complex patterns when the learning rate is decayed. Plots 2, 3, and 4 show the network learns more complex patterns with a smaller learning rate, leading to the conclusion that learning rate decay helps the network learn complex patterns." }, { "heading": "5.3 THE EFFECT OF AN INITIALLY LARGE LEARNING RATE: AVOID FITTING NOISY DATA", "text": "Figure 9 seems to indicate that an initially large learning rate does nothing more than accelerating training: in Plot 4, a small constant learning rate can achieve roughly the same accuracy compared with lrDecay. However, by adding 10% noisy data to mimic real-world datasets, we observe something interesting. Figure 10 shows the accuracies on simple patterns, complex patterns, and noise data when we add noise into the dataset. Plot 2 in Figure 10 shows an initially large learning rate helps the accuracy on complex patterns. Plot 3 in Figure 10 further shows the accuracy gain on complex patterns comes from the suppression of fitting noisy data. (Note that a larger accuracy on noisy data implies overfitting the noisy data, which is undesirable.) 
In short, memorizing noisy data hurts the learning of complex patterns but can be suppressed by an initially large learning rate.

Empirically, Li et al. (2019) report that an initially large learning rate with decay outperforms a small and constant learning rate. They suspect that a network starting with a small learning rate will be stuck at spurious local minima. Our experiments provide an alternative view: spurious local minima may stem from noisy data, and the regularization effect of an initially large learning rate is to suppress the memorization of noisy data." }, { "heading": "6 THE IMPLICATION OF LRDECAY ON MODEL TRANSFERABILITY", "text": "Section 5 examines the proposed explanation on the PS10 dataset. Now we further validate the explanation on real-world datasets. Because there are no clearly separated simple and complex patterns in real-world datasets, it is difficult to validate the explanation directly. The proposed explanation suggests that SGD with lrDecay learns patterns of increasing complexity. Intuitively, more complex patterns are less transferable and harder to generalize across datasets. Thus an immediate implication is that SGD with lrDecay learns patterns of decreasing transferability. We validate this implication by transfer-learning experiments on real-world datasets, to implicitly support the proposed explanation.

Transferability is measured by transferring a model from ImageNet to different target datasets. To get models in different training stages, we train a ResNet-50 on ImageNet from scratch and save checkpoints of models in different stages. The learning rate is decayed twice, leading to three stages. The target datasets for transferring are: (1) Caltech256 (Griffin et al., 2007) with 256 general object classes; (2) CUB-200 (Wah et al., 2011) with 200 bird classes; (3) MITIndoors (Quattoni & Torralba, 2009) with 67 indoor scenes; and (4) Sketch250 (Eitz et al., 2012) with sketch paintings in 250 general classes. Sketch250 is the most dissimilar to ImageNet because it contains sketch paintings.

We study two widely-used strategies of transfer learning: “fix” (ImageNet snapshot models are only used as fixed feature extractors) and “finetune” (feature extractors are jointly trained together with task-specific layers). Let acc_i denote the accuracy of the stage-i snapshot model on ImageNet and tacc_i denote the accuracy of transferring the snapshot to the target dataset; then the transferability of additional patterns learned in stage i is defined as (tacc_i − tacc_{i−1}) / (acc_i − acc_{i−1}), i = 2, 3. By definition, the transferability of patterns from ImageNet to ImageNet is 1.0, complying with common sense. The transferability is plotted in Figure 11. Table 2 contains the accuracies used to compute it.

In all experiments, we find that the transferability of additional patterns learned in stage 3 is less than that in stage 2. Moreover, on the Sketch250 dataset, the transferability of additional patterns learned in stage 3 is negative. These findings support our claim that additional patterns learned in later stages of lrDecay are more complex and thus less transferable. They also suggest that deep model-zoo developers provide pre-trained model snapshots from different stages so that downstream users can select the most transferable snapshot model according to their tasks." }, { "heading": "7 CONCLUSION", "text": "In this paper, we dive into how learning rate decay (lrDecay) helps modern neural networks.
We uncover the insufficiency of common beliefs and propose a novel explanation: the effect of decaying the learning rate is to improve the learning of complex patterns, and the effect of an initially large learning rate is to avoid memorization of noisy data. It is supported by experiments on a dataset with tractable pattern complexity as well as on real-world datasets. It would be interesting to further bridge the proposed explanation and the formal analysis of the optimization procedure." }, { "heading": "A APPENDIX", "text": "A.1 AUTODECAY

Experiments in Section 5.2 imply that not all complex patterns are learnable under a constant learning rate. Training under a given learning rate has no effect once the loss plateaus. This indicates we can expedite the training process by cutting off the over-training of each stage (decaying the learning rate as soon as the loss plateaus) with little influence on the performance. To validate this implication, we propose AutoDecay to shorten the useless training and check whether the performance of the model remains untouched. In Figure 7, it appears obvious how to decide the optimal moment to decay when we have a big picture of the training process. The problem, however, is how to make the decision to decay based only on current and past observations. This is a non-trivial problem given that the statistics exhibit noticeable noise.

A.1.1 PROBLEM FORMULATION

We decompose the observed training loss into two parts: ℓ̂(t) = ℓ(t) + ε(t), with ℓ(t) the ground-truth loss (unobservable) and ε(t) the noise introduced by SGD. Here t indexes the training process (typically the epoch number) and takes values in N = {1, 2, 3, . . .}. To simplify the problem, we assume the distribution of ε(t) does not depend on t and that ε(t) is independent of ε(t′) for t′ ≠ t in SGD. The nature of the noise gives rise to the zero-expectation property E ε(t) = 0. Denote by σ^2 = Var ε(t) the variance of the noise. Due to the noise of SGD, the observed training loss usually vibrates within a short time window but decreases over a long time window. Our task is to find out whether the loss value is stable in the presence of noise.

A.1.2 PROBLEM SOLUTION

Exponential Decay Moving Average (EDMA) with Bias Correction. Observations with lower variance are more trustworthy. However, there is nothing we can do about the variance of ℓ̂(t) itself. We therefore compute a low-variance statistic of ℓ̂(t): a moving average with bias correction (Kingma & Ba, 2015). Let g(t) be the moving average of ℓ(t) and ĝ(t) be the moving average of ℓ̂(t). The explicit form is given in Equation 1, where β ∈ (0, 1) is the decay factor in EDMA:

g(t) = (∑_{i=0}^{t−1} β^i ℓ(t−i)) / (∑_{i=0}^{t−1} β^i) = ((1−β)/(1−β^t)) ∑_{i=0}^{t−1} β^i ℓ(t−i), t ≥ 1,
ĝ(t) = (∑_{i=0}^{t−1} β^i ℓ̂(t−i)) / (∑_{i=0}^{t−1} β^i) = ((1−β)/(1−β^t)) ∑_{i=0}^{t−1} β^i ℓ̂(t−i), t ≥ 1.    (1)

The recursive (and thus implicit) form is given in Equation 2. It enables us to compute the statistic ĝ online (without storing all the previous {ℓ̂(i) | i < t}) at the cost of maintaining f̂(t):

f(0) = 0, f(t) = β f(t−1) + (1−β) ℓ(t) ⟹ g(t) = f(t) / (1−β^t) (t ≥ 1),
f̂(0) = 0, f̂(t) = β f̂(t−1) + (1−β) ℓ̂(t) ⟹ ĝ(t) = f̂(t) / (1−β^t) (t ≥ 1).    (2)

As ĝ(t) is a linear combination of {ℓ̂(i) | i ≤ t}, it is easy to show that ĝ(t) is unbiased:

E ĝ(t) = (∑_{i=0}^{t−1} β^i E ℓ̂(t−i)) / (∑_{i=0}^{t−1} β^i) = (∑_{i=0}^{t−1} β^i ℓ(t−i)) / (∑_{i=0}^{t−1} β^i) = g(t).

The variance of ĝ(t) is

Var ĝ(t) = ((1−β)^2 / (1−β^t)^2) Var ∑_{i=0}^{t−1} β^i ℓ̂(t−i) = ((1−β)^2 σ^2 / (1−β^t)^2) ∑_{i=0}^{t−1} β^{2i} = ((1−β)/(1+β)) · ((1+β^t)/(1−β^t)) · σ^2.    (3)

The fact that β ∈ (0, 1) implies that Var ĝ(t) is monotonically decreasing.
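A minimal runnable sketch of Equation 2 (the online, bias-corrected moving average); the synthetic loss stream and the noise level are purely for illustration.

```python
import random

class EDMA:
    """Exponential-decay moving average with bias correction (Equation 2)."""
    def __init__(self, beta=0.9):
        self.beta, self.f, self.t = beta, 0.0, 0

    def update(self, loss):
        self.t += 1
        self.f = self.beta * self.f + (1 - self.beta) * loss
        return self.f / (1 - self.beta ** self.t)  # bias-corrected g_hat(t)

# Illustration on a noisy, decreasing loss stream: ell(t) + eps(t).
ema = EDMA(beta=0.9)
for t in range(1, 101):
    observed = 1.0 / t + random.gauss(0, 0.05)
    smoothed = ema.update(observed)
print(round(smoothed, 4))  # hovers near the true loss, with much lower variance
```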
Typically β = 0.9 (Figure 13), and the variance rapidly converges to about 0.05σ^2, much smaller than the variance of the noise. Thus ĝ(t) well represents the unobservable g(t): if ℓ(t) becomes stable, we shall observe that ĝ(t) is stable, too.

Criterion of Being Stable. We only want to decay the learning rate when the loss plateaus, i.e. when the loss is stable. For the observed values G = {ĝ(i) | t − W + 1 ≤ i ≤ t} within a window of size W, we call them stable if (max G − min G) / (min G + ε) < η, where ε is a small constant that prevents division by zero and η indicates the tolerance of variation.

Criterion of Significant Drop. When we keep decaying the learning rate, there comes a time when the learning rate is too small and the network cannot make any progress. When this happens, we should terminate the training. Termination is adopted when there is no significant drop between the stable value and the original value ĝ(0). Specifically, the criterion of a significant drop is (ĝ(t) + ε) / (ĝ(0) + ε) ≤ ζ, where ε is a small constant that prevents division by zero and ζ indicates the required degree of drop.

The entire procedure of AutoDecay is described in Figure 12.

A.1.3 EXPERIMENTS

We try AutoDecay on ImageNet (Russakovsky et al., 2015) to test whether it can expedite training without hurting the performance. We are not trying to set up a new state-of-the-art record. We train a ResNet-50 model on ImageNet following the official code of PyTorch. The only change is that we replace the StepDecay strategy with the proposed AutoDecay strategy. Each experiment costs roughly two days with 8 TITAN X GPUs. The results in Figure 14 show that AutoDecay can shorten the training time by 10% without hurting the performance (even bringing a slight improvement), successfully validating the proposed explanation in this paper.

[Figure 13: Variance reduction when β = 0.9, plotting Var ĝ(t)/σ^2 = ((1−β)/(1+β))((1+β^t)/(1−β^t)) against t together with its asymptote y = (1−β)/(1+β).]

Method      epochs  top1   top5
StepDecay   90      75.80  92.76
AutoDecay   81      75.91  92.81

Figure 14: Results of AutoDecay on ImageNet.

A.2 LARGER LR LEADS TO DIVERGENCE IN GD FOR MODERN NEURAL NETWORKS

When we increase the learning rate mildly for Gradient Descent, we immediately observe diverging learning curves (Figure 15), which echoes the reason mentioned in Section 4.1 why the Gradient Descent explanation fails for modern neural networks: modern neural networks have a very large spectral norm at a local minimum, and even a small increase of the learning rate can lead to divergence. In other words, training modern neural networks with GD must use a small enough learning rate, dismissing the value of learning rate decay.

A.3 ACCURACIES TO COMPUTE THE TRANSFERABILITY IN SECTION 6" } ]
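To consolidate Appendix A.1.2 above, here is a minimal sketch of the two AutoDecay criteria on top of the EDMA class sketched earlier; the window size and thresholds are illustrative assumptions, since the exact hyperparameters live in Figure 12.

```python
def is_stable(window, eta=0.01, eps=1e-8):
    """Stability criterion: (max G - min G) / (min G + eps) < eta."""
    return (max(window) - min(window)) / (min(window) + eps) < eta

def significant_drop(g_now, g_start, zeta=0.5, eps=1e-8):
    """Drop criterion: (g_hat(t) + eps) / (g_hat(0) + eps) <= zeta."""
    return (g_now + eps) / (g_start + eps) <= zeta

def autodecay_step(history, g_start, lr, W=10):
    """Decide whether to decay the learning rate or stop training, given the
    smoothed losses so far (history) and the initial smoothed loss (g_start)."""
    if len(history) < W or not is_stable(history[-W:]):
        return lr, False          # loss not yet stable: keep training at the current lr
    if significant_drop(history[-1], g_start):
        return lr / 10.0, False   # plateaued after real progress: decay the learning rate
    return lr, True               # plateaued without progress: terminate training
```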
2,019
HOW DOES LEARNING RATE DECAY HELP MODERN NEURAL NETWORKS?
SP:a2a80f52b722eafc6b8d751ecfeb8f85dacfe0b8
[ "The paper proposes a cross-lingual data augmentation method to improve the language inference and question answering tasks. The core idea is to replace a port of the input text (such as one of the sentence in a sentence pair in the language inference tasks) with its translation in another language. The authors empirically show that deploying the XLDA data augment improves the baseline methods for both the XNLI language inference data sets and the SQuAD task. ", "The paper provides an analysis of a cross-lingual data augmentation technique dubbed XLDA, which consists of replacing parts of an input text with its translation in another language. Building on the mBERT approach, the authors show that at fine-tuning time it is beneficial to augment the training set of XNLI with cross-lingual hypotheses and premises instead of in-language pairs. For each language in XNLI, they show results by augmenting with each of the 14 other languages in the dataset, and show significant improvements over per-language performance." ]
While natural language processing systems often focus on a single language, multilingual transfer learning has the potential to improve performance, especially for low-resource languages. We introduce XLDA, cross-lingual data augmentation, a method that replaces a segment of the input text with its translation in another language. XLDA enhances performance on all 14 tested languages of the cross-lingual natural language inference (XNLI) benchmark. With improvements of up to 4.8%, training with XLDA achieves state-of-the-art performance for Greek, Turkish, and Urdu. XLDA is in contrast to, and performs markedly better than, a more naive approach that aggregates examples in various languages in a way that each example is solely in one language. On the SQuAD question answering task, we see that XLDA provides a 1.0% performance increase on the English evaluation set. Comprehensive experiments suggest that most languages are effective as cross-lingual augmentors, that XLDA is robust to a wide range of translation quality, and that XLDA is even more effective for randomly initialized models than for pretrained models.
[]
[ { "authors": [ "Mikel Artetxe", "Gorka Labaka", "Eneko Agirre" ], "title": "A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings", "venue": "arXiv preprint arXiv:1805.06297,", "year": 2018 }, { "authors": [ "Akari Asai", "Akiko Eriguchi", "Kazuma Hashimoto", "Yoshimasa Tsuruoka" ], "title": "Multilingual extractive reading comprehension by runtime machine translation", "venue": "arXiv preprint arXiv:1809.03275,", "year": 2018 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "Piotr Bojanowski", "Edouard Grave", "Armand Joulin", "Tomas Mikolov" ], "title": "Enriching word vectors with subword information", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Alexis Conneau", "Guillaume Lample", "Marc’Aurelio Ranzato", "Ludovic Denoyer", "Hervé Jégou" ], "title": "Word translation without parallel data", "venue": "arXiv preprint arXiv:1710.04087,", "year": 2017 }, { "authors": [ "Alexis Conneau", "Guillaume Lample", "Ruty Rinott", "Adina Williams", "Samuel R Bowman", "Holger Schwenk", "Veselin Stoyanov" ], "title": "Xnli: Evaluating cross-lingual sentence representations", "venue": "arXiv preprint arXiv:1809.05053,", "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Manaal Faruqui", "Chris Dyer" ], "title": "Improving vector space word representations using multilingual correlation", "venue": "In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics,", "year": 2014 }, { "authors": [ "Stephan Gouws", "Yoshua Bengio", "Gregory S. 
Corrado" ], "title": "Bilbowa: Fast bilingual distributed representations without word alignments", "venue": "In ICML,", "year": 2014 }, { "authors": [ "Jiatao Gu", "Hany Hassan", "Jacob Devlin", "Victor OK Li" ], "title": "Universal neural machine translation for extremely low resource languages", "venue": "arXiv preprint arXiv:1802.05368,", "year": 2018 }, { "authors": [ "Karl Moritz Hermann", "Phil Blunsom" ], "title": "Multilingual models for compositional distributed semantics", "venue": "arXiv preprint arXiv:1404.4641,", "year": 2014 }, { "authors": [ "Jeremy Howard", "Sebastian Ruder" ], "title": "Universal language model fine-tuning for text classification", "venue": "arXiv preprint arXiv:1801.06146,", "year": 2018 }, { "authors": [ "Melvin Johnson", "Mike Schuster", "Quoc V Le", "Maxim Krikun", "Yonghui Wu", "Zhifeng Chen", "Nikhil Thorat", "Fernanda Viégas", "Martin Wattenberg", "Greg Corrado" ], "title": "Google’s multilingual neural machine translation system: Enabling zero-shot translation", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Alexandre Klementiev", "Ivan Titov", "Binod Bhattarai" ], "title": "Inducing crosslingual distributed representations of words", "venue": "Proceedings of COLING", "year": 2012 }, { "authors": [ "Guillaume Lample", "Alexis Conneau" ], "title": "Cross-lingual language model pretraining", "venue": "arXiv preprint arXiv:1901.07291,", "year": 2019 }, { "authors": [ "Thang Luong", "Hieu Pham", "Christopher D Manning" ], "title": "Bilingual word representations with monolingual quality in mind", "venue": "In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing,", "year": 2015 }, { "authors": [ "Bryan McCann", "James Bradbury", "Caiming Xiong", "Richard Socher" ], "title": "Learned in translation: Contextualized word vectors", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Tomas Mikolov", "Quoc V Le", "Ilya Sutskever" ], "title": "Exploiting similarities among languages for machine translation", "venue": "arXiv preprint arXiv:1309.4168,", "year": 2013 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Graham Neubig", "Junjie Hu" ], "title": "Rapid adaptation of neural machine translation to new languages", "venue": "arXiv preprint arXiv:1808.04189,", "year": 2018 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Matthew E Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "arXiv preprint arXiv:1802.05365,", "year": 2018 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training", "venue": "URL https://s3-us-west-2. amazonaws. 
com/openaiassets/research-covers/languageunsupervised/language understanding paper", "year": 2018 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "Squad: 100,000+ questions for machine comprehension of text", "venue": "arXiv preprint arXiv:1606.05250,", "year": 2016 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Edinburgh neural machine translation systems for wmt 16", "venue": "arXiv preprint arXiv:1606.02891,", "year": 2016 }, { "authors": [ "Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D Manning", "Andrew Ng", "Christopher Potts" ], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "In Proceedings of the 2013 conference on empirical methods in natural language processing,", "year": 2013 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Alex Wang", "Amapreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R Bowman" ], "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "venue": "arXiv preprint arXiv:1804.07461,", "year": 2018 }, { "authors": [ "Adina Williams", "Nikita Nangia", "Samuel R Bowman" ], "title": "A broad-coverage challenge corpus for sentence understanding through inference", "venue": "arXiv preprint arXiv:1704.05426,", "year": 2017 }, { "authors": [ "Adams Wei Yu", "David Dohan", "Minh-Thang Luong", "Rui Zhao", "Kai Chen", "Mohammad Norouzi", "Quoc V Le" ], "title": "Qanet: Combining local convolution with global self-attention for reading comprehension", "venue": "arXiv preprint arXiv:1804.09541,", "year": 2018 }, { "authors": [ "Will Y Zou", "Richard Socher", "Daniel Cer", "Christopher D Manning" ], "title": "Bilingual word embeddings for phrase-based machine translation", "venue": "In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing,", "year": 2013 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent work on pretraining natural language processing systems (Devlin et al., 2018; Radford et al., 2018; Howard & Ruder, 2018; Peters et al., 2018; McCann et al., 2017) has led to improvements across a wide variety of natural language tasks (Wang et al., 2018; Rajpurkar et al., 2016; Socher et al., 2013; Conneau et al., 2018). For several of these tasks, data can be plentiful for high-resource languages like English, Chinese, German, Spanish and French, but both the collection and proliferation of data is limited for low-resource languages like Urdu. Even when a large language model is pretrained on large amounts of multilingual data (Devlin et al., 2018; Lample & Conneau, 2019), languages like English can contain orders of magnitude more data in common sources for pretraining like Wikipedia.\nOne of the most common ways to leverage multilingual data is to use transfer learning. Word embeddings such as Word2Vec (Mikolov et al., 2013b) or GloVe (Pennington et al., 2014) use large amounts of unsupervised data to create task-agnostic word embeddings which have been shown to greatly improve downstream task performance. Multilingual variants of such embeddings (Bojanowski et al., 2017) have also shown to be useful at improving performance on common tasks across several languages. More recently, contextualized embeddings such as CoVe, ElMo, ULMFit and GPT (McCann et al., 2017; Peters et al., 2018; Howard & Ruder, 2018; Radford et al., 2018) have been shown to significantly improve upon aforementioned static embeddings.\nBERT (Devlin et al., 2018) employs a similar strategy by using a masked version of the language modeling objective. Unlike other approaches, BERT also provides a multilingual contextual representation which is enabled by its shared sub-word vocabulary and multilingual training data. Often, for languages for which large amounts of data is not available, aforementioned techniques for creating embeddings (static or contextualized) is not possible and additional strategies need to be employed.\nWe demonstrate the effectiveness of cross-lingual data augmentation (XLDA) as a simple technique that improves generalization across multiple languages and tasks. XLDA can be used with both pretrained and randomly initialized models without needing to explicitly further align the embeddings. To apply XLDA to any natural language input, we simply take a portion of that input and replace it with its translation in another language. This makes XLDA compatible with recent methods for pretraining (Lample & Conneau, 2019; Devlin et al., 2018). Additionally, the approach seamlessly scales for many languages and improves performance on all high- and low-resource languages tested including English, French, Spanish, German, Greek, Bulgarian, Russian, Turkish, Arabic, Vietnamese, Chinese, Hindi, Swahili and Urdu.\nThis paper makes the following contributions:\n• We propose cross-lingual data augmenation (XLDA), a new technique for improving the performance of NLP systems that simply replaces part of the natural language input with its translation in another language.\n• We present experiments that show how XLDA can be used to improve performance for every language in XNLI, and in three cases XLDA leads to state-of-the-art performance.\n• We demonstrate the ability of our method to improve exact-match and F1 on the SQuAD question-answering dataset as well." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "Multilingual Methods. 
Much prior work that seeks to leverage multilingual data attempts to first train word embeddings from monolingual corpora (Klementiev et al., 2012; Zou et al., 2013; Hermann & Blunsom, 2014) and then align those embeddings using dictionaries between languages (Mikolov et al., 2013a; Faruqui & Dyer, 2014). Some instead train multilingual word embeddings jointly from parallel corpora (Gouws et al., 2014; Luong et al., 2015). Johnson et al. (2017) demonstrate how training multilingual translation in a single model can be used for zero-shot translation (i.e., for translation pairs with no parallel training data). This approach also attained state-of-the-art results for many languages. More recently, similar techniques have been adapted for extremely low-resource languages (Gu et al., 2018). Neubig & Hu (2018) showed how to further fine-tune a multilingual model by explicitly using a high-resource language with a linguistically related low-resource language to improve translation quality. More recently, Conneau et al. (2017) and Artetxe et al. (2018) show how to obtain cross-lingual word embeddings through entirely unsupervised methods that do not use any dictionaries or parallel data.
Natural Language Inference. The Multi-Genre Natural Language Inference (MultiNLI) corpus (Williams et al., 2017) uses data from ten distinct genres of English language for the task of natural language inference (prediction of whether the relationship between two sentences represents entailment, contradiction, or neither). XNLI (Conneau et al., 2018) is an evaluation set grounded in MultiNLI for cross-lingual understanding (XLU) in 15 different languages that include low-resource languages such as Swahili and Urdu. XNLI serves as the primary testbed for our proposed method, XLDA, which improves over the baseline model in all languages and achieves state-of-the-art performance on Greek, Turkish, and Urdu, even without the state-of-the-art pretraining of Lample & Conneau (2019), which was introduced concurrently with our work.
Question Answering. We also include experiments on the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016). This dataset consists of context-question-answer triplets such that the answer is completely contained, as a span, in the context provided. For this task we translate only the training set into 4 languages using a neural machine translation system. Because (machine or human) translation may not necessarily retain span positions, the translation of this dataset is more nuanced than for the classification datasets discussed previously. For this reason, we do not tamper with the original SQuAD validation data; the test set is not publicly available either, so we constrain ourselves to a setting in which XLDA is used at training time but the target language remains English during validation and testing.
Unsupervised Language Models. Recently, large, pretrained, unsupervised language models have been used for XNLI, SQuAD, and the GLUE benchmark (Wang et al., 2018) to improve performance across the board. BERT (Devlin et al., 2018) pretrains deep bidirectional representations by jointly conditioning on both left and right contexts in all layers. BERT can then be fine-tuned for a specific task with an additional output layer. BERT achieved significant improvements on both XNLI and SQuAD, and it is this model that serves as the base for the application of XLDA across these tasks.
Back-translation. 
Akin to XLDA, back-translation is often used to provide additional data through the use of neural machine translation systems. As a recent example, QANet (Yu et al., 2018) employed this technique to achieve then state-of-the-art results on question answering, and machine translation systems (Sennrich et al., 2016) regularly use such techniques to iteratively improve through a back-and-forth process. XLDA also translates training data from a source language into a target language, but XLDA does not translate back into the source language. In fact, experimental results show that XLDA provides improved performance over using even the original source data, let alone a noisier version provided through back-translation. This indicates that signal in multiple languages can be beneficial to training per se rather than only as an intermediary for back-translation." }, { "heading": "3 XLDA: CROSS-LINGUAL DATA AUGMENTATION", "text": "Let $\mathcal{D} = \{(x_i, y_i, z_i)\}$ be a dataset of input text sequences $x_i$ and $y_i$ with labels $z_i$. We create a new dataset $\mathcal{D}_{lm} = \{(x_i^{(l)}, y_i^{(m)}, z_i)\}$, where $x_i^{(l)}$ is the translation of $x_i$ into language $l$ and $y_i^{(m)}$ the translation of $y_i$ into language $m$ by a neural machine translation system.
When $l = m$, all inputs are in the same language, which is the monolingual setting. When a training set is the union of multiple $\mathcal{D}_{ll}$, this is the disjoint, multilingual setting (DMT), as multiple languages are used for training, but each individual example only contains one language. When $l \neq m$, each input is in a different language, which is the cross-lingual setting. Experiments below demonstrate that the cross-lingual setting can improve upon the monolingual and DMT settings for natural language inference, and a combination of monolingual and cross-lingual training yields the best results for span-extractive question answering." },
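To make the three settings concrete, the following is a minimal Python sketch of how the corresponding training sets could be assembled, assuming translated training sets that are aligned by example index as in XNLI; `aligned_sets` and the helper names are illustrative and not from any released implementation.

```python
import itertools

def make_xlda_dataset(aligned_sets, l, m):
    """Build D_lm: first input from language l, second from language m.

    `aligned_sets` maps a language code to its translated training set,
    aligned by example index as in XNLI, e.g. aligned_sets["de"][i] ==
    (premise_de, hypothesis_de, label) for example i.
    """
    return [(aligned_sets[l][i][0], aligned_sets[m][i][1], aligned_sets[l][i][2])
            for i in range(len(aligned_sets[l]))]

def make_training_set(aligned_sets, langs, setting):
    if setting == "monolingual":        # D_ll for a single language
        return make_xlda_dataset(aligned_sets, langs[0], langs[0])
    if setting == "dmt":                # union of D_ll over all languages
        return list(itertools.chain.from_iterable(
            make_xlda_dataset(aligned_sets, l, l) for l in langs))
    if setting == "cross_lingual":      # union of D_lm with l != m
        return list(itertools.chain.from_iterable(
            make_xlda_dataset(aligned_sets, l, m)
            for l, m in itertools.permutations(langs, 2)))
    raise ValueError(f"unknown setting: {setting}")
```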
{ "heading": "4 EXPERIMENTS AND RESULTS", "text": "We experiment with using various subsets of $\mathcal{D}_{XLDA}$ and demonstrate empirically that some provide better learning environments than the monolingual and disjoint, multilingual settings. First, we provide details on how different translation systems $T$ were used to generate cross-lingual datasets for both tasks under consideration." }, { "heading": "4.1 CROSS-LINGUAL DATA", "text": "The first task we consider in these experiments is MultiNLI (Williams et al., 2017). The XNLI (Conneau et al., 2018) dataset provides the disjoint, multilingual version of MultiNLI necessary for XLDA. To create the XNLI dataset, the authors used 15 different neural machine translation systems (each for a different language pair) to create 15 separate single-language training sets. The validation and test sets were translated by humans. The fact that XNLI is aligned across languages for each sentence pair allows our method to be trained across the examples in the 15 different training sets. Since XLDA only pertains to training, the validation and test settings remain the same for each of the 15 languages, but future work will explore how this method can be used at test time as well. Because we have human-translated validation and test sets for XNLI, it is the primary task under examination in our experiments.
We follow a similar process for the Stanford Question Answering Dataset, SQuAD (Rajpurkar et al., 2016). Given that SQuAD is a span-extraction problem, translation of the training set required special care. Questions, context paragraphs, and gold answers were translated separately. We then used exact string matching between the translated answers and the translated context paragraphs to determine whether the translated answer could still be found in the translated context. If the answer exists, we take its first instance as the ground-truth translated span. 65% of German, 69% of French, 70% of Spanish, and 45% of Russian answer spans were recoverable in this way. To translate the rest of the questions, we placed a special symbol on both sides of the ground-truth span in the English context paragraph before translation. Occasionally, even with this special symbol, the translated answers could not be recovered from the marked, translated context. Additionally, the presence of the symbol did influence gender and tense in the translation as well. Using this approach on examples that failed span recovery in the first phase, we were able to recover 81% of German, 96% of French, 97% of Spanish, and 96% of Russian answer spans. However, adding this additional data from the marked translations led to worse performance across all of the languages. Hence, we only use the translated examples from the first phase for all training settings below, effectively reducing training set size. The validation and test splits for SQuAD remain in English alone (the test set is not publicly available, so it could not be translated as XNLI was). This still allows us to explore how XLDA can be used to leverage multilingual supervision at train time for a single target language at validation time." }, { "heading": "4.2 MODELS", "text": "For XNLI, we demonstrate that XLDA improves over the multilingual BERT model (BERTML) fine-tuned for different languages using a trained classification layer (Devlin et al., 2018). In XNLI, there are two inputs, a premise and a hypothesis. The BERT model takes as input both sequences concatenated together. It then makes a prediction off of a special CLS token appended to the start of the combined input sequence. We experiment with a variety of settings, but the primary evaluation of XLDA shows that replacing either one of the XNLI inputs with a non-target language can always improve performance over using only the target language throughout. These experiments are discussed in detail in Sections 4.3-4.5. Similar experiments with an LSTM baseline that is not pretrained are outlined in Section 4.6, which demonstrates that XLDA is also effective for models that are not pretrained or as complex as BERT.
For SQuAD, we demonstrate that XLDA also improves over BERTML fine-tuned by using only two additional parameter vectors: one for identifying the start token and one for the end token, again following the recommendations in Devlin et al. (2018). Experiments with these BERT models demonstrate that XLDA can improve even the strongest multilingual systems pretrained on large amounts of data.
For all of our experiments we use the hyperparameters and optimization strategies recommended by Devlin et al. (2018). For both datasets the learning rate is warmed up for 10% of the total number of updates (which are a function of the user-specified batch size and number of epochs) and then linearly decayed to zero over training. For XNLI the batch size is 32, the learning rate is 2e-5 and the number of epochs is 3.0. For SQuAD the batch size is 12, the learning rate is 3e-5 and the number of epochs is 3.0." }, { "heading": "4.3 PAIRWISE EVALUATION", "text": "Our first set of experiments comprises a comprehensive pairwise evaluation of XLDA on the XNLI dataset. The results are presented in Figure 2a. The language along a row of the table corresponds to the language evaluated upon. 
The language along a column of the table corresponds to the auxiliary language used as an augmentor for XLDA. Diagonal entries are therefore the validation scores for the standard, monolingual training approach. Numbers on the diagonal are absolute performance. Numbers on the off-diagonal indicate change over the diagonal entry in the same row. Through color normalization, deep green represents a large relative improvement of the number depicted over the standard approach (the diagonal entry in the same row). Deep red represents a large relative decrease compared to the standard approach. Mild yellow reflects little to no improvement over the standard approach.
There exists a cross-lingual augmentor that improves over the monolingual approach. First note that the highest performance for each row is off-diagonal. This demonstrates the surprising result that for every target language, it is actually better to train with cross-lingual examples than it is to train with monolingual examples. For example, if Hindi is the target language, the standard monolingual approach would train with both premise and hypothesis in Hindi, which gives a validation performance of 67.3%. XLDA with German in this case includes only examples that have either the premise or hypothesis in German and the other in Hindi. Therefore, there are no examples in which both premise and hypothesis are in the same language. This improves performance by 3.3% to 70.6%. Similarly, for every language an XLDA approach exists that improves over the standard approach. Hindi augmented by German represents the strongest improvement, whereas Vietnamese augmented by Spanish represents the weakest, at a 0.6% improvement over the standard approach.
Most languages are effective augmentors. With the exception of Urdu, Swahili, Hindi, and Greek, the remaining 10 languages provided a nonnegative performance improvement as an augmentor for each of the 14 target languages. This demonstrates that as long as one avoids the lowest-resource languages as augmentors, it is likely that XLDA will improve over the standard approach. It also demonstrates that with limited translation resources, one should translate into relatively higher-resource languages for training machine translation systems (e.g., Spanish, German, French, English).
Lower-resource languages are less effective augmentors, but benefit greatly from XLDA. Examining this further, it is clear that languages that are often considered relatively lower-resource in the machine translation community tend to be less effective cross-lingual augmentors. For example, the abundance of red in the Urdu column reveals that Urdu is often the worst augmentor for any given language under consideration. On average, augmenting with Urdu actually hurts performance by 1.8%. On the other hand, looking at the row for Urdu reveals that it often benefits strongly from XLDA with other languages. Similarly, Hindi benefits the most from XLDA, and is only a mildly successful augmentor.
XLDA is robust to translation quality. In creating the XNLI dataset, the authors used 15 different neural machine translation systems with varying translation qualities according to BLEU score evaluation. The translation qualities are present in the caption of Figure 2b, which shows that even when controlling for BLEU score, most languages can be used as effective augmentors." },
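Section 4.4 below combines several augmentors with a greedy selection over these pairwise results; a minimal sketch of that selection loop is given here, assuming a precomputed pairwise-improvement table and a `validate` callback supplied by the experiment harness (both hypothetical names, not from the released code).

```python
def greedy_xlda_augmentors(target, pairwise_improvement, validate):
    """Greedily grow the augmentor set for `target` (Section 4.4).

    pairwise_improvement[(target, aug)] is the relative improvement of
    pairwise XLDA with `aug` over monolingual training (Figure 2a), and
    validate(augmentors) retrains the model and returns validation accuracy.
    """
    # Candidates that helped in the pairwise evaluation, best first.
    candidates = sorted(
        (a for (t, a), gain in pairwise_improvement.items()
         if t == target and a != target and gain > 0),
        key=lambda a: pairwise_improvement[(target, a)],
        reverse=True)

    chosen, best_acc = [], validate([])  # start from the monolingual model
    for aug in candidates:
        acc = validate(chosen + [aug])
        if acc <= best_acc:              # stop at the first augmentor that hurts
            break
        chosen, best_acc = chosen + [aug], acc
    return chosen, best_acc
```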
}, { "heading": "4.4 A GREEDY ALGORITHM FOR XLDA", "text": "Given that the pairwise evaluation of Section 4.3 reveals that most languages are effective cross-lingual augmentors, we turn to the case in which we would like to maximize the benefits of XLDA by using multiple augmenting languages. In these experiments, we use a simple greedy approach to build off of the pairwise results. For any given target language, we sort the languages in order of decreasing effectiveness as an augmentor (determined by relative improvement over the standard approach in the pairwise setting). We start with the augmentor that is most effective for that target language and add augmentors one at a time in decreasing order until we reach an augmentor that hurt. The results are presented in Figure 3 where each subplot is corresponds to performance on a target language as number of cross-lingual augmentors increases\nGreedy XLDA always improves over using the single best cross-lingual augmentor.\nFor every target language, the greedy approach improves over the best pairwise XLDA by a minimum of 0.9% and provides a minimum of 2.1% improvement over the original standard approach that does not use any form of XLDA. Somewhat surprisingly, it is not the case that more data always helps for XLDA. Most languages have peak validation per-\nformance with fewer than the total number of augmentors that benefited in pairwise evaluation. In the best cases (Russian, Greek, and Arabic), greedy XLDA improves over the best pairwise XLDA by more than 2%. When compared to the standard approach, greedy XLDA improves by as much as 4.9% (Hindi)." }, { "heading": "4.5 TARGETED XLDA", "text": "Since most languages are effective augmentors and few are actually harmful to performance (Section 4.3), we now consider the setting in which a comprehensive pairwise evaluation and greedy\nsearch cannot be done. We use this setting to evaluate whether XLDA improves over the disjoint, multilingual setting described in Section 3. Recall that in the XLDA setting each input to the model is in a different language for each example. In the disjoint, multilingual setting, each input is in the same language, but examples themselves may come from different languages.\nThe ‘cross’ in cross-lingual is crucial. Figure 4 shows three selected target languages (English, German, and Russian) as well as six augmentors (English, German, Russian, French, Bulgarian, and Arabic). We compare, for each target language, how incrementally adding augmentors influences performance in both the cross-lingual and the disjoint, multilingual settings. It is clear that while both improve over the monolingual setting, XLDA provides a much greater improvement as additional augmentors are used. This demonstrates that it is indeed the cross-over of languages in XLDA that makes it so effective. It is not as effective to train with translated versions of the training datasets without cross-linguality in each example." }, { "heading": "4.6 XLDA WITHOUT BERT", "text": "In order to test whether the benefits of XLDA come only from the multilinguality of the BERT model, we partially replicate the pairwise evaluation of Section 4.3 for an LSTM baseline. For this baseline, we use the same tokenization as BERTML. We also use the embeddings from BERTML, but we keep them fixed. 
This NLI model reads the input with a two-layer BiLSTM, projects the outputs from the final layer to a lower dimension, max-pools, and passes that through a final classification layer.\nXLDA is equally effective for randomly initialized and pretrained models. As seen in Figure 5a, the LSTM baseline sees gains from XLDA as substantial as BERTML. In the best case (Greek augmented by German), performance improved by 3.3%, just as high as the highest gain for BERTML.\nGreedy XLDA is more effective for randomly initialized models than pretrained models. As can be seen in Figure 5b, the LSTM baseline sees gains from greedy XLDA that are even greater than they were for BERTML. German’s XLDA performance was improved by 3.3% over using pairwise XLDA alone. This represents an absolute improvement of 5.3% over the standard, monolingual approach. In the best case (Greek), the absolute gain was 5.5% and in the worst case it was 4.0%. This demonstrates that greedy XLDA is a powerful technique that can be used with pretrained and randomly initialized models alike." }, { "heading": "4.7 XLDA FOR SQUAD", "text": "We continue with experiments on the Stanford Question Answering Dataset (SQuAD). In this setting, evaluation is always performed with English contexts and questions. In Table 2, validation results are depicted that vary over which languages were used during training for either the question or the context. The en-en row represents the case in which only English inputs are used.\nIn the first group of four rows of Table 2, XLDA is only applied to translate the contexts. In the second group of four rows of Table 2, we also translate the question. When French is used as the cross-lingual augmentor over the context input, we see an improvement of 0.5 EM and 1 F1. We also ran these experiments with Russian, Spanish, and German, each of which proved to be effective cross-lingual augmentors over the context.\nWhen cross-lingual augmentors are used over the question input as well, we still often see improvement over the baseline, but the improvement is less than when using XLDA only over the context channel. This is in keeping with the findings of Asai et al. (2018), which show that the ability to correctly translate questions is crucial for question answering. In other words, SQuAD is sensitive to the translation quality of the question, and it is not surprising that machine translations of the questions are less effective than translating the context, which is less sensitive.\nError analysis on SQuAD provides insight into how XLDA improves performance. By inspection of 300 examples, we found the baseline BERTML model often makes mistakes that suggest a faulty heuristic based on fuzzy, pattern matching. The model regularly depends on key words from the question in the context as well. When these keywords are nearer to plausible, incorrect spans than they are to the correct span, the model chooses the incorrect spans. In one example, the context contains the text “...The Super Bowl 50 halftime show was headlined by the British rock group Coldplay with special guest performers Beyonce and Bruno\nMars, who headlined the Super Bowl XLVII and Super Bowl XLVIII halftime shows, respectively...”, the question is “What halftime performer previously headlined Super Bowl XLVII?”, and the true answer is “Beyonce”. The baseline model outputs “Bruno Mars” seemingly because it is closer to key words “Super Bowl XLVII”, but the XLDA model correctly outputs “Beyonce”. 
Because these models rely on attention (Bahdanau et al., 2014; Vaswani et al., 2017) between the question and context sequences, word-similarity-based matching seems to localize keywords. Such examples are often correctly answered after XLDA is used during training. This suggests that translation of the context into a different language during training (via XLDA) breaks the strong dependence on the word-similarity-based heuristic, and instead forces the model to consider the semantics of the context." }, { "heading": "5 CONCLUSION", "text": "We introduce XLDA, cross-lingual data augmentation, a method that improves the training of NLP systems by replacing a segment of the input text with its translation in another language. We show how reasoning across languages is crucial to the success of XLDA. We show the effectiveness of the approach with both pretrained models and randomly initialized models. We boost performance on all languages in the XNLI dataset, by up to 4.8%, and achieve state-of-the-art results on 3 languages, including the low-resource language Urdu. Further investigation is needed to understand the causal and linguistic relationship between XLDA and performance on downstream tasks." } ]
2019
null
SP:8671654fe46f948a79f905bd815939dc284ca873
[ "The paper describes a Dual-Attention model using Gated- and Spatial-Attention for disentanglement of attributes in feature representations for visually-grounded multitask learning. It has been shown that these models are capable of learning navigational instructions and answering questions. However, they addressed two limitations of previous works about visually-grounded embodied language learning models. The first is the inability to transfer grounded knowledge across different", "The paper explores multi-task learning in embodied environments and proposes a Dual-Attention Model that disentangles the knowledge of words and visual attributes in the intermediate representations. It addresses two tasks, namely Semantic Goal Navigation (SGN) and Embodied Question Answering (EQA), using a simple synthetic environment. The paper compares against a few simple baselines and baselines adapted from models in each task." ]
Visually-grounded embodied language learning models have recently shown to be effective at learning multiple multimodal tasks such as following navigational instructions and answering questions. In this paper, we address two key limitations of these models, (a) the inability to transfer the grounded knowledge across different tasks and (b) the inability to transfer to new words and concepts not seen during training using only a few examples. We propose a multitask model which facilitates knowledge transfer across tasks by disentangling the knowledge of words and visual attributes in the intermediate representations. We create scenarios and datasets to quantify cross-task knowledge transfer and show that the proposed model outperforms a range of baselines in simulated 3D environments. We also show that this disentanglement of representations makes our model modular and interpretable which allows for transfer to instructions containing new concepts.†
[]
[ { "authors": [ "Peter Anderson", "Qi Wu", "Damien Teney", "Jake Bruce", "Mark Johnson", "Niko Sünderhauf", "Ian Reid", "Stephen Gould", "Anton van den Hengel." ], "title": "Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments", "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3674–3683.", "year": 2018 }, { "authors": [ "Jacob Andreas", "Marcus Rohrbach", "Trevor Darrell", "Dan Klein." ], "title": "Neural module networks", "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 39–48.", "year": 2016 }, { "authors": [ "Yoav Artzi", "Luke Zettlemoyer." ], "title": "Weakly supervised learning of semantic parsers for mapping instructions to actions", "venue": "Transactions of the Association for Computational Linguistics, 1:49–62.", "year": 2013 }, { "authors": [ "Lawrence W Barsalou." ], "title": "Grounded cognition", "venue": "Annu. Rev. Psychol., 59:617–645.", "year": 2008 }, { "authors": [ "Valts Blukis", "Dipendra Misra", "Ross A Knepper", "Yoav Artzi." ], "title": "Mapping navigation instructions to continuous control actions with position-visitation prediction", "venue": "Proceedings of The 2nd Conference on Robot Learning.", "year": 2018 }, { "authors": [ "Devendra Singh Chaplot", "Kanthashree Mysore Sathyendra", "Rama Kumar Pasumarthi", "Dheeraj Rajagopal", "Ruslan Salakhutdinov." ], "title": "Gated-attention architectures for task-oriented language grounding", "venue": "AAAI.", "year": 2018 }, { "authors": [ "David L Chen", "Raymond J Mooney." ], "title": "Learning to interpret natural language navigation instructions from observations", "venue": "Twenty-Fifth AAAI Conference on Artificial Intelligence.", "year": 2011 }, { "authors": [ "Howard Chen", "Alane Shur", "Dipendra Misra", "Noah Snavely", "Yoav Artzi." ], "title": "Touchdown: Natural language navigation and spatial reasoning in visual street environments", "venue": "CVPR.", "year": 2019 }, { "authors": [ "Kyunghyun Cho", "Bart Van Merriënboer", "Dzmitry Bahdanau", "Yoshua Bengio." ], "title": "On the properties of neural machine translation: Encoder-decoder approaches", "venue": "arXiv preprint arXiv:1409.1259.", "year": 2014 }, { "authors": [ "Abhishek Das", "Samyak Datta", "Georgia Gkioxari", "Stefan Lee", "Devi Parikh", "Dhruv Batra." ], "title": "Embodied question answering", "venue": "CVPR.", "year": 2018 }, { "authors": [ "Akira Fukui", "Dong Huk Park", "Daylen Yang", "Anna Rohrbach", "Trevor Darrell", "Marcus Rohrbach." ], "title": "Multimodal compact bilinear pooling for visual question answering and visual grounding", "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing.", "year": 2016 }, { "authors": [ "Xavier Glorot", "Antoine Bordes", "Yoshua Bengio." ], "title": "Deep sparse rectifier neural networks", "venue": "Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 315–323.", "year": 2011 }, { "authors": [ "Daniel Gordon", "Aniruddha Kembhavi", "Mohammad Rastegari", "Joseph Redmon", "Dieter Fox", "Ali Farhadi." 
], "title": "Iqa: Visual question answering in interactive environments", "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4089–4098.", "year": 2018 }, { "authors": [ "Tanmay Gupta", "Kevin J Shih", "Saurabh Singh", "Derek Hoiem", "Arun Mallya", "Wei Di", "Vignesh Jagadeesh", "Robinson Piramuthu", "K Shih" ], "title": "Aligned image-word representations improve inductive transfer across vision-language", "venue": null, "year": 2017 }, { "authors": [ "Sachithra Hemachandra", "Felix Duvallet", "Thomas M Howard", "Nicholas Roy", "Anthony Stentz", "Matthew R Walter." ], "title": "Learning models for following natural language directions in unknown environments", "venue": "2015 IEEE International Conference on Robotics and Automation (ICRA), pages 5608–5615. IEEE.", "year": 2015 }, { "authors": [ "Karl Moritz Hermann", "Felix Hill", "Simon Green", "Fumin Wang", "Ryan Faulkner", "Hubert Soyer", "David Szepesvari", "Wojtek Czarnecki", "Max Jaderberg", "Denis Teplyashin" ], "title": "Grounded language learning in a simulated 3d world", "venue": "arXiv preprint arXiv:1706.06551", "year": 2017 }, { "authors": [ "Roger A Horn." ], "title": "The hadamard product", "venue": "Proc. Symp. Appl. Math, volume 40, pages 87–169.", "year": 1990 }, { "authors": [ "Drew A Hudson", "Christopher D Manning." ], "title": "Compositional attention networks for machine reasoning", "venue": "ICLR.", "year": 2018 }, { "authors": [ "Michał Kempka", "Marek Wydmuch", "Grzegorz Runc", "Jakub Toczek", "Wojciech Jaśkowski." ], "title": "Vizdoom: A doom-based ai research platform for visual reinforcement learning", "venue": "2016 IEEE Conference on Computational Intelligence and Games (CIG), pages 1–8. IEEE.", "year": 2016 }, { "authors": [ "Yann LeCun", "Yoshua Bengio" ], "title": "Convolutional networks for images, speech, and time series", "venue": "The handbook of brain theory and neural networks,", "year": 1995 }, { "authors": [ "Matt MacMahon", "Brian Stankiewicz", "Benjamin Kuipers." ], "title": "Walk the talk: Connecting language, knowledge, and action in route instructions", "venue": "Def, 2(6):4.", "year": 2006 }, { "authors": [ "Cynthia Matuszek", "Nicholas FitzGerald", "Luke Zettlemoyer", "Liefeng Bo", "Dieter Fox." ], "title": "A joint model of language and perception for grounded attribute learning", "venue": "arXiv preprint arXiv:1206.6423.", "year": 2012 }, { "authors": [ "Hongyuan Mei", "Mohit Bansal", "Matthew R Walter." ], "title": "Listen, attend, and walk: Neural mapping of navigational instructions to action sequences", "venue": "Thirtieth AAAI Conference on Artificial Intelligence.", "year": 2016 }, { "authors": [ "Dipendra Misra", "Andrew Bennett", "Valts Blukis", "Eyvind Niklasson", "Max Shatkhin", "Yoav Artzi." ], "title": "Mapping instructions to actions in 3d environments with visual goal prediction", "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2667–2678.", "year": 2018 }, { "authors": [ "Dipendra Misra", "John Langford", "Yoav Artzi." ], "title": "Mapping instructions and visual observations to actions with reinforcement learning", "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1004–1015.", "year": 2017 }, { "authors": [ "Dipendra K Misra", "Jaeyong Sung", "Kevin Lee", "Ashutosh Saxena." 
], "title": "Tell me dave: Contextsensitive grounding of natural language to manipulation instructions", "venue": "The International Journal of Robotics Research, 35(1-3):281–300.", "year": 2016 }, { "authors": [ "Junhyuk Oh", "Satinder Singh", "Honglak Lee", "Pushmeet Kohli." ], "title": "Zero-shot task generalization with multi-task deep reinforcement learning", "venue": "Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 2661–2670. JMLR. org.", "year": 2017 }, { "authors": [ "Ethan Perez", "Florian Strub", "Harm De Vries", "Vincent Dumoulin", "Aaron Courville." ], "title": "Film: Visual reasoning with a general conditioning layer", "venue": "AAAI.", "year": 2018 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov." ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347.", "year": 2017 }, { "authors": [ "Linda Smith", "Michael Gasser." ], "title": "The development of embodied cognition: Six lessons from babies", "venue": "Artificial life, 11(1-2):13–29.", "year": 2005 }, { "authors": [ "Stefanie A Tellex", "Thomas Fleming Kollar", "Steven R Dickerson", "Matthew R Walter", "Ashis Banerjee", "Seth Teller", "Nicholas Roy" ], "title": "Understanding natural language commands for robotic navigation and mobile manipulation", "venue": null, "year": 2011 }, { "authors": [ "Harm de Vries", "Kurt Shuster", "Dhruv Batra", "Devi Parikh", "Jason Weston", "Douwe Kiela." ], "title": "Talk the walk: Navigating new york city through grounded dialogue", "venue": "arXiv preprint arXiv:1807.03367.", "year": 2018 }, { "authors": [ "Huijuan Xu", "Kate Saenko." ], "title": "Ask, attend and answer: Exploring question-guided spatial attention for visual question answering", "venue": "European Conference on Computer Vision, pages 451–466. Springer.", "year": 2016 }, { "authors": [ "Haonan Yu", "Xiaochen Lian", "Haichao Zhang", "Wei Xu." ], "title": "Guided feature transformation (gft): A neural language grounding module for embodied agents", "venue": "Conference on Robot Learning, pages 81–98.", "year": 2018 }, { "authors": [ "Hang Zhao", "Chuang Gan", "Andrew Rouditchenko", "Carl Vondrick", "Josh McDermott", "Antonio Torralba." ], "title": "The sound of pixels", "venue": "Proceedings of the European Conference on Computer Vision (ECCV), pages 570–586.", "year": 2018 }, { "authors": [ "FiLM: Perez" ], "title": "2018) introduced a general-purpose conditioning method called Feature-wise Linear Modulation (FiLM) for Visual Question Answering. Using FiLM, xS = γ(xsent) xI+β(xsent) where γ(xsent) and β(xsent) are learnable projections of the sentence representation", "venue": null, "year": 2018 }, { "authors": [ "PACMAN: Das" ], "title": "2018) presented a hierarchical RL model for EQA. We adapt their method by using the attention mechanism in their QA module, which takes the last 5 frames and the text as input, and computes the similarity of the text with each frame using dot products between image and sentence-level text representations. These similarities are converted into attention weights using softmax, and the attention-weighted image features are concatenated with question embedding and", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Humans learn language by interacting with a dynamic perceptual environment, grounding words into visual entities and motor actions (Smith and Gasser, 2005; Barsalou, 2008). In recent years, there has been an increased focus on training embodied agents capable of visually-grounded language learning. These include multimodal tasks involving one-way communication, such as mapping navigational instructions to actions (MacMahon et al., 2006; Chen and Mooney, 2011; Artzi and Zettlemoyer, 2013; Mei et al., 2016; Misra et al., 2018); and tasks involving two-way communication such as embodied question answering (Gordon et al., 2018; Das et al., 2018) and embodied dialogue (de Vries et al., 2018). Other studies have shown that grounded semantic goal navigation agents can be effective at exploiting the compositionality of language to generalize to unseen instructions with an unseen composition of semantic attributes (Hermann et al., 2017; Chaplot et al., 2018), or an unseen composition of steps in a multi-step instruction (Oh et al., 2017).\nHowever, current grounded language learning models have certain limitations. Firstly, these models are typically trained only for a single multimodal task and lack the ability to transfer grounded knowledge of ‘concepts’∗ across tasks. For example, if an agent learns to follow the instruction ‘Go to the red torch’ and answer the question ‘What color is the pillar?’, then ideally it should also understand ‘Go to the red pillar’ and ‘What color is the torch?’ without additional training. Training multitask grounded-language models can also improve training sample efficiency, as these multimodal tasks share many common learning challenges including perception, grounding, and navigation.\nThe second limitation is the inability of trained models to quickly transfer to tasks involving unseen concepts. For example, consider a household instruction-following robot trained on an existing set of objects. We would like the robot to follow instructions involving a new object ‘lamp’ that has been added to the house. Existing models would need to be trained with the new object, which typically requires thousands of samples and can also lead to catastrophic forgetting of known objects. Even if the models were given some labeled samples to detect the new objects, they would require additional training to learn to combine existing grounded knowledge with the new concept (e.g., ‘blue lamp’ if ‘blue’ is already known). In this paper, we train a multimodal multitask learning model for two tasks: Semantic Goal Navigation, where the agent is given a language instruction to navigate to a goal location, and Embodied Question Answering, where the agent is asked a question and it can navigate in the environment to gather information to answer the question (see Figure 5). We make the following contributions in this paper:\n†See https://sites.google.com/view/emml-iclr2020 for demo videos. ∗In this paper, we refer to the knowledge of a word and its grounding in the visual world as the knowledge\nof a concept (for example, concept ‘torch’ involves word ‘torch’ and how torch looks visually).\nFirst, we define a cross-task knowledge transfer evaluation criterion to test the ability of multimodal multi-task models to transfer knowledge of concepts across tasks. We show that several prior single-task models, when trained on both tasks, fail to achieve cross-task knowledge transfer. 
This is because the visual grounding of words is often implicitly learned as a by-product of end-to-end training of the underlying task, which leads to the entanglement of knowledge of concepts in the learnt representations. We propose a novel Dual-Attention model which learns task-invariant disentangled visual and textual representations and explicitly aligns them with each other. We create datasets and simulation scenarios for testing crosstask knowledge transfer and show an absolute improvement of 43-61% on instructions and 5-26% for questions over baselines.\nSecond, the disentanglement and explicit alignment of representations makes our model modular and interpretable. We show that this allows us to transfer the model to handle instructions involving unseen concepts by incorporating the output of object detectors. We also show that our model is able to combine the knowledge of existing concepts with a new concept without any additional policy training.\nFinally, we show that the modularity and interpretability of our model also allow us to use trainable neural modules (Andreas et al., 2016) to handle relational tasks involving negation and spatial relationships and also tackle relational instructions involving new concepts." }, { "heading": "2 RELATED WORK", "text": "A lot of early work on visual instruction-following in the embodied space such as in robotics applications (Tellex et al., 2011; Matuszek et al., 2012; Hemachandra et al., 2015; Misra et al., 2016) and on mapping natural language instructions to actions (MacMahon et al., 2006; Chen and Mooney, 2011; Artzi and Zettlemoyer, 2013; Mei et al., 2016) required hand-designed symbolic representations. Recently, there have been efforts on learning to follow navigational instructions from raw visual observations (Anderson et al., 2018; Misra et al., 2018; Chen et al., 2019; Blukis et al., 2018). Oh et al. (2017); Chaplot et al. (2018); Hermann et al. (2017) study the language learning aspect of instruction-following in a more controlled setting, and show that grounded language learning agents are able to learn spatial and logical reasoning and exploit the compositionality of language to generalize to new instructions.\nQuestion Answering in the embodied space has been comparatively less-studied with recent work studying QA which requires exploration, navigation, and interaction with objects in the environment (Gordon et al., 2018; Das et al., 2018). In contrast to the prior work which tackles a single grounding task, we tackle both instruction-following and question answering in the embodied space and study the ability to transfer the knowledge of concepts across the tasks and tackle instructions with new concepts.\nIn addition to the above, there is a large body of work on multimodal learning in static settings which do not involve navigation or reinforcement learning. Some relevant works which use attention mechanisms similar to the ones used in our proposed model include Perez et al. (2018); Fukui et al. (2016); Xu and Saenko (2016); Hudson and Manning (2018); Gupta et al. (2017) for Visual Question Answering and Zhao et al. (2018) for grounding audio to vision." }, { "heading": "3 PROBLEM FORMULATION", "text": "Consider an autonomous agent interacting with an episodic environment as shown in Figure 1. At the beginning of each episode, the agent receives a textual input T specifying a task. T could be an instruction to navigate to a target object or a question querying some visual detail of objects in the environment. 
At each time step $t$, the agent observes a state $s_t = (I_t, T)$, where $I_t$ is the first-person (egocentric) view of the environment, and takes an action $a_t$, which could be a navigational action or an answer action. The agent’s objective is to learn a policy $\pi(a_t \mid s_t)$ which leads to successful completion of the task specified by $T$.
Environments. We adapt the ViZDoom-based (Kempka et al., 2016) language grounding environment proposed by Chaplot et al. (2018) for embodied multitask learning. It consists of a single room with 5 objects. The objects are randomized in each episode based on the textual input. We use two difficulty settings for the Doom domain as shown in Figure 2: Easy: The agent is spawned at a fixed location. The candidate objects are spawned at five fixed locations along a single horizontal line in the field of view of the agent. Hard: The candidate objects and the agent are spawned at random locations and the objects may or may not be in the agent’s field of view in the initial configuration. The agent must explore the map to view all objects. The agent can take 4 actions: 3 navigational actions (forward, left, right) and 1 answer action. When the agent takes the answer action, the answer with the maximum probability in the output answer distribution is used.
Datasets. We use the set of 70 instructions from Chaplot et al. (2018) and create a dataset of 29 questions using the same set of objects and attributes. These datasets include instructions and questions about object types, colors, relative sizes (tall/short) and superlative sizes (smallest/largest). We create train-test splits for both the instructions and questions datasets to explicitly test a multitask model’s ability to transfer the knowledge of concepts across different tasks. Each instruction in the test set contains a word that is never seen in any instruction in the training set but is seen in some questions in the training set. Similarly, each question in the test set contains a word never seen in any training set question. Table 1 illustrates the train-test split of instructions and questions used in our experiments (more details about the datasets and the environments are deferred to the supplementary material). Note that for the EQA train set, unseen words can be present in the answer." }, { "heading": "4 PROPOSED METHOD", "text": "In this section, we describe our proposed architecture (illustrated in Figure 3). At the start of each episode, the agent receives a textual input $T$ (an instruction or a question) specifying the task. At each time step $t$, the agent observes an egocentric image $I_t$ which is passed through a convolutional neural network (LeCun et al., 1995) with ReLU activations (Glorot et al., 2011) to produce the image representation $x_I = f(I_t; \theta_{conv}) \in \mathbb{R}^{V \times H \times W}$, where $\theta_{conv}$ denotes the parameters of the convolutional network, $V$ is the number of feature maps in the convolutional network output, which is by design set equal to the vocabulary size (of the union of the instructions and questions training sets), and $H$ and $W$ are the height and width of each feature map. We use two representations for the textual input $T$: (1) the bag-of-words representation denoted by $x_{BoW} \in \{0, 1\}^V$ and (2) a sentence representation $x_{sent} = f(T; \theta_{sent}) \in \mathbb{R}^V$, which is computed by passing the words in $T$ through a Gated Recurrent Unit (GRU) (Cho et al., 2014) network followed by a linear layer. Here, $\theta_{sent}$ denotes the parameters of the GRU network and the linear layer with ReLU activations. 
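A minimal PyTorch-style sketch of these three input representations is given below; the layer sizes follow Appendix D.1, while the class and argument names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class InputRepresentations(nn.Module):
    """Sketch of x_I, x_BoW and x_sent; layer sizes follow Appendix D.1."""
    def __init__(self, vocab_size_V, embed_dim=32):
        super().__init__()
        self.conv = nn.Sequential(               # image -> (V, H, W)
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, vocab_size_V, 3, stride=2), nn.Sigmoid())
        self.embed = nn.Embedding(vocab_size_V, embed_dim)
        self.gru = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.to_sent = nn.Sequential(nn.Linear(embed_dim, vocab_size_V), nn.ReLU())

    def forward(self, image, token_ids):
        # image: (B, 3, 168, 300); token_ids: (B, L) word indices (int64)
        x_I = self.conv(image)                   # (B, V, H, W)
        x_BoW = torch.zeros(image.size(0), self.embed.num_embeddings,
                            device=image.device).scatter_(1, token_ids, 1.0)
        _, h = self.gru(self.embed(token_ids))   # h: (1, B, embed_dim)
        x_sent = self.to_sent(h.squeeze(0))      # (B, V)
        return x_I, x_BoW, x_sent
```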
Next, the Dual-Attention unit $f_{DA}$ combines the image representation with the text representations to get the complete state representation $x_S$ and answer prediction $x_{Ans}$:
$$x_S, x_{Ans} = f_{DA}(x_I, x_{BoW}, x_{sent}) \quad (1)$$
Finally, $x_S$ and $x_{Ans}$, along with a time step embedding and a task indicator variable (for whether the task is SGN or EQA), are passed to the policy module to produce an action." }, { "heading": "4.1 DUAL-ATTENTION UNIT", "text": "The Dual-Attention unit uses two types of attention mechanisms, Gated-Attention $f_{GA}$ and Spatial-Attention $f_{SA}$, to align representations in different modalities and tasks.
Gated-Attention (GA). The GA unit (Chaplot et al., 2018) attends to the different channels in the image representation based on the text representation. For example, if the textual input is the instruction ‘Go to the red pillar’, then the GA unit can learn to attend to channels which detect red things and pillars. Specifically, the GA unit takes as input a 3-dimensional tensor image representation $y_I \in \mathbb{R}^{d \times H \times W}$ and a text representation $y_T \in \mathbb{R}^d$, and outputs a 3-dimensional tensor $z \in \mathbb{R}^{d \times H \times W}$. Note that the dimension of $y_T$ is equal to the number of feature maps and the size of the first dimension of $y_I$. In the GA unit, each element of $y_T$ is expanded to an $H \times W$ matrix, resulting in a 3-dimensional tensor $M_{y_T} \in \mathbb{R}^{d \times H \times W}$, whose $(i, j, k)$-th element is given by $M_{y_T}[i, j, k] = y_T[i]$. This matrix is multiplied element-wise with the image representation: $z = f_{GA}(y_I, y_T) = M_{y_T} \odot y_I$, where $\odot$ denotes the Hadamard product (Horn, 1990).
Spatial-Attention (SA). We propose an SA unit which is analogous to the Gated-Attention unit except that it attends to different pixels in the image representation rather than the channels. For example, if the textual input is the question ‘Which object is blue in color?’, then we would like to spatially attend to the parts of the image which contain a blue object in order to recognize the type of the blue object. The Spatial-Attention unit takes as input a 3-dimensional tensor image representation $y_I \in \mathbb{R}^{d \times H \times W}$ and a 2-dimensional spatial attention map $y_S \in \mathbb{R}^{H \times W}$, and outputs a tensor $z \in \mathbb{R}^{d \times H \times W}$. Note that the height and width of the spatial attention map are equal to the height and width of the image representation. In the Spatial-Attention unit, each element of the spatial attention map is expanded to a $d$-dimensional vector. This again results in a 3-dimensional tensor $M_{y_S} \in \mathbb{R}^{d \times H \times W}$, whose $(i, j, k)$-th element is given by $M_{y_S}[i, j, k] = y_S[j, k]$. Just like in the Gated-Attention unit, this matrix is multiplied element-wise with the image representation: $z = f_{SA}(y_I, y_S) = M_{y_S} \odot y_I$.
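Both units reduce to an element-wise product with a broadcast text or attention tensor; a minimal sketch, with the expansions to $M_{y_T}$ and $M_{y_S}$ handled by broadcasting, is below (function names are illustrative).

```python
import torch

def gated_attention(y_I: torch.Tensor, y_T: torch.Tensor) -> torch.Tensor:
    """f_GA: channel-wise gating. y_I: (B, d, H, W); y_T: (B, d)."""
    # Expanding y_T to (B, d, 1, 1) and broadcasting reproduces M_{y_T}.
    return y_I * y_T.unsqueeze(-1).unsqueeze(-1)

def spatial_attention(y_I: torch.Tensor, y_S: torch.Tensor) -> torch.Tensor:
    """f_SA: pixel-wise gating. y_I: (B, d, H, W); y_S: (B, H, W)."""
    # Expanding y_S to (B, 1, H, W) and broadcasting reproduces M_{y_S}.
    return y_I * y_S.unsqueeze(1)
```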
Dual-Attention. We now describe the operations in the Dual-Attention unit shown in Figure 4, as well as motivate the intuitions behind each operation. Given $x_I$, $x_{BoW}$, and $x_{sent}$, the Dual-Attention unit first computes a Gated-Attention over $x_I$ using $x_{BoW}$:
$$x_{GA1} = f_{GA}(x_I, x_{BoW}) \in \mathbb{R}^{V \times H \times W} \quad (2)$$
Intuitively, this GA unit grounds each word in the vocabulary with a feature map in the image representation. A particular feature map is activated if and only if the corresponding word occurs in the textual input. Thus, the feature maps in the convolutional output learn to detect different objects and attributes, and words in the textual input specify which objects and attributes are relevant to the current task. The Gated-Attention using the BoW representation attends to feature maps detecting corresponding objects and attributes, and masks all other feature maps. We use the BoW representation for the first GA unit as it explicitly aligns the words in the textual input irrespective of whether it is a question or an instruction.
Next, the output of the GA unit $x_{GA1}$ is converted to a spatial attention map by summing over all channels followed by a softmax over the $H \times W$ elements:
$$x_{spat} = \sigma\Big(\sum_{i=1}^{V} x_{GA1}[i, :, :]\Big) \in \mathbb{R}^{H \times W} \quad (3)$$
where the softmax $\sigma(z)_j = \exp(z_j) / \sum_k \exp(z_k)$ ensures that the attention map is spatially normalized. Summation of $x_{GA1}$ along the depth dimension gives a spatial attention map which has high activations at spatial locations where relevant objects or attributes are detected. ReLU activations in the convolutional feature maps make all elements positive, ensuring that the summation aggregates the activations of relevant feature maps.
$x_{spat}$ and $x_I$ are then passed through an SA unit:
$$x_{SA} = f_{SA}(x_I, x_{spat}) \in \mathbb{R}^{V \times H \times W} \quad (4)$$
The SA unit outputs all attributes present at the locations where relevant objects and attributes are detected. This is especially helpful for question answering, where a single Gated-Attention may not be sufficient. For example, if the textual input is ‘Which color is the pillar?’, then the model needs to attend not only to feature maps detecting pillars (done by the Gated-Attention), but also to other attributes at the spatial locations where pillars are seen in order to predict their color.
$x_{SA}$ is then passed through another GA unit with the sentence-level text representation:
$$x_{GA2} = f_{GA}(x_{SA}, x_{sent}) \in \mathbb{R}^{V \times H \times W} \quad (5)$$
This second GA unit enables the model to attend to different types of attributes based on the question. For instance, if the question is asking about the color (‘Which color is the pillar?’), then the model needs to attend to the feature maps corresponding to colors; or if the question is asking about the object type (‘Which object is green in color?’), then the model needs to attend to the feature maps corresponding to object types. The sentence embedding $x_{sent}$ can learn to attend to multiple channels based on the textual input and mask the rest.
Next, the output is transformed to an answer prediction by again doing a summation and softmax, but this time summing over the height and width instead of the channels:
$$x_{Ans} = \sigma\Big(\sum_{j,k}^{H,W} x_{GA2}[:, j, k]\Big) \in \mathbb{R}^{V} \quad (6)$$
Summation of $x_{GA2}$ along each feature map aggregates the activations for relevant attributes spatially. Again, ReLU activations for the sentence embedding ensure aggregation of activations for each attribute or word. The answer space is identical to the textual input space $\mathbb{R}^V$.
Finally, the Dual-Attention unit $f_{DA}$ outputs the answer prediction $x_{Ans}$ and the flattened spatial attention map $x_S = \mathrm{vec}(x_{spat})$, where $\mathrm{vec}(\cdot)$ denotes the flattening operation.
Policy Module. The policy module takes as input the state representation $x_S$ from the Dual-Attention unit, a time step embedding $t$, and a task indicator variable $I$ (for whether the task is SGN or EQA). The inputs are concatenated and then passed through a linear layer, then a recurrent GRU layer, then linear layers to estimate the policy function $\pi(a_t \mid I_t, T)$ and the value function $V(I_t, T)$. All of the above operations are differentiable, making the entire architecture trainable end-to-end. Note that all attention mechanisms in the Dual-Attention unit only modulate the input image representation, i.e., mask or amplify specific feature maps or pixels. This ensures that there is an explicit alignment between the words in the textual input, the feature maps in the image representation, and the words in the answer space. This forces the convolutional network to encode all the information required with respect to a certain word in the corresponding output channel. For example, to predict ‘red’ as the answer, the model must detect red objects in the corresponding feature map. This explicit task-invariant alignment between convolutional feature maps and words in the input and answer space facilitates grounding and allows for cross-task knowledge transfer. As shown in the results later, this also makes our model modular and allows easy addition of objects and attributes to a trained model." },
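Putting Eqs. (2)-(6) together, a minimal sketch of the Dual-Attention forward pass, reusing the `gated_attention` and `spatial_attention` helpers sketched earlier (again an illustrative sketch, not the released implementation):

```python
import torch.nn.functional as F

def dual_attention(x_I, x_BoW, x_sent):
    """Sketch of f_DA, composing Eqs. (2)-(6).

    x_I: (B, V, H, W) image representation; x_BoW, x_sent: (B, V).
    Returns the flattened spatial map x_S and answer distribution x_Ans.
    """
    B, V, H, W = x_I.shape
    x_GA1 = gated_attention(x_I, x_BoW)                    # Eq. (2)
    x_spat = F.softmax(x_GA1.sum(dim=1).view(B, H * W),    # Eq. (3)
                       dim=1).view(B, H, W)
    x_SA = spatial_attention(x_I, x_spat)                  # Eq. (4)
    x_GA2 = gated_attention(x_SA, x_sent)                  # Eq. (5)
    x_Ans = F.softmax(x_GA2.sum(dim=(2, 3)), dim=1)        # Eq. (6)
    x_S = x_spat.view(B, -1)                               # x_S = vec(x_spat)
    return x_S, x_Ans
```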
{ "heading": "4.2 OPTIMIZATION", "text": "The entire model is trained to predict both navigational actions and answers jointly. The policy is trained using Proximal Policy Optimization (PPO) (Schulman et al., 2017). For training the answer predictions, we use a supervised cross-entropy loss. Both types of losses have common parameters, as the answer prediction is essentially an intermediate representation for the policy.
Auxiliary Task. As mentioned earlier, the feature maps in the convolutional output are expected to detect different objects and attributes. Consequently, we add a spatial auxiliary task (trained with a cross-entropy loss) to detect the object or attribute in the convolutional output channels corresponding to the words in the bag-of-words representation. Rather than doing fine-grained object detection, we keep the size of the auxiliary predictions the same as the convolutional output to avoid an increase in the number of parameters, and to maintain the explicit alignment of the convolutional feature maps with the words. Accordingly, the auxiliary labels are $(V \times H \times W)$-dimensional tensors, where each of the $V$ channels corresponds to a word in the vocabulary, and each element in a channel is 1 if the corresponding object or attribute is present in the corresponding frame." },
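A minimal sketch of the joint objective is below, assuming the clipped PPO surrogate is computed elsewhere and treating the binary $(V \times H \times W)$ auxiliary labels with a per-element binary cross-entropy (an assumption; the text only states that a cross-entropy loss is used).

```python
import torch
import torch.nn.functional as F

def joint_loss(ppo_loss, x_Ans, answer_target, aux_pred, aux_labels, is_eqa,
               w_ans=1.0, w_aux=1.0):
    """Sketch of the joint objective: RL + answer CE + spatial auxiliary loss.

    ppo_loss: clipped PPO surrogate from the RL code (assumed given).
    x_Ans: (B, V) answer distributions; answer_target: (B,) word indices,
    supervised only on EQA steps selected by the boolean mask `is_eqa`.
    aux_pred / aux_labels: (B, V, H, W) sigmoid outputs of conv3 and binary
    object/attribute presence maps.
    """
    ans_loss = F.nll_loss(torch.log(x_Ans[is_eqa] + 1e-8), answer_target[is_eqa])
    aux_loss = F.binary_cross_entropy(aux_pred, aux_labels)
    return ppo_loss + w_ans * ans_loss + w_aux * aux_loss
```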
{ "heading": "5 EXPERIMENTS & RESULTS", "text": "Jointly learning semantic goal navigation and embodied question answering essentially involves a fusion of textual and visual modalities. While prior methods are designed for a single task, we adapt several baselines for our environment and tasks by using their multimodal fusion techniques. We use two naive baselines, Image only and Text only; two baselines based on prior semantic goal navigation models, Concat (used by Hermann et al. (2017); Misra et al. (2017)) and Gated-Attention (GA) (Chaplot et al., 2018); and two baselines based on Question Answering models, FiLM (Perez et al., 2018) and PACMAN (Das et al., 2018). For fair comparison, we replace the proposed Dual-Attention unit with the multimodal fusion techniques in the baselines and keep everything else identical to the proposed model (see the supplementary material for more implementation details of all baselines)." }, { "heading": "5.1 RESULTS", "text": "We train all models for 10 million frames in the Easy setting and 50 million frames in the Hard setting. We use a +1 reward for reaching the correct object in SGN episodes and predicting the correct answer in EQA episodes. We use a small negative reward of -0.001 per time step to encourage shorter paths to the target and answering questions as soon as possible. We also use distance-based reward shaping for SGN episodes, where the agent receives a small reward proportional to the decrease in distance to the target. In the next subsection, we evaluate the performance of the proposed model without the reward shaping. SGN episodes end when the agent reaches any object, and EQA episodes end when the agent predicts any answer. All episodes have a maximum length of 210 time steps. We train all models with and without the auxiliary tasks using identical reward functions.
All models are trained jointly for both tasks and tested on each task separately (see https://sites.google.com/view/emml-iclr2020/ for visualization videos). In Table 2, we report the performance of all models for both the Easy and Hard settings (training performance curves for all models are in the supplementary material). The Dual-Attention (DA) model and many baselines achieve 99% accuracy during training in the Easy-Aux setting; however, the test performance of all the baselines is considerably lower than that of the DA model (see Table 2 (left)). The performance of all the baselines is worse than the ‘Text only’ model on the EQA test set, although the training accuracy is higher. This indicates that the baselines tend to overfit on the training set and fail to generalize to questions which contain words never seen in training questions. As expected, using spatial auxiliary tasks improves the performance of all models. Even without auxiliary tasks, the DA model achieves a test accuracy of 86% (SGN) and 53% (EQA), compared to the best baseline performance of 33% (SGN & EQA).
For the Hard setting, the DA model achieves higher training (90% vs. 71% with Aux) as well as test performance (82% vs. 39% for SGN, 59% vs. 33% for EQA with Aux) than the baselines (see Table 2 (right)). These results confirm the hypothesis that prior models, which are designed for a single task, lack the ability to align the words in both tasks and transfer knowledge across tasks.
Lower test accuracy on EQA (vs. SGN) for most models (Table 2) indicates that EQA is more challenging, as it involves alignment between not just the input textual and visual representations but also with the answer space." }, { "heading": "5.2 ABLATION TESTS", "text": "We perform a series of ablation tests in order to analyze the contribution of each component in the Dual-Attention unit: without Spatial-Attention (w/o SA), without the first Gated-Attention with $x_{BoW}$ (w/o GA1), and without the second Gated-Attention with $x_{sent}$ (w/o GA2). We also try removing the task indicator variable (w/o Indicator Variable), removing reward shaping (w/o Reward Shaping), and training the proposed model on a single task, SGN or EQA (DA Single-Task).
In Table 3, we report the test performance of all ablation models. The results indicate that SA and GA1 contribute the most to the performance of the full Dual-Attention model. GA2 is critical for performance on EQA but not SGN (see Table 3). This is expected, as GA2 is designed to attend to different objects and attributes based on the question and is used mainly for answer prediction. It is not critical for SGN, as the spatial attention map consists of locations of relevant objects, which is sufficient for navigating to the correct object (see the supplementary material for visualizations of the convolutional feature outputs, the spatial attention map, the sentence representation of the textual input, and the answer predictions).
We observe that reward shaping and the indicator variable help with learning speed, but have little effect on the final performance (see Table 3). 
DA models trained only on single tasks work well on SGN, especially with auxiliary tasks, because the auxiliary task for single-task models includes object detection labels corresponding to the words in the test set. This highlights a key advantage of our model’s modular and interpretable design: the model can be used for transferring the policy to new objects and attributes without fine-tuning, as discussed later in Section 5.4." }, { "heading": "5.3 HANDLING RELATIONAL TASKS", "text": "The instructions and questions considered so far contained a single target object. We propose a simple extension to our model to handle relational tasks, such as ‘Which object is to the left of the torch?’, where the agent is required to attend to the region left of the torch, not the torch itself.

‖See the supplementary material for visualizations of the convolutional feature outputs, spatial attention map, sentence representation of the textual input, and answer predictions.

Figure 5: Outputs for relations ‘not’, ‘left of’, and ‘right of’ learned by the relational modules.

We consider three relational operations: ‘left of’, ‘right of’ and ‘not’. We add questions and instructions with all objects and attributes using these relational operations to the existing dataset and perform experiments in the Easy-Aux setting. We assume that the knowledge of relational words, and of the words they modify, is given. We train a separate module corresponding to each relational operation, and apply it to the convolutional output of the words that are modified. For example, for the above question, we apply the module for relation ‘left of’ to the convolutional output channel corresponding to the word ‘torch’. Each relational module is a trainable convolutional network which preserves the size of the input. The rest of the operations are identical to the Dual-Attention unit. The relational modules are learned end-to-end without any additional supervision.

In Figure 5, we show convolutional outputs of the relational modules learned by our model. While the original DA model achieves test performance of 0.48 (SGN) and 0.44 (EQA), this simple extension achieves 0.97 (SGN) and 0.64 (EQA)." }, { "heading": "5.4 TRANSFER TO NEW CONCEPTS", "text": "Suppose that the user wants the agent to follow instructions about a new object such as ‘pillar’ or a new attribute such as ‘red’ which the agent has never seen during training. Prior SGN models (Chaplot et al., 2018; Hermann et al., 2017; Yu et al., 2018) cannot handle instructions containing a new concept. In contrast, our model can handle such instructions by training an object detector for each new concept and appending it to the image representation x_I. In order to test this, we train the DA model in the Easy setting on a training set consisting of instructions only. We use auxiliary tasks, but only for words in the vocabulary of the instructions training set. After training the policy, we test the agents on instructions containing the test concept words ‘red’ and ‘pillar’, which the agent has never seen in textual input during training and for which it never received any supervision about how the attribute or object looks visually.

For transferring the policy, we assume access to two separate object detectors for ‘red’ and ‘pillar’. We resize the object detections to the size of a feature map in the image representation (H × W) and append them as channels to the image representation.
We also append the words ‘red’ and ‘pillar’ to the bag-of-words representations in the same order, such that they are aligned with the appended feature maps. We randomly initialize the embeddings of the new words for computing the sentence embedding.

The results in Table 4 show that this policy generalizes well to different types of instructions with unseen concepts, including: combining knowledge of existing attributes with a new object, or knowledge of existing objects with a new attribute; and composing a new attribute with a new object. The results shown in the lower part of Table 4 indicate that the model also generalizes well to relational instructions containing new concepts. This means that given an object detector for a new object ‘pillar’, the model can (without any additional training) detect and differentiate between green and blue pillars, or between tall and short pillars; and understand left of/right of pillar, or the negation of pillar. The model can also combine ‘pillar’ with another new attribute ‘red’ to detect red pillars and understand relational instructions involving both red objects and pillars. This suggests that a trained policy can be scaled to more objects, provided the complexity of navigation remains consistent." }, { "heading": "6 CONCLUSION", "text": "We proposed a Dual-Attention model for visually-grounded multitask learning which uses Gated- and Spatial-Attention to disentangle attributes in feature representations and align them with the answer space. We showed that the proposed model is able to transfer the knowledge of concepts across tasks and outperforms the baselines on both Semantic Goal Navigation and Embodied Question Answering by a considerable margin. We also showed that disentangled and interpretable representations make our model modular and allow for easy addition of new objects or attributes to a trained model. For future work, the model can potentially be extended to transferring knowledge across different domains by using modular interpretable representations of objects which are domain-invariant." }, { "heading": "B ADDITIONAL RESULTS FOR THE DOOM ENVIRONMENT", "text": "" }, { "heading": "C DOOM ENVIRONMENT DETAILS", "text": "The Doom objects used in our experiments are illustrated in Figure 11. Instructions and questions used for training and evaluation are listed in Table 5.

D ADDITIONAL EXPERIMENTAL DETAILS

D.1 HYPERPARAMETERS AND NETWORK DETAILS

The input image is rescaled to size 3 × 168 × 300. The convolutional network for processing the image consists of 3 convolutional layers: conv1 containing 32 8×8 filters with stride 4, conv2 containing 64 4×4 filters with stride 2, and conv3 containing V 3×3 filters with stride 2. We use ReLU activations for conv1 and conv2 and sigmoid for conv3, as its output is used as the auxiliary task predictions directly. We use word embeddings and a GRU of size 32, followed by a linear layer of size V, to get the sentence-level representation. The policy module uses hidden dimension 128 for the linear and GRU layers (see Figure 12).

For reinforcement learning, we use Proximal Policy Optimization (PPO) with 8 actors and a time horizon of 128 steps. We use a single batch with 4 PPO epochs. The clipping parameter for PPO is set to 0.2. The discount factor (γ) is 0.99. We use the Adam optimizer with a learning rate of 2.5e-4 for all experiments.
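To make the D.1 specification concrete, a minimal PyTorch sketch of the image and sentence encoders is given below. The layer sizes follow the text; the vocabulary size V and the remaining glue details are illustrative assumptions, not the authors' exact code.

```python
import torch
import torch.nn as nn

V = 50  # vocabulary size (illustrative assumption)

image_encoder = nn.Sequential(            # input: (batch, 3, 168, 300)
    nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
    nn.Conv2d(64, V, kernel_size=3, stride=2), nn.Sigmoid(),  # auxiliary predictions
)

class SentenceEncoder(nn.Module):
    """Word embeddings + GRU of size 32, then a linear layer of size V."""
    def __init__(self, vocab=V, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, V)

    def forward(self, tokens):            # tokens: (batch, seq_len)
        _, h = self.gru(self.embed(tokens))
        return self.out(h[-1])            # sentence-level representation x_sent
```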
Figures 13 and 14 illustrate the Gated-Attention unit and Spatial-Attention unit discussed in Section 4.1.

D.2 RELATIONAL TASKS

Here, we discuss additional experimental details for the relational tasks described in Section 5.3. We consider three relations, R = {‘not’, ‘left of’, ‘right of’}. As mentioned previously, we assume that the knowledge of relational words, and of which words are modified by relational words, is given as input to the model.

D.2.1 EXTENDED DUAL-ATTENTION ARCHITECTURE

Given textual input T and a relation r ∈ R, define y_r(T) ∈ R^V to be the indicator vector for words that are modified by relation r. For example, if T is the instruction ‘Go to the torch that is not red and left_of pillar’, then y_not(T) has 1 at the index for ‘red’ and 0 everywhere else, and y_left_of(T) has 1 at the index for ‘pillar’ and 0 everywhere else.

Given an indicator vector y ∈ R^V, let M_y ∈ R^{V×H×W} be the 3-dimensional tensor obtained by expanding each element of y to an H × W matrix. Thus, the (i, j, k)-th element is given by M_y[i, j, k] = y[i]. This tensor is multiplied element-wise with a convolutional output tensor x ∈ R^{V×H×W} to select the channels corresponding to the words indicated by y.

In the extended Dual-Attention architecture, we train a separate module f_r for each relation r ∈ R, where each f_r is a convolutional network (5×5 kernel size, stride 1, and padding 2) that preserves the size of the input. We apply the module f_r to the channels in x_GA1 ∈ R^{V×H×W} which correspond to the words that are modified by relation r:

x_RA = (1_{V×H×W} − Σ_{r∈R} M_{y_r(T)}) ⊙ x_GA1 + Σ_{r∈R} M_{y_r(T)} ⊙ f_r(x_GA1) ∈ R^{V×H×W}    (7)

Then we use x_RA to create the spatial attention map in equation 3:

x_spat = σ(Σ_{i=1}^{V} x_RA[i, :, :]) ∈ R^{H×W}    (8)

We also zero out the convolutional channels of x_I corresponding to relational words, in order to improve generalization across different modified words. The rest of the operations are identical to the Dual-Attention architecture.
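A minimal PyTorch sketch of equations (7)–(8) follows; the illustrative shapes and the dictionary-based bookkeeping for the masks M_{y_r(T)} are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

V, H, W = 50, 9, 17  # vocabulary size and feature-map resolution (illustrative)

class RelationModule(nn.Module):
    """A size-preserving conv net f_r for one relation (5x5 kernel, stride 1, padding 2)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(V, V, kernel_size=5, stride=1, padding=2)

    def forward(self, x):                # x: (V, H, W)
        return self.conv(x.unsqueeze(0)).squeeze(0)

def apply_relations(x_ga1, masks, modules):
    """masks[r] is the indicator vector y_r(T) of shape (V,); modules[r] is f_r."""
    keep = torch.ones(V, H, W)
    out = torch.zeros(V, H, W)
    for r, y in masks.items():
        M_y = y.view(V, 1, 1).expand(V, H, W)   # expand y to a (V, H, W) mask
        keep = keep - M_y                        # channels taken over by relation r
        out = out + M_y * modules[r](x_ga1)      # second term of eq. (7)
    x_ra = keep * x_ga1 + out                    # eq. (7)
    x_spat = torch.sigmoid(x_ra.sum(dim=0))      # eq. (8): (H, W) spatial attention
    return x_ra, x_spat
```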
D.2.2 DATASET GENERATION

The relational tasks are generated in the Easy-Aux setting (Section 5.3), which has five candidate objects spawned along a horizontal line in the field of view of the agent. At the start of each episode, we sample a word from R ∪ {‘none’}.

• If ‘none’ is sampled, then an instruction (SGN) or a question (EQA) is sampled identically as in the original experiments.
• If ‘not’ is sampled, then we sample an instruction (SGN) or a question (EQA) from the original dataset, then sample a word in this instruction or question and negate it (e.g. ‘Go to the red torch’ → ‘Go to the torch which is not red’).
• If ‘left of’ is sampled, then we sample a correct object among the candidate objects in positions {0, 1, 2, 3}, generate a short description x for the object immediately to the right of the correct object (e.g., ‘red object’, ‘red torch’, ‘short torch’, or ‘torch’), and append ‘left of x’ to the instruction or question.
• If ‘right of’ is sampled, then we sample a correct object among the candidate objects in positions {1, 2, 3, 4}, generate a short description x for the object immediately to the left of the correct object, and append ‘right of x’ to the instruction or question.

D.3 BASELINE DETAILS

Image only: Naive baseline of just using the image representation: x_S = vec(x_I), where vec(·) denotes the flattening operation.

Text only: Naive baseline of just using the textual representations: x_S = [x_BoW, x_sent].

Concat: The image and textual representations are concatenated: x_S = [vec(x_I), x_BoW, x_sent]. Note that concatenation is the most common method of combining representations. Hermann et al. (2017) concatenate convolutional image and bag-of-words textual representations for SGN, whereas Misra et al. (2017) use concatenation with sentence-level textual representations.

Gated-Attention: Adapted from Chaplot et al. (2018), who used Gated-Attention with sentence-level textual representations for SGN: x_S = f_GA(x_I, x_sent).

FiLM: Perez et al. (2018) introduced a general-purpose conditioning method called Feature-wise Linear Modulation (FiLM) for Visual Question Answering. Using FiLM, x_S = γ(x_sent) ⊙ x_I + β(x_sent), where γ(x_sent) and β(x_sent) are learnable projections of the sentence representation.

PACMAN: Das et al. (2018) presented a hierarchical RL model for EQA. We adapt their method by using the attention mechanism in their QA module, which takes the last 5 frames and the text as input, and computes the similarity of the text with each frame using dot products between image and sentence-level text representations. These similarities are converted into attention weights using softmax, and the attention-weighted image features are concatenated with the question embedding and passed through a softmax classifier to predict the answer distribution. For this particular baseline, we use the last 5 frames as input at each time step, unlike the proposed model and all other baselines, which use a single frame as input. The attention-weighted image features are used as the state representation. The PACMAN model used a pretrained QA module, but we train this module jointly with the Navigation model for fair comparison with the proposed model.

For each of the above methods except PACMAN, we use a linear layer f with ReLU activations followed by a softmax σ to get a V-dimensional answer prediction from the state representation: x_Ans = σ(f(x_S; θ_Lin)). x_S and x_Ans are concatenated and passed to the policy module along with the time step and task indicator variable, just as in the proposed model. (A minimal code sketch of the Concat, Gated-Attention, and FiLM fusion operations is given below, after Section E.)" }, { "heading": "E SCALABILITY EXPERIMENTS", "text": "We perform additional experiments in a symbolic 2D environment to test the scalability of our model with respect to vocabulary size. We use 5 different abstract attribute types (e.g., object type, size, color, texture), where each attribute can take on one of K values (e.g., the ‘color’ attribute can take on values ‘blue’, ‘red’, etc.). We construct a square maze of size 7×7, and spawn the agent with a 3×5 field of view and 5 candidate objects at random locations, identical to the Hard setting in the 3D environment. The questions and instructions are created identically to the 3D datasets, except for superlative questions and instructions. We assume perfect perception, meaning that the dual-attention unit receives convolutional output which detects each attribute perfectly. We perform experiments with different values of K ∈ {5, 20, 100}, leading to a vocabulary size of up to 500 words and more than 10^10 instructions and 10^8 questions.

The results in Table 6 indicate that the cross-task knowledge transfer performance scales well with vocabulary size. Furthermore, results in the previous subsection indicate that attributes can be swapped in and out as per requirement due to the modularity and interpretability of the model."
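Referring back to the baselines in D.3, here is a minimal sketch of the Concat, Gated-Attention, and FiLM fusion operations; the tensor shapes, the sigmoid gating, and the inline projection layers are illustrative assumptions rather than the exact baseline implementations.

```python
import torch
import torch.nn as nn

V, H, W, d = 50, 9, 17, 32       # illustrative shapes
x_I = torch.randn(V, H, W)       # convolutional image representation
x_sent = torch.randn(d)          # sentence-level text representation
x_bow = torch.randn(V)           # bag-of-words text representation

# Concat: flatten and concatenate all representations.
x_concat = torch.cat([x_I.flatten(), x_bow, x_sent])

# Gated-Attention: project the sentence embedding to one gate per channel.
gate = torch.sigmoid(nn.Linear(d, V)(x_sent)).view(V, 1, 1)
x_ga = (gate * x_I).flatten()

# FiLM: x_S = gamma(x_sent) * x_I + beta(x_sent), per-channel scale and shift.
gamma = nn.Linear(d, V)(x_sent).view(V, 1, 1)
beta = nn.Linear(d, V)(x_sent).view(V, 1, 1)
x_film = (gamma * x_I + beta).flatten()
```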
}, { "heading": "F HOUSE3D EXPERIMENTS", "text": "In the House3D domain, we train on one house environment and randomize the colors of each object at the start of each episode. The agent’s spawn location is fixed. We create instructions and questions dataset for this house similar to the Doom domain. The House3D objects used in our experiments are illustrated in Figure 15. Instructions and questions used for training and evaluation are listed in Table 8.\nSGN EQA Model Train Test Train Test\nText only 0.63 0.33 0.22 0.23 Image only 0.28 0.01 0.12 0.22 Concat 0.65 0.13 0.31 0.13 GA 0.98 0.20 0.92 0.03 FiLM 0.99 0.37 0.92 0.24 PACMAN 0.73 0.20 0.40 0.21 Dual-Attention 0.99 0.47 0.89 0.29\nIn Table 7, we report the train and test performance of all the models on both SGN and EQA. The results are similar as in Doom: the Dual-Attention model outperforms the baselines by a considerable margin." } ]
2019
null
SP:27d5ff5c5032974b4cc0c6af29e414d496d99dfd
[ "This paper introduces a structured drop-in replacement for linear layers in a neural network, referred to as Kaleidoscope matrices. The class of such matrices are proven to be highly expressive and includes a very general class of sparse matrices, including convolution, Fastfood, and permutation matrices. Experiments are carried in a variety of settings: (i) can nearly replace a series of hand-designed feature extractor, (ii) can perform better than fixed permutation matrices (though parameter count also increased by 10%), (iii) can learn permutations, and (iv) can help reduce parameter count and increase inference speed with a small performance degradation of 1.0 BLEU on machine translation.", "The authors introduce kaleidoscope matrices (K-matrices) and propose to use them as a substitute for structured matrices arising in ML applications (e.g. circulant matrix used for the convolution operation). The authors prove that K-matrices are expressive enough to capture any structured matrix with near-optimal space and matvec time complexity. The authors demonstrate that learnable K-matrices achieve similar metrics compared to hand-crafted features on speech processing and computer vision tasks, can learn from permuted images, achieve performance close to a CNN trained on unpermuted images and demonstrate the improvement of inference speed of a transformer-based architecture for a machine translation task." ]
Modern neural network architectures use structured linear transformations, such as low-rank matrices, sparse matrices, permutations, and the Fourier transform, to improve inference speed and reduce memory usage compared to general linear maps. However, choosing which of the myriad structured transformations to use (and its associated parameterization) is a laborious task that requires trading off speed, space, and accuracy. We consider a different approach: we introduce a family of matrices called kaleidoscope matrices (K-matrices) that provably capture any structured matrix with near-optimal space (parameter) and time (arithmetic operation) complexity. We empirically validate that K-matrices can be automatically learned within end-to-end pipelines to replace hand-crafted procedures, in order to improve model quality. For example, replacing channel shuffles in ShuffleNet improves classification accuracy on ImageNet by up to 5%. K-matrices can also simplify hand-engineered pipelines—we replace filter bank feature computation in speech data preprocessing with a learnable kaleidoscope layer, resulting in only 0.4% loss in accuracy on the TIMIT speech recognition task. In addition, K-matrices can capture latent structure in models: for a challenging permuted image classification task, a K-matrix based representation of permutations is able to learn the right latent structure and improves accuracy of a downstream convolutional model by over 9%. We provide a practically efficient implementation of our approach, and use K-matrices in a Transformer network to attain 36% faster end-to-end inference speed on a language translation task.
[ { "affiliations": [], "name": "Tri Dao" }, { "affiliations": [], "name": "Nimit Sharad Sohoni" }, { "affiliations": [], "name": "Albert Gu" }, { "affiliations": [], "name": "Matthew Eichhorn" }, { "affiliations": [], "name": "Amit Blonder" }, { "affiliations": [], "name": "Megan Leszczynski" }, { "affiliations": [], "name": "Atri Rudra" }, { "affiliations": [], "name": "Christopher Ré" } ]
[ { "authors": [ "Martin Arjovsky", "Amar Shah", "Yoshua Bengio" ], "title": "Unitary evolution recurrent neural networks", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Shaojie Bai", "J. Zico Kolter", "Vladlen Koltun" ], "title": "An empirical evaluation of generic convolutional and recurrent networks for sequence modeling", "venue": "arXiv preprint arXiv:1803.01271,", "year": 2018 }, { "authors": [ "Peter L. Bartlett", "Vitaly Maiorov", "Ron Meir" ], "title": "Almost linear VC dimension bounds for piecewise polynomial networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 1999 }, { "authors": [ "Peter Bürgisser", "Michael Clausen", "Mohammad A. Shokrollahi" ], "title": "Algebraic complexity theory, volume 315", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Kyunghyun Cho", "Bart Van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "venue": "arXiv preprint arXiv:1406.1078,", "year": 2014 }, { "authors": [ "Krzysztof Choromanski", "Mark Rowland", "Wenyu Chen", "Adrian Weller" ], "title": "Unifying orthogonal Monte Carlo methods", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Ronan Collobert", "Christian Puhrsch", "Gabriel Synnaeve" ], "title": "Wav2Letter: an end-to-end ConvNetbased speech recognition system", "venue": "arXiv preprint arXiv:1609.03193,", "year": 2016 }, { "authors": [ "James W. Cooley", "Peter A.W. Lewis", "Peter D. Welch" ], "title": "The fast fourier transform and its applications", "venue": "IEEE Transactions on Education,", "year": 1969 }, { "authors": [ "Tri Dao", "Albert Gu", "Matthew Eichhorn", "Atri Rudra", "Christopher Ré" ], "title": "Learning fast algorithms for linear transforms using butterfly factorizations", "venue": "In The International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Christopher De Sa", "Albert Gu", "Rohan Puttagunta", "Christopher Ré", "Atri Rudra" ], "title": "A two-pronged progress in structured dense matrix vector multiplication", "venue": "In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms,", "year": 2018 }, { "authors": [ "Tim Dettmers", "Luke Zettlemoyer" ], "title": "Sparse networks from scratch: Faster training without losing performance", "venue": "arXiv preprint arXiv:1907.04840,", "year": 2019 }, { "authors": [ "J.R. Driscoll", "D.M. Healy", "Jr.", "D.N. Rockmore" ], "title": "Fast discrete polynomial transforms with applications to data analysis for distance transitive graphs", "venue": "SIAM J. Comput.,", "year": 1997 }, { "authors": [ "Utku Evci", "Trevor Gale", "Jacob Menick", "Pablo S. Castro", "Erich Elsen" ], "title": "Rigging the lottery: Making all tickets winners", "venue": null, "year": 1911 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Hormozd Gahvari", "Mark Hoemmen", "James Demmel", "Katherine Yelick" ], "title": "Benchmarking sparse matrix-vector multiply in five minutes", "venue": "In SPEC Benchmark Workshop,", "year": 2007 }, { "authors": [ "John S. Garofolo", "Lori F. Lamel", "William M. Fisher", "Jonathan G. Fiscus", "David S. 
Pallett", "Nancy L. Dahlgren", "Victor Zue" ], "title": "TIMIT acoustic-phonetic continuous speech corpus LDC93S1", "venue": "Web Download. Philadelphia: Linguistic Data Consortium,", "year": 1993 }, { "authors": [ "Pegah Ghahremani", "Vimal Manohar", "Daniel Povey", "Sanjeev Khudanpur" ], "title": "Acoustic modelling from the signal domain using CNNs", "venue": "In Interspeech,", "year": 2016 }, { "authors": [ "Scott Gray", "Alec Radford", "Diederik P. Kingma" ], "title": "GPU kernels for block-sparse weights", "venue": "arXiv preprint arXiv:1711.09224,", "year": 2017 }, { "authors": [ "Jiuxiang Gu", "Zhenhua Wang", "Jason Kuen", "Lianyang Ma", "Amir Shahroudy", "Bing Shuai", "Ting Liu", "Xingxing Wang", "Li Wang", "Gang Wang", "Jianfei Cai", "Tsuhan Chen" ], "title": "Recent advances in convolutional neural networks", "venue": "Pattern Recognition,", "year": 2018 }, { "authors": [ "Song Han", "Huizi Mao", "William J. Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "Fredric J. Harris" ], "title": "On the use of windows for harmonic analysis with the discrete fourier transform", "venue": "In Proceedings of the IEEE,", "year": 1978 }, { "authors": [ "Nick Harvey", "Christopher Liaw", "Abbas Mehrabian" ], "title": "Nearly-tight VC-dimension bounds for piecewise linear neural networks", "venue": "Proceedings of the 2017 Conference on Learning Theory,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "David P. Helmbold", "Manfred K. Warmuth" ], "title": "Learning permutations with exponential weights", "venue": "Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "Alston S. Householder" ], "title": "Unitary triangularization of a nonsymmetric matrix", "venue": "J. ACM,", "year": 1958 }, { "authors": [ "Li Jing", "Yichen Shen", "Tena Dubcek", "John Peurifoy", "Scott Skirlo", "Yann LeCun", "Max Tegmark", "Marin Soljačić" ], "title": "Tunable efficient unitary neural networks (eunn) and their application to rnns", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Thomas Kailath", "Sun-Yuan Kung", "Martin Morf" ], "title": "Displacement ranks of matrices and linear equations", "venue": "Journal of Mathematical Analysis and Applications,", "year": 1979 }, { "authors": [ "Donald Ervin Knuth" ], "title": "The art of computer programming, Volume 3: Sorting and Searching", "venue": "Pearson Education,", "year": 1997 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Quoc Le", "Tamás Sarlós", "Alexander Smola" ], "title": "Fastfood-computing hilbert space expansions in loglinear time", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Quoc V. Le", "Navdeep Jaitly", "Geoffrey E. 
Hinton" ], "title": "A simple way to initialize recurrent networks of rectified linear units", "venue": "arXiv preprint arXiv:1504.00941,", "year": 2015 }, { "authors": [ "Lisha Li", "Kevin Jamieson", "Giulia DeSalvo", "Afshin Rostamizadeh", "Ameet Talwalkar" ], "title": "Hyperband: A novel bandit-based approach to hyperparameter optimization", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Yingzhou Li", "Haizhao Yang", "Lexing Ying" ], "title": "Multidimensional butterfly factorization", "venue": "Applied and Computational Harmonic Analysis,", "year": 2018 }, { "authors": [ "Fu-Hua Liu", "Richard M. Stern", "Xuedong Huang", "Alejandro Acero" ], "title": "Efficient cepstral normalization for robust speech recognition", "venue": "In ARPA Workshop on Human Language Technology,", "year": 1993 }, { "authors": [ "Jiancheng Lyu", "Shuai Zhang", "Yingyong Qi", "Jack Xin" ], "title": "Autoshufflenet: Learning permutation matrices via an exact lipschitz continuous penalty in deep convolutional neural networks", "venue": null, "year": 1901 }, { "authors": [ "J. Makhoul" ], "title": "A fast cosine transform in one and two dimensions", "venue": "IEEE Transactions on Acoustics, Speech, and Signal Processing,", "year": 1980 }, { "authors": [ "Michael Mathieu", "Yann LeCun" ], "title": "Fast approximation of rotations and Hessians matrices", "venue": "arXiv preprint arXiv:1404.7195,", "year": 2014 }, { "authors": [ "Gonzalo Mena", "David Belanger", "Scott Linderman", "Jasper Snoek" ], "title": "Learning latent permutations with Gumbel-Sinkhorn networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Zakaria Mhammedi", "Andrew Hellicar", "Ashfaqur Rahman", "James Bailey" ], "title": "Efficient orthogonal parametrisation of recurrent neural networks using householder reflections", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Decebal C. Mocanu", "Elena Mocanu", "Peter Stone", "Phuong H. 
Nguyen", "Madeleine Gibescu", "Antonio Liotta" ], "title": "Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science", "venue": "Nature Communications,", "year": 2018 }, { "authors": [ "Marcin Moczulski", "Misha Denil", "Jeremy Appleyard", "Nando de Freitas" ], "title": "ACDC: a structured efficient linear layer", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Hesham Mostafa", "Xin Wang" ], "title": "Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization", "venue": "In The International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Marina Munkhoeva", "Yermek Kapushev", "Evgeny Burnaev", "Ivan Oseledets" ], "title": "Quadrature-based features for kernel approximation", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Vadim Olshevsky", "Mohammad Amin Shokrollahi" ], "title": "Matrix-vector product for confluent Cauchylike matrices with application to confluent rational interpolation", "venue": "In Proceedings of the ThirtySecond Annual ACM Symposium on Theory of Computing, May 21-23,", "year": 2000 }, { "authors": [ "Myle Ott", "Sergey Edunov", "Alexei Baevski", "Angela Fan", "Sam Gross", "Nathan Ng", "David Grangier", "Michael Auli" ], "title": "fairseq: A fast, extensible toolkit for sequence modeling", "venue": "In Proceedings of NAACL-HLT 2019: Demonstrations,", "year": 2019 }, { "authors": [ "Dimitri Palaz", "Ronan Collobert", "Mathew Magimai-Doss" ], "title": "Estimating phoneme class conditional probabilities from raw speech signal using convolutional neural networks", "venue": "In Interspeech,", "year": 2013 }, { "authors": [ "Kuldip Paliwal" ], "title": "On the use of filter-bank energies as features for robust speech recognition", "venue": "In International Symposium on Signal Processing and its Applications (ISSPA),", "year": 1999 }, { "authors": [ "Victor Y. Pan" ], "title": "Structured Matrices and Polynomials: Unified Superfast Algorithms", "venue": null, "year": 2001 }, { "authors": [ "Victor M. Panaretos", "Shahin Tavakoli" ], "title": "Fourier analysis of stationary time series in function space", "venue": "The Annals of Statistics,", "year": 2013 }, { "authors": [ "D. Stott Parker" ], "title": "Random butterfly transformations with applications in computational linear algebra", "venue": "Technical report,", "year": 1995 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": "In Advances in Neural Information Processing Systems (NeurIPS) - Autodiff Workshop,", "year": 2017 }, { "authors": [ "Daniel Povey", "Arnab Ghoshal", "Gilles Boulianne", "Lukas Burget", "Ondrej Glembek", "Nagendra Goel", "Mirko Hannemann", "Petr Motlicek", "Yanmin Qian", "Petr Schwarz", "Jan Silovsky", "Georg Stemmer", "Karel Vesely" ], "title": "The kaldi speech recognition toolkit", "venue": "In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. 
IEEE Signal Processing Society,", "year": 2011 }, { "authors": [ "Mirco Ravanelli", "Yoshua Bengio" ], "title": "Speaker recognition from raw waveform with sincnet", "venue": "In IEEE Workshop on Spoken Language Technology,", "year": 2018 }, { "authors": [ "Mirco Ravanelli", "Philemon Brakel", "Maurizio Omologo", "Yoshua Bengio" ], "title": "Light gated recurrent units for speech recognition", "venue": "In IEEE Transactions on Emerging Topics in Computational Intelligence,", "year": 2018 }, { "authors": [ "Mirco Ravanelli", "Titouan Parcollet", "Yoshua Bengio" ], "title": "The PyTorch-Kaldi speech recognition toolkit", "venue": "In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Vladimir Rokhlin", "Mark Tygert" ], "title": "Fast algorithms for spherical harmonic expansions", "venue": "SIAM Journal on Scientific Computing,", "year": 2006 }, { "authors": [ "Leonid I. Rudin", "Stanley Osher", "Emad Fatemi" ], "title": "Nonlinear total variation based noise removal algorithms", "venue": "Physica D: nonlinear phenomena,", "year": 1992 }, { "authors": [ "Tara N. Sainath", "Brian Kingsbury", "Vikas Sindhwani", "Ebru Arisoy", "Bhuvana Ramabhadran" ], "title": "Lowrank matrix factorization for deep neural network training with high-dimensional output targets", "venue": "In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing,", "year": 2013 }, { "authors": [ "Tara N. Sainath", "Ron J. Weiss", "Andrew Senior", "Kevin W. Wilson", "Oriol Vinyals" ], "title": "Learning the speech front-end with raw waveform CLDNNs", "venue": "In Interspeech,", "year": 2015 }, { "authors": [ "Vikas Sindhwani", "Tara N. Sainath", "Sanjiv Kumar" ], "title": "Structured transforms for small-footprint deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "S.S. Stevens", "J. Volkmann", "E.B. Newman" ], "title": "A scale for the measurement of the psychological magnitude pitch", "venue": "Journal of the Acoustic Society of America,", "year": 1937 }, { "authors": [ "G. Szegö" ], "title": "Orthogonal Polynomials. Number v. 23 in American Mathematical Society colloquium publications", "venue": "American Mathematical Society,", "year": 1967 }, { "authors": [ "Anna T. Thomas", "Albert Gu", "Tri Dao", "Atri Rudra", "Christopher Ré" ], "title": "Learning compressed transforms with low displacement rank", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Trieu H. Trinh", "Andrew M Dai", "Minh-Thang Luong", "Quoc V. 
Le" ], "title": "Learning longer-term dependencies in RNNs with auxiliary losses", "venue": "arXiv preprint arXiv:1803.00144,", "year": 2018 }, { "authors": [ "Joseph Tsidulko" ], "title": "Google showcases on-device artificial intelligence breakthroughs at I/O", "venue": null, "year": 2019 }, { "authors": [ "Mark Tygert" ], "title": "Fast algorithms for spherical harmonic expansions, ii", "venue": "Journal of Computational Physics,", "year": 2008 }, { "authors": [ "Mark Tygert" ], "title": "Fast algorithms for spherical harmonic expansions, iii", "venue": "Journal of Computational Physics,", "year": 2010 }, { "authors": [ "Mark Tygert" ], "title": "Recurrence relations and fast algorithms", "venue": "Applied and Computational Harmonic Analysis,", "year": 2010 }, { "authors": [ "Aaron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "WaveNet: A generative model for raw audio", "venue": "arXiv preprint arXiv:1609.03499,", "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Scott Wisdom", "Thomas Powers", "John Hershey", "Jonathan Le Roux", "Les Atlas" ], "title": "Full-capacity unitary recurrent neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Felix Wu", "Angela Fan", "Alexei Baevski", "Yann N Dauphin", "Michael Auli" ], "title": "Pay less attention with lightweight and dynamic convolutions", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Felix X. Yu", "Sanjiv Kumar", "Henry A. Rowley", "Shih-Fu Chang" ], "title": "Compact nonlinear maps and circulant extensions", "venue": "CoRR, abs/1503.03893,", "year": 2015 }, { "authors": [ "Felix X. Yu", "Ananda T. Suresh", "Krzysztof M. Choromanski", "Daniel N. 
Holtmann-Rice", "Sanjiv Kumar" ], "title": "Orthogonal random features", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Xiyu Yu", "Tongliang Liu", "Xinchao Wang", "Dacheng Tao" ], "title": "On compressing deep models by low rank and sparse decomposition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Neil Zeghidour", "Nicolas Usunier", "Iasonas Kokkinos", "Thomas Schatz", "Gabriel Synnaeve", "Emmanuel Dupoux" ], "title": "Learning filterbanks from raw speech for phone recognition", "venue": "In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "Xiangyu Zhang", "Xinyu Zhou", "Mengxiao Lin", "Jian Sun" ], "title": "Shufflenet: An extremely efficient convolutional neural network for mobile devices", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Ritchie Zhao", "Yuwei Hu", "Jordan Dotzel", "Christopher De Sa", "Zhiru Zhang" ], "title": "Building efficient deep neural networks with unitary group convolutions", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Michael Zhu", "Suyog Gupta" ], "title": "To prune, or not to prune: exploring the efficacy of pruning for model compression", "venue": "arXiv preprint arXiv:1710.01878,", "year": 2017 }, { "authors": [ "Bengio" ], "title": "2018) is a CNN-based architecture parameterized with sinc functions, designed so that the first convolutional layer imitates a band-pass filter. Zeghidour et al. (2018) formulate a learnable version of a filter bank featurization; their filters are initialized as an approximation of MFSC features and then fine-tuned jointly with the rest of the model", "venue": "Sainath et al", "year": 2015 }, { "authors": [ "Le" ], "title": "Permuted MNIST task, in which the model has to classify digit images with all the pixels permuted. Many new RNN architectures, with unitary or orthogonal weight matrices to avoid gradient explosion or vanishing, have been proposed and tested on this task", "venue": "(Le et al.,", "year": 2015 }, { "authors": [ "Zhao" ], "title": "2019) propose to instead use the Hadamard transform before and after each grouped", "venue": null, "year": 2019 }, { "authors": [], "title": "P, where B ∈ B and P is a permutation (the bit reversal permutation)", "venue": "From Theorem", "year": 1995 }, { "authors": [ "Lemma K" ], "title": "Let M be an n× n orthogonal/unitary matrix. Then M ∈ (OBB)n−1. Proof. We consider the QR decomposition of M. It is known that we can compose M into a product of n− 1 Householder reflections and an orthogonal/unitary diagonal matrix (Householder, 1958).10 From Lemma K.1, each Householder reflection is in OBB", "venue": null, "year": 1958 }, { "authors": [ "Thomas" ], "title": "2018) for the case where the entries of the weight matrices interact multiplicatively, but with polynomially bounded degrees. This proof is similar to the VC bound for ReLU networks whose weight matrices are butterfly matrices (Dao et al., 2019)", "venue": null, "year": 2019 }, { "authors": [ "Thomas" ], "title": "2018), we simply need to check that the entries of the linear layer, as polynomials of the parameters", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Structured linear maps are fundamental and ubiquitous in modern machine learning. Their efficiency in speed (fast algorithms) and space (few parameters) can reduce computation and memory usage. The class of structured linear maps includes fixed specialized transforms such as the discrete Fourier transform (DFT) and Hadamard transform used in signal processing (Cooley et al., 1969), convolutions for image, language, and speech modeling (Gu et al., 2018), and low-rank and sparse matrices for efficient storage and inference on edge devices (Yu et al., 2017). Forms of structure such as sparsity have been at the forefront of recent advances in ML (Frankle & Carbin, 2019), and are critical for on-device and energy-efficient models, two application areas of tremendous recent interest (Tsidulko, 2019; Schwartz et al., 2019).\nThere are a plethora of classes of structured linear maps, each with a significantly different representation, algorithm, and implementation. They have different tradeoffs in terms of inference speed, training speed, and accuracy, and the conventional wisdom is that no one class works uniformly well across all applications. As a result, ML practitioners currently hand-pick specific classes of structured linear maps for each of their applications. This is a difficult and labor-intensive task. ∗These authors contributed equally.\nIdeally, these problems should be addressed with a universal representation for structured linear maps: (i) Such a parameterization should be expressive enough to capture important classes of structure, with a nearly tight parameter count and runtime: the space required to represent the linear map should be close to optimal, and the resulting algorithm for matrix vector multiplication should be close to the fastest possible algorithm. (ii) The parameterization should be differentiable in order to be learned as a component of end-to-end ML pipelines, enabling it to easily be used as a drop-in replacement for manually engineered structured components. (iii) The parameterization should admit practically efficient algorithms for training and inference, in terms of both speed and memory.\nCurrently, no class of structured linear maps satisfies all of these criteria. Most existing classes of structured matrices—such as the class of low-rank matrices—fail to tightly capture other important types of structure. For example, the DFT has an efficient structured representation of size O(n log n), yet cannot be well-approximated by a low-rank transform of size n2. Another important type of structure is sparsity; lots of exciting recent work has focused on the design of sparse neural networks. For instance, sparse networks of comparable quality to their dense counterparts—yet an order of magnitude fewer parameters—may be created via pruning (Han et al., 2016) or by identifying “winning lottery tickets” (Frankle & Carbin, 2019). In parallel, recent theoretical results by De Sa et al. (2018) show that sparsity and the notion of structure in linear maps are fundamentally linked: any given matrix can be factored into a product of sparse matrices with total parameter count equal to the efficiency (i.e. minimum arithmetic circuit complexity) of the matrix. In other words, the representation of linear maps as products of sparse matrices tightly captures all forms of structure. 
Unfortunately, it is difficult to actually learn these sparse factorizations, because it requires finding the sparsity patterns of the factors—a discrete, nondifferentiable search problem. Thus, current methods for training sparse neural networks are either expensive (Frankle & Carbin, 2019) or rely on highly hand-tuned heuristics for evolving the sparsity patterns throughout training (Dettmers & Zettlemoyer, 2019).\nBy contrast, we propose a representation of linear maps as products of sparse matrices with specific predefined sparsity patterns (Section 2), and show that it does satisfy our desiderata: it retains the expressiveness of unstructured sparsity, while being differentiably learnable and efficient like other structured representations. Concretely, our representation is based on products of a particular building block known as a butterfly matrix (Parker, 1995; Dao et al., 2019); we term such products kaleidoscope matrices (K-matrices for short).1 (i) Our main theoretical contribution (Section 2.3) concerns the expressiveness of this representation: we show that any structured linear map (i.e. one that can be applied using s n2 arithmetic operations) can be represented as a K-matrix, with a nearly tight number of parameters and algorithmic complexity (both on the order of s up to logarithmic factors). (ii) The kaleidoscope representation is fully differentiable; thus, all the parameters of a K-matrix can be learned using standard optimization algorithms such as SGD. (iii) Because of their simple, regular structure, K-matrices are practical and easy to use. We provide memory- and runtime-efficient implementations of K-matrix multiplication on CPU and GPU for training and inference, with a simple PyTorch interface.\nWe empirically validate that, due to their expressiveness, learnability, and efficiency, we can use K-matrices as a drop-in replacement for linear components in deep learning models. In Section 3.1, we use K-matrices to replace hand-crafted structure in two different settings. We simplify the six steps of filter bank computation in speech preprocessing into a single learnable K-matrix step, with only an 0.4% accuracy drop on the TIMIT speech recognition task. We use K-matrices to replace channel shuffles in ShuffleNet, improving ImageNet classification accuracy by up to 5%. In Section 3.2, we show that K-matrices can successfully recover latent structure; a K-matrix is used to learn latent permutations in a permuted image dataset (Permuted CIFAR), resulting in 9 points higher accuracy in a downstream CNN model. In Section 3.3, we show that our efficient K-matrix multiplication implementation can be applied to speed up real-world tasks: we replace linear layers with K-matrices in a DynamicConv-Transformer network to attain 36% faster end-to-end inference speed with a 1.0 drop in BLEU score on the IWSLT14 German→English translation task.\n1A group of butterflies is known as a kaleidoscope." }, { "heading": "2 A NEARLY-TIGHT PARAMETERIZATION OF ALL STRUCTURED MATRICES", "text": "We first present some background on the characterization of all structured matrices (i.e. those with subquadratic multiplication algorithms) as products of sparse factors, along with the definition of butterfly matrices. We then propose a differentiable family of kaleidoscope matrices, composed of products of butterfly matrices, and prove their expressivity: all structured matrices can be represented in this form, with almost optimal parameter count and runtime." 
}, { "heading": "2.1 BACKGROUND: SPARSE FACTORIZATION, BUTTERFLY MATRICES", "text": "Sparse factorization One method of constructing matrices with theoretically fast matrix-vector multiplication algorithms is as a product of sparse matrices, so that multiplication by an arbitrary vector has cost proportional to the total number of nonzeros (NNZ) of the matrices in the product. Surprisingly, the converse is also true. De Sa et al. (2018) introduce the concept of sparse product width (SPW), which roughly corresponds to the total NNZ in a factorization of a matrix, and show that it is an asymptotically optimal descriptor of the algorithmic complexity of matrix-vector multiplication (Bürgisser et al., 2013). We use a similar argument in the proof of our main theorem (Section 2.3). However, attempting to learn such a factorization of a given matrix is difficult, as the sparsity constraint is not continuous. Moreover, because of the possibly irregular sparsity patterns, it is difficult to realize the theoretical speedups in practice (Gray et al., 2017; Gahvari et al., 2007).\nButterfly matrices Butterfly matrices, encoding the recursive divide-and-conquer structure of the fast Fourier transform (FFT) algorithm, have long been used in numerical linear algebra (Parker, 1995; Li et al., 2015) and machine learning (Mathieu & LeCun, 2014; Jing et al., 2017; Munkhoeva et al., 2018; Dao et al., 2019; Choromanski et al., 2019). Here we define butterfly matrices, which we use as a building block for our hierarchy of kaleidoscope matrices. Definition 2.1. A butterfly factor of size k ≥ 2 (denoted as Bk) is a matrix of the form Bk =[ D1 D2 D3 D4 ] where each Di is a k2 × k 2 diagonal matrix. We restrict k to be a power of 2.\nDefinition 2.2. A butterfly factor matrix of size n with block size k (denoted as B(n)k ) is a block diagonal matrix of nk (possibly different) butterfly factors of size k:\nB (n) k = diag ( [Bk]1 , [Bk]2 , . . . , [Bk]nk ) Definition 2.3. A butterfly matrix of size n (denoted as B(n)) is a matrix that can be expressed as a product of butterfly factor matrices: B(n) = B(n)n B\n(n) n 2 . . .B (n) 2 . Equivalently, we may define B (n)\nrecursively as a matrix that can be expressed in the following form:\nB(n) = B(n)n\n[ [B(\nn 2 )]1 0 0 [B( n 2 )]2 ] (Note that [B( n 2 )]1 and [B( n 2 )]2 may be different.)" }, { "heading": "2.2 THE KALEIDOSCOPE HIERARCHY", "text": "Using the building block of butterfly matrices, we formally define the kaleidoscope (BB∗) hierarchy and prove its expressiveness. This class of matrices serves as a fully differentiable alternative to products of sparse matrices (Section 2.1), with similar expressivity. In Appendix J, we show where various common structured matrix classes are located within this hierarchy.\nThe building block for this hierarchy is the product of a butterfly matrix and the (conjugate) transpose of another butterfly matrix (which is simply a product of butterfly factors taken in the opposite order). Figure 1 visualizes the sparsity patterns of the butterfly factors in BB∗, where the red and blue dots represent the allowed locations of nonzero entries. Definition 2.4 (Kaleidoscope hierarchy, kaleidoscope matrices).\n• Define B as the set of all matrices that can be expressed in the form B(n) (for some n). • Define BB∗ as the set of matrices M of the form M = M1M∗2 for some M1,M2 ∈ B.\n• Define (BB∗)w as the set of matrices M that can be expressed as M = Mw . . .M2M1, with each Mi ∈ BB∗ (1 ≤ i ≤ w). (The notation w represents width.) 
• Define (BB∗)^w_e as the set of n × n matrices M that can be expressed as M = S E Sᵀ for some en × en matrix E ∈ (BB∗)^w, where S = [I_n 0 … 0] ∈ F^{n×en} (i.e. M is the upper-left corner of E). (The notation e represents expansion relative to n.)

• M is a kaleidoscope matrix, abbreviated as K-matrix, if M ∈ (BB∗)^w_e for some w and e.

The kaleidoscope hierarchy, or (BB∗) hierarchy, refers to the families of matrices (BB∗)^1_e ⊆ (BB∗)^2_e ⊆ …, for a fixed expansion factor e. Each butterfly matrix can represent the identity matrix, so (BB∗)^w_e ⊆ (BB∗)^{w+1}_e. We show that the inclusion is proper in Appendix E. This hierarchy generalizes the BP hierarchy proposed by Dao et al. (2019), as shown in Appendix J.

Efficiency in space and speed Each matrix in (BB∗)^w_e is a product of 2w total butterfly matrices and transposes of butterfly matrices, each of which is in turn a product of log(ne) factors with 2ne nonzeros (NNZ) each. Therefore, each matrix in (BB∗)^w_e has 4wne log(ne) parameters and a matrix-vector multiplication algorithm of complexity O(wne log(ne)) (by multiplying the vector with each sparse factor sequentially). We prove this more formally in Appendix E. For the applications in Section 3, w and e are small constants (up to 2), so those K-matrices have O(n log n) parameters and runtime." }, { "heading": "2.3 ALL LOW-DEPTH STRUCTURED MATRICES ARE IN THE KALEIDOSCOPE HIERARCHY", "text": "We now present our main theoretical result: the fact that general linear transformations, expressed as low-depth linear arithmetic circuits, are captured in the BB∗ hierarchy with low width. Arithmetic circuits are commonly used to formalize algebraic algorithmic complexity (Bürgisser et al., 2013); we include a primer on this in Appendix M. The quantities of interest are the total number of gates in the circuit, representing the total number of steps required to perform the algorithm for a serial processor, and the depth, representing the minimum number of steps required for a parallel processor.

Theorem 1. Let M be an n × n matrix such that multiplication of M times an arbitrary vector v can be represented as a linear arithmetic circuit with s total gates and depth d. Then, M ∈ (BB∗)^{O(d)}_{O(s/n)}.

The representation of such a matrix M in the BB∗ hierarchy has O(ds log s) parameters and yields an O(ds log s) multiplication algorithm, compared to the O(s) parameters and runtime of the circuit representation. To the best of our knowledge, the most general classes of efficient matrices that have been studied (De Sa et al., 2018) have depth d on the order of log n or poly log n. In these cases, the representation with K-matrices matches the best known bounds up to polylogarithmic factors.

The crux of the proof of Theorem 1 (shown in Appendix F) is the construction of an almost tight representation of any sparse matrix as a K-matrix (i.e. a product of butterfly matrices): specifically, we show that any n × n sparse matrix with s nonzeros is in (BB∗)^{O(⌈s/n⌉)}_{O(1)} (Theorem 3, Appendix I). We then leverage the expressivity result of products of sparse matrices to represent all arithmetic circuits (similar to the sparse product width result of De Sa et al. (2018) referenced in Section 2.1) to complete the proof of Theorem 1.
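To make the butterfly building block of Definitions 2.1–2.3 concrete, here is a minimal PyTorch sketch of the O(n log n) butterfly matrix-vector product; the tensor layout chosen for the diagonals is our own assumption, not the layout used in the authors' released implementation.

```python
import torch

def butterfly_factor_matvec(diags, x, k):
    """Multiply a butterfly factor matrix B_k^(n) by x in O(n) time.

    diags: tensor of shape (4, n // 2) holding the diagonals D1..D4 of every
    size-k factor along the block diagonal; x: vector of length n.
    """
    n = x.shape[0]
    x = x.view(n // k, k)                      # one row per butterfly factor
    top, bot = x[:, :k // 2], x[:, k // 2:]
    d1, d2, d3, d4 = (d.view(n // k, k // 2) for d in diags)
    out = torch.cat([d1 * top + d2 * bot, d3 * top + d4 * bot], dim=1)
    return out.reshape(n)

def butterfly_matvec(all_diags, x):
    """Apply B^(n) = B_n^(n) ... B_2^(n) to x in O(n log n) time.

    all_diags[i] holds the diagonals for block size k = 2**(i + 1); the
    factors are applied from k = 2 up to k = n (right-to-left in the product).
    """
    k = 2
    for diags in all_diags:                    # log2(n) factor matrices
        x = butterfly_factor_matvec(diags, x, k)
        k *= 2
    return x

# Example: a random butterfly matrix acting on a length-8 vector.
n = 8
all_diags = [torch.randn(4, n // 2) for _ in range(3)]  # log2(8) = 3 factors
y = butterfly_matvec(all_diags, torch.randn(n))
```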
This intermediate result is also a novel characterization of sparse matrices. For a matrix with s NNZ, the kaleidoscope representation has O(s log n) parameters and runtime, instead of the optimal O(s) parameters and runtime; so, we trade off an extra logarithmic factor in space and time for full differentiability (thanks to the fixed sparsity patterns in the representation). The intuition behind the result is as follows: a sparse matrix with s NNZ can be written as a sum of ⌈s/n⌉ matrices, each with at most n NNZ. Any n × n matrix with at most n NNZ, up to permuting the rows and columns, is a product of two butterfly matrices (Lemma I.1). Sorting networks (Knuth, 1997) imply that permutation matrices are in (BB∗)^{O(log n)}, but we tighten the result to show that they are in fact in BB∗ (Theorem 2, Appendix G). We thus obtain a kaleidoscope representation for each summand matrix with O(n log n) parameters. By the addition closure property of the BB∗ hierarchy (Lemma H.5), each sparse matrix with s NNZ then has a kaleidoscope representation with O(s log n) parameters.

Tight representation for structured linear maps common in ML Even though Theorem 1 suggests that the kaleidoscope representation can be loose by logarithmic factors, many structured linear maps common in ML can be represented in this hierarchy with an optimal number of parameters and runtime compared to the best known parameterizations, up to constant factors. Appendix J includes several examples such as discrete transforms (the DFT, discrete cosine transform (DCT), discrete sine transform (DST), and Hadamard transform), convolution (i.e. circulant matrices), Toeplitz matrices (Gray, 2006), structured matrices for kernel approximation ((HD)^3 (Yu et al., 2016)) and compact neural network design (Fastfood (Le et al., 2013), ACDC (Moczulski et al., 2016)). There have been other large classes of structured matrices proposed in the machine learning literature, such as Toeplitz-like (Sindhwani et al., 2015) and low displacement rank (LDR) (Thomas et al., 2018), but they are not known to be able to capture these common structures as tightly as K-matrices can. More detailed discussions are in Appendix A." }, { "heading": "2.4 EXTENSIONS", "text": "ReLU networks with low-depth structured weight matrices In Appendix L, we prove that finding an efficient circuit for a ReLU network can be reduced to finding efficient circuits for each of its weight matrices, with at most a constant factor greater size and runtime (i.e. number of gates). We also show that ReLU networks with kaleidoscope weight matrices have near-linear VC dimension in the number of parameters, matching the bound for networks with unconstrained weight matrices (Bartlett et al., 1999; Harvey et al., 2017) and LDR (Thomas et al., 2018). This yields a corresponding sample complexity bound.

Orthogonal kaleidoscope hierarchy Orthogonal butterfly matrices are one commonly used variant due to their improved stability (Parker, 1995), where each butterfly factor is constrained to be orthogonal:

[  C  S
  −S  C ]

with C, S being diagonal and C² + S² = I. Similar to the BB∗ hierarchy, in Appendix K, we define the OBB hierarchy, consisting of products of orthogonal butterfly matrices and diagonal matrices, and show that this hierarchy has the same expressiveness as the BB∗ hierarchy."
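A minimal sketch of such an orthogonal butterfly factor follows; the angle-based parameterization (which enforces C² + S² = I by construction) is a standard choice we assume here, not necessarily the exact construction used in Appendix K.

```python
import torch

def orthogonal_butterfly_factor(theta):
    """Build the k x k orthogonal factor [C S; -S C] from (k/2,) rotation angles."""
    c, s = torch.cos(theta), torch.sin(theta)   # guarantees C^2 + S^2 = I
    C, S = torch.diag(c), torch.diag(s)
    top = torch.cat([C, S], dim=1)
    bot = torch.cat([-S, C], dim=1)
    return torch.cat([top, bot], dim=0)

B = orthogonal_butterfly_factor(torch.randn(4))  # an 8 x 8 orthogonal factor
assert torch.allclose(B @ B.T, torch.eye(8), atol=1e-6)
```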
}, { "heading": "3 EMPIRICAL EVALUATION", "text": "We validate three claims that suggest that kaleidoscopes are a promising technique to learn different types of structure in modern architectures.

1. Section 3.1: for applications in speech and lightweight computer vision relying on highly hand-crafted structured transformations, we show that we can recover—and even improve—the quality of such architectures by simply replacing existing hand-structured components with K-matrices, with only a small overhead in memory and computation.

2. In Section 3.2, for a challenging task with latent structure (Permuted CIFAR-10), a K-matrix-based relaxation of permutations is able to learn the right latent permutation, yielding 9 points better accuracy in a downstream CNN compared to standard RNN and CNN baselines used on such permuted image classification tasks.

3. In Section 3.3, we show that, although not yet highly optimized, our current implementation of K-matrices can improve the inference throughput of DynamicConv Transformer, a state-of-the-art fast machine translation model, by 36%, with only a relatively small drop in translation quality.

In all of the above applications, as K-matrices are fully differentiable, we simply train them jointly with the rest of the model using standard learning algorithms (such as SGD). Full details for all of the experiments (precise architectures, hyperparameters, etc.) are in Appendix B.2" }, { "heading": "3.1 REPLACING HAND-CRAFTED STRUCTURES", "text": "We validate that kaleidoscope matrices can recover or improve on the performance of hand-crafted structure in ML models. For example, a single learnable kaleidoscope layer can be used to replace the hand-engineered filter bank speech preprocessing pipeline with only 0.4% loss in accuracy on the TIMIT speech recognition task (Section 3.1.1). Replacing channel shuffles in ShuffleNet with learnable K-matrices improves classification accuracy on ImageNet by up to 5.0% (Section 3.1.2)." }, { "heading": "3.1.1 SPEECH PREPROCESSING", "text": "We show that K-matrices can remove the need for hand-tuning by significantly simplifying speech recognition data preprocessing pipelines. In particular, we can entirely replace the complex hand-crafted MFSC featurization commonly used in speech recognition tasks with a fully learnable kaleidoscope layer, with only a 0.4% drop in accuracy on the TIMIT speech recognition benchmark. Results are presented in Table 1. Our approach is competitive with the accuracy of standard models that use hand-crafted features, and significantly outperforms current approaches for learning from raw audio input.

Modern speech recognition models currently rely on carefully hand-crafted features extracted from the audio, which are then fed into an acoustic model. By contrast, learning directly from the raw audio—i.e. end-to-end learning from the audio waveform without any manual featurization—obviates the need for this complicated and often expensive preprocessing step. There have been recent attempts to learn directly from raw audio, such as SincNet (Ravanelli & Bengio, 2018); however, they often rely on specialized architectures designed by domain experts. Instead, we use a standard RNN speech recognition architecture, but use a learnable kaleidoscope layer to replace the featurization steps.
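A minimal sketch of this replacement follows (the hand-crafted steps it subsumes are detailed in the next paragraphs): frames of the raw waveform are multiplied by a learnable K-matrix, and the log power spectrum of the result is fed to the Bi-LSTM. The `k_layer` module standing in for a complex-valued K-matrix, and the shapes, are assumptions rather than the authors' exact code.

```python
import torch
import torch.nn as nn

class KaleidoscopeFeaturizer(nn.Module):
    """Framing -> learnable K-matrix -> log power spectrum (sketch of Figure 2)."""
    def __init__(self, k_layer, eps=1e-6):
        super().__init__()
        self.k_layer = k_layer   # assumed: a (complex-output) K-matrix module
        self.eps = eps

    def forward(self, frames):
        # frames: (batch, num_frames, frame_len) real-valued audio frames
        x = self.k_layer(frames)                    # learned linear transform
        return torch.log(x.abs() ** 2 + self.eps)   # log power spectrum
```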
(2018) use a concatenation of three different speech audio featurizations—MFSC, MFCC, and fMLLR—as the neural network input, along with a customized RNN architecture (LiGRU) specifically designed for speech recognition.\nThe baseline architecture takes as input filter bank (MFSC) features, which are a popular standard featurization for speech recognition (Paliwal, 1999) and involve several steps hand-crafted specifically for this domain. These features are extracted from the raw audio waveform, and fed as the input into a Bi-LSTM model. We significantly simplify this pipeline by replacing the featurization step with a trainable kaleidoscope layer that is trained end-to-end together with the Bi-LSTM. The original pipeline and our modified kaleidoscope version are depicted in Figure 2.\nThe computation of MFSC features involves a series of painstakingly hand-designed steps (further described in Appendix B.1), each involving their own hyperparameters: (i) the waveform is framed (split into chunks), (ii) the waveform is dithered (noise is added), (iii) pre-emphasis is applied, (iv) the Hamming window is applied, (v) the FFT is applied and the power spectrum is computed, (vi) the result is mapped to the mel scale (which involves applying a particular linear transformation and then taking the logarithm of the result), (vii) cepstral mean and variance normalization is applied. We replace the last six steps (ii-vii) of this featurization process with a learnable kaleidoscope layer; specifically, after windowing, we multiply the input by a K-matrix, and then compute the logarithm of the power spectrum; the output is fed into the Bi-LSTM model." }, { "heading": "3.1.2 REPLACING CNN CHANNEL SHUFFLE", "text": "We evaluate how K-matrices can improve the quality of hand-crafted, lightweight architectures for computer vision tasks, without the need for hand-tuning. We select ShuffleNet (Zhang et al., 2018), which is a state-of-the-art lightweight CNN architecture that uses a manually designed “channel shuffle” permutation matrix to improve performance. By replacing this fixed permutation with a learnable K-matrix, we achieve up to 5% further improvement in classification accuracy, without hand-tuned components and with a modest space penalty of up to 10%. Results are given in Table 2.\nGrouped convolution (Krizhevsky et al., 2012) is often used to reduce parameter count and speed up inference compared to standard convolution, but, by default, channels in different groups cannot exchange information. To remedy this, ShuffleNet uses a permutation matrix to shuffle the channels after each grouped convolution. Zhao et al. (2019) propose to instead use the Hadamard transform before and after each grouped convolution to mix the channels. In place of these hand-engineered solutions, we use a K-matrix before and after each grouped convolution, and learn these end-to-end together with the rest of the network. As shown in Table 2, across a range of sizes, replacing the channel shuffles with K-matrices results in improved performance at comparable parameter counts." }, { "heading": "3.2 LEARNING A LATENT PERMUTATION", "text": "We show that K-matrices can be used in a challenging task for which existing classes of structured linear maps have not been found suitable. We investigate the problem of image classification on a permuted image dataset (Permuted CIFAR-10). 
This problem is challenging due to the discrete nature of learning the latent permutation of the dataset; we present a differentiable relaxation for this using a K-matrix as a key component. Results are presented in Table 3; compared to methods that do not have a permutation learning step, our approach gets 9 points higher accuracy (84.4% to 93.6%), coming within 2 points of the accuracy on the un-permuted dataset (94.9%).\n4Despite our best effort, we were unable to reproduce the original accuracy reported by Zhang et al. (2018), a problem similarly faced by Zhao et al. (2019) and Lyu et al. (2019). Zhao et al. (2019) use block Hadamard transform and pre-activation ShuffleNet, so their results are not directly comparable with those reported here.\nIn this task, we use a permuted image classification dataset (Permuted CIFAR-10), wherein a fixed global permutation is applied to the pixels of every image in the original input set. Typically, only fully-connected (FC) and recurrent models are applied to such datasets (Le et al., 2015), because the permutation destroys locality in the image, presenting a difficulty for CNNs. However, CNNs are much better-suited for standard image tasks. We thus expect that learning the permutation and then applying a standard CNN should outperform these baselines. As mentioned in Section 2, the kaleidoscope hierarchy provides a nearly tight parameterization of permutations; this makes K-matrices a natural fit for the permutation learning step.\nExperimentally, we use a K-matrix to represent a distribution over permutations, which converges to a single permutation at the end of training. The correct latent structure is learned by applying samples from this distribution to the permuted training images, and minimizing an auxiliary smoothness-based loss that encourages the reconstructed images to be more “natural” (i.e. vary smoothly pixel-to-pixel). The learned permutation is evaluated by training a ResNet18 with the K-matrix permutation layer inserted at the beginning. Full details of our approach are provided in Appendix B.3.\nIn Table 3, we compare our approach to a ResNet18 without this extra K-matrix layer, a ResNet18 with an extra dense matrix at the beginning instead of a K-matrix, and other baselines. As generic representations such as unstructured matrices do not have the requisite properties to fit in the pipeline, these baselines fail to effectively learn the latent permutation. We emphasize that a K-matrix provides this ability to recover latent structure despite not being specialized for permutations. Figure 3 describes the pipeline and displays examples of permuted and unpermuted images." }, { "heading": "3.3 SPEEDING UP INFERENCE", "text": "We evaluate the inference speed benefit of using K-matrices on a real language translation model. We choose the state-of-the-art DynamicConv Transformer translation model (Wu et al., 2019), which offers 20% inference speedup over the standard Transformer model, and replace dense matrices in the decoder’s linear layers with K-matrices, which leads to a further 36% inference speedup (Table 4).\nAs outlined in Section 2.3, K-matrices admit a simple and fast O(n log n) matrix-vector multiplication algorithm. We provide fast implementations of this algorithm in C++ and CUDA, with an interface to PyTorch (Paszke et al., 2017), and use this implementation in our experiments.\nWe use K-matrices to replace all the linear layers in the decoder of DynamicConv (since 90% of inference time is spent in the decoder). 
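To make this concrete, the following is a minimal PyTorch sketch (our own, for illustration) of the O(n log n) butterfly matrix-vector multiplication that the optimized C++/CUDA kernels implement; the twiddle-factor layout and the stage ordering (block size 2 up to n) are our own choices and need not match the API of the released implementation.

import torch

def butterfly_mv(twiddle, x):
    # Multiply a butterfly matrix (class B) by a batch of vectors in
    # O(n log n) time, applying one sparse butterfly factor per stage.
    # twiddle: (log2(n), n // 2, 2, 2); twiddle[k] holds the 2x2 blocks of
    # the butterfly factor matrix that pairs entries 2^k apart.
    # x: (batch, n), with n a power of 2.
    batch, n = x.shape
    for k in range(twiddle.shape[0]):
        s = 1 << k                                 # half the block size
        y = x.reshape(batch, n // (2 * s), 2, s)   # pair entries s apart
        w = twiddle[k].reshape(n // (2 * s), s, 2, 2)
        a, b = y[:, :, 0, :], y[:, :, 1, :]
        x = torch.stack((w[..., 0, 0] * a + w[..., 0, 1] * b,
                         w[..., 1, 0] * a + w[..., 1, 1] * b),
                        dim=2).reshape(batch, n)
    return x

For example, butterfly_mv(torch.randn(3, 4, 2, 2), torch.randn(5, 8)) multiplies a random 8 × 8 butterfly matrix by a batch of 5 vectors, using Θ(n log n) multiply-adds per vector instead of the Θ(n^2) of a dense multiply.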
As shown in Table 4, on the IWSLT-14 German-English translation task, this yields a 25% smaller model with 36% faster inference time on CPU, at the cost of 1.0 drop in BLEU score.5 (Our model also nearly matches the state-of-the-art BLEU performance of 2 years ago obtained by the Transformer model (Vaswani et al., 2017), despite being over 60% faster for inference than the Transformer.) The majority (55%) of inference time is spent in matrix-vector multiplication; our implementation of K-matrix-vector multiplication is about 2 times faster than the optimized implementation of dense matrix-vector multiplication in the Intel MKL library. Direct comparisons of K-matrix multiplication with this and other highly-optimized routines such as the FFT are further detailed in Appendix C." }, { "heading": "4 CONCLUSION", "text": "We address the problem of having to manually choose among the numerous classes of structured linear maps by proposing the universal (expressive, efficient, and learnable) family of kaleidoscope matrices. We prove that K-matrices can represent any structured linear maps with near-optimal space and time complexity. Empirical validations suggest that K-matrices are a promising and flexible way to employ structure in modern ML; they can be used to reduce the need for hand-engineering, capture challenging latent structure, and improve efficiency in models. We are excited about future work on further hardware-optimized implementations of K-matrices, to fully realize the size and speed benefits of structured matrices on a broad array of real-world applications." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank Avner May and Jian Zhang for their helpful feedback.\nWe gratefully acknowledge the support of DARPA under Nos. FA87501720095 (D3M), FA86501827865 (SDH), and FA86501827882 (ASED); NIH under No. U54EB020405 (Mobilize), NSF under Nos. CCF1763315 (Beyond Sparsity), CCF1563078 (Volume to Velocity), and 1937301 (RTML); ONR under No. N000141712266 (Unifying Weak Supervision); the Moore Foundation, NXP, Xilinx, LETI-CEA, Intel, IBM, Microsoft, NEC, Toshiba, TSMC, ARM, Hitachi, BASF, Accenture, Ericsson, Qualcomm, Analog Devices, the Okawa Foundation, American Family Insurance, Google Cloud, Swiss Re, and members of the Stanford DAWN project: Teradata, Facebook, Google, Ant Financial, NEC, VMWare, and Infosys. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, policies, or endorsements, either expressed or implied, of DARPA, NIH, ONR, or the U.S. Government. Matthew Eichhorn and Atri Rudra’s research is supported by NSF grant CCF-1763481." }, { "heading": "A RELATED WORK", "text": "" }, { "heading": "A.1 STRUCTURED MATRICES IN MACHINE LEARNING", "text": "Structured linear maps such as the DFT, the Hadamard transform and convolution are a workhorse of machine learning, with diverse applications including data preprocessing, random projection, featurization, and model compression. For example, the DFT is a crucial step in the standard filter bank speech preprocessing pipeline (Jurafsky & Martin, 2014), and is commonly used when dealing with time series data in general (Panaretos & Tavakoli, 2013). 
Fast random projection and kernel approximation methods rely on the fast Hadamard transform (Le et al., 2013; Yu et al., 2016) and convolution (Yu et al., 2015), and convolution is a critical component of modern image processing architectures (Krizhevsky et al., 2012) as well as being useful in speech recognition (Zeghidour et al., 2018) and natural language processing (Wu et al., 2019). Large learnable classes of structured matrices such as Toeplitz-like matrices (Sindhwani et al., 2015) and low-displacement rank (LDR) matrices (Thomas et al., 2018) have been used for model compression. However, despite their theoretical speedup, these structured matrix classes lack efficient implementations, especially on GPUs. Therefore, their use has largely been confined to small models (e.g. single hidden layer neural nets) and small datasets (e.g. CIFAR-10).\nButterfly matrices encode the recursive divide-and-conquer structure of the fast Fourier transform (FFT) algorithm. They were first used in numerical linear algebra for fast preconditioning (Parker, 1995). The butterfly factorization was later generalized to encompass complementary low-rank matrices commonly encountered in solving differential and integral equations (Rokhlin & Tygert, 2006; Tygert, 2008; 2010b;a; Li et al., 2015; 2018). In machine learning, butterfly matrices have been used to approximate the Hessian for fast optimization (Mathieu & LeCun, 2014), and to perform fast random projection (Jing et al., 2017; Munkhoeva et al., 2018; Choromanski et al., 2019). Dao et al. (2019) show that butterfly matrices can be used to learn fast algorithms for discrete transforms such as the Fourier transform, cosine/sine transform, Hadamard transform, and convolution." }, { "heading": "A.2 SPARSE MATRICES", "text": "Several classes of structured linear transforms are ubiquitous in modern deep learning architectures; particularly widespread examples include convolution and multiheaded attention. Recently, attempts to impose sparsity on the neural network weights have been gaining traction. State-of-the-art approaches of this type typically accomplish this by pruning small weights (either gradually during training (Zhu & Gupta, 2017), or post-training (Han et al., 2016)) or by training a dense network and then identifying “winning lottery tickets”—sparse subnetworks which may then be retrained from scratch with appropriate initialization (Frankle & Carbin, 2019). Importantly, these approaches start from a dense network, and therefore training is expensive. There is also a more nascent line of work that aims to train unstructured sparse neural networks directly (Mocanu et al., 2018; Mostafa & Wang, 2019; Dettmers & Zettlemoyer, 2019; Evci et al., 2019). These approaches maintain a constant network sparsity level throughout training, and use heuristics to evolve the sparsity pattern during training. One drawback is that the indices of the nonzero entries need to be stored in addition to the entry values themselves, which increases the memory required to store the sparse weight tensors. Another drawback is that these approaches to learn the sparsity pattern are based on intricate heuristics, which can be brittle. We note that these heuristic sparsification techniques could potentially be combined with our approach, to further sparsify the K-matrix factors." }, { "heading": "A.3 SPEECH RECOGNITION FROM RAW AUDIO", "text": "Numerous works focus on the problem of speech recognition from raw audio input, i.e. without manual featurization. 
SincNet (Ravanelli & Bengio, 2018) is a CNN-based architecture parameterized with sinc functions, designed so that the first convolutional layer imitates a band-pass filter. Zeghidour et al. (2018) formulate a learnable version of a filter bank featurization; their filters are initialized as an approximation of MFSC features and then fine-tuned jointly with the rest of the model. Sainath et al. (2015) proposed a powerful combined convolutional LSTM (CLDNN)-based model for learning from raw audio, using a large amount of training data. The WaveNet generative architecture (van den Oord et al., 2016), based on dilated convolutions, has been adapted to speech recognition and can be trained on raw audio. Other approaches that can learn from raw audio can be found in (Palaz et al.,\n2013; Collobert et al., 2016; Ghahremani et al., 2016). To our knowledge, the 14.6% PER achieved by our kaleidoscope + LSTM model on the TIMIT test set is the lowest error rate obtained by a model trained directly on the raw audio." }, { "heading": "A.4 LEARNING PERMUTATIONS", "text": "Permutation matrices find use in tasks such as matching and sorting (among many others). Techniques to obtain posterior distributions over permutations have been developed, such as the exponential weights algorithm (Helmbold & Warmuth, 2009) and the Gumbel-Sinkhorn network (Mena et al., 2018).\nClassifying images with permuted pixels is a standard task to benchmark the ability of RNNs to learn long range dependencies. Le et al. (2015) propose the Permuted MNIST task, in which the model has to classify digit images with all the pixels permuted. Many new RNN architectures, with unitary or orthogonal weight matrices to avoid gradient explosion or vanishing, have been proposed and tested on this task (Le et al., 2015; Arjovsky et al., 2016; Wisdom et al., 2016; Mhammedi et al., 2017; Trinh et al., 2018). Standard gated RNN architectures such as LSTM and GRU have also been found to be competitive with these new RNN architectures on this task (Bai et al., 2018)." }, { "heading": "B ADDITIONAL EXPERIMENTAL DETAILS", "text": "" }, { "heading": "B.1 SPEECH PREPROCESSING", "text": "In this section, we fully describe our settings and procedures for the speech preprocessing experiments in Section 3.1.1, and present additional auxiliary baselines and results." }, { "heading": "B.1.1 EXPERIMENTAL SETUP", "text": "We evaluate our speech recognition models on the TIMIT speech corpus (Garofolo et al., 1993), a standard benchmark for speech recognition. The input is audio (16-bit, 16 kHz .wav format), and the target is the transcription into a sequence of phonemes (units of spoken sound). Our evaluation metric is the phoneme error rate (PER) between the true phoneme sequence and the phoneme sequence predicted by our model. We use PyTorch (Paszke et al., 2017), the Kaldi speech recognition toolkit (Povey et al., 2011), and the PyTorch-Kaldi toolkit (Ravanelli et al., 2019) for developing PyTorch speech recognition models for all our experiments and evaluations." }, { "heading": "B.1.2 MODEL AND EVALUATION", "text": "Our baseline Bi-LSTM architecture is taken from the PyTorch-Kaldi repository.6 This is a strong baseline model that, to the best of our knowledge, matches state-of-the-art performance for models that use a single type of input featurization (Ravanelli et al., 2019). The original Bi-LSTM model takes as input filter bank features. 
These are computed as follows: (i) the waveform is framed (split into chunks of 25 ms each that overlap by 10 ms each), (ii) the waveform is dithered (zero-mean Gaussian random noise is added), (iii) pre-emphasis is applied to amplify high frequencies, (iv) the Hamming window function (Harris, 1978) is applied, (v) the FFT is applied, and the power spectrum of the resulting (complex-valued) output is computed, (vi) the power spectrum (which has dimension 512) is mapped to the “mel scale” (which is a scale intended to mimic human auditory perception (Stevens et al., 1937)) by multiplication with a specific banded matrix of dimension 512× 23, and the entrywise logarithm of the output is taken (the 23 outputs are called the filters), and (vii) cepstral mean and variance normalization (Liu et al., 1993) is applied. Numerical hyperparameters of this procedure include the dither noise scale, the pre-emphasis coefficient, the Hamming window size, the number of mel filters, and more; we kept all these the same as the Kaldi/PyTorch-Kaldi defaults.\nIn contrast, our “K-matrix version” of the model takes as input the raw waveform, split into chunks the same way as before but with no normalization, dithering, or other preprocessing, which is then fed into a complex-valued kaleidoscope [(BB∗)2] matrix. Similarly to the nonlinear steps in computing filter bank features, the logarithm of the power spectrum of the output (which has dimension 512)\n6This open-source repository can be found at https://github.com/mravanelli/ pytorch-kaldi.\nis then computed. This output is fed into the Bi-LSTM; the Bi-LSTM and kaleidoscope layer are trained together in standard end-to-end fashion. The Bi-LSTM architecture is not modified aside from changing the input dimension from 23 to 512; this (along with the ≈ 75K parameters in the kaleidoscope layer itself) results in approximately a 1.1M increase in the total number of parameters compared to the model that takes in MFSC features (a modest 8% relative increase). Total training time for our kaleidoscope-based architecture is 7% greater than that required for the model that uses MFSC features, not counting the time required to precompute the MFSC features; the FLOPs for inference-time are approximately 15% greater (mostly due to the larger dimension of the input to the Bi-LSTM; the kaleidoscope layer accounts for less than 0.5% of the total FLOPs).\nAs baselines, we also compare to inserting other types of linear transformations before the Bi-LSTM: fixed linear transformations (such as the fixed FFT, or no transform at all [i.e. the identity]), other trainable structured layers (low-rank, circulant, and sparse [using the sparse training algorithm of Dettmers & Zettlemoyer (2019)]), and a trainable unstructured (dense) linear layer. The kaleidoscope layer performs the best out of all such approaches. The fact that it outperforms even a dense linear layer with more parameters is particularly notable, as it suggests that the structural bias imposed by the K-matrix representation is beneficial for performance on this task. Full results are given in Table 5.\nIn our experiments, we grid search the initial learning rate for the “preprocessing layer” (if applicable) in {5e-5, 1e-4, 2e-4, 4e-4, 8e-4, 1.6e-3}, and fix all other hyperparameters (including the initial learning rates for the other parts of the network) to their default values in the PyTorch-Kaldi repository. 
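As a concrete sketch of the kaleidoscope featurization layer described above (our own simplification: the experiments use a complex-valued (BB∗)^2 K-matrix initialized at the FFT, whereas this sketch uses a single randomly initialized butterfly matrix; butterfly_mv refers to the sketch in Section 3.3, and frames are assumed zero-padded to a power-of-two length n):

import torch
import torch.nn as nn

class KaleidoscopeFeaturizer(nn.Module):
    # Framed raw audio (batch, n) -> learnable complex linear map ->
    # log power spectrum, replacing steps (ii)-(vii) of the MFSC pipeline.
    def __init__(self, n=512):
        super().__init__()
        stages = n.bit_length() - 1
        # Complex twiddle factors of a single butterfly matrix; the paper's
        # layer is a (BB*)^2 K-matrix initialized at the FFT (simplified here).
        self.twiddle = nn.Parameter(
            torch.randn(stages, n // 2, 2, 2, dtype=torch.cfloat) * 0.5)

    def forward(self, frames):
        z = butterfly_mv(self.twiddle, frames.to(torch.cfloat))
        return torch.log(z.abs() ** 2 + 1e-6)  # output fed into the Bi-LSTM

The module is trained jointly with the downstream Bi-LSTM; initializing the twiddle factors at the FFT (rather than randomly, as in this sketch) matches the initialization described in the experiments.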
The model and any preprocessing layers are trained end-to-end with the RMSProp optimizer for 24 epochs (as per the defaults in PyTorch-Kaldi). For each model, we use the validation set to select the best preprocessing learning rate, while the final error rates are reported on the separate held-out test set. For all structured matrix baselines except circulant (which always has n parameters for an n × n matrix), the number of parameters in the structured matrices is set to equal the number of parameters in the butterfly layer, while the unconstrained matrix is simply a standard dense complexvalued square matrix. For all experiments with a trainable “preprocessing layer,” we initialize the preprocessing matrix to represent the FFT (or approximate it as closely as possible [i.e. minimize the Frobenius error to the true FFT matrix], in the case of low-rank, sparse, and circulant), which we found to outperform random initialization." }, { "heading": "B.1.3 EXTENSION: COMBINING MFSC AND KALEIDOSCOPE", "text": "As an additional experiment, we sought to investigate whether combining the hand-engineered MFSC featurization pipeline and a learnable kaleidoscope layer (instead of replacing the former with the latter) could lead to accuracy gains. Specifically, in this experiment we first used the standard filter bank featurization pipeline described above, and trained end-to-end as usual. Then, we replaced the FFT step with a K-matrix initialized to the FFT, and made the weights of the Hamming window function and the mel filter bank matrix learnable as well (similarly to (Zeghidour et al., 2018)). We fine-tuned the resulting architecture for an additional 10 epochs. The final test PER% attained by this “hybrid” model is 14.0± 0.3; the model has 14.4M parameters—a negligible increase over the 14.3M in the original architecture. Thus, by combining the manually encoded domain knowledge in the filter bank featurization and allowing this structure to be learnable rather than fixed, we are able\nto nearly match the state-of-the-art 13.8% accuracy on TIMIT. While this “hybrid” model certainly involves some hand-engineering, the state-of-the-art results use a concatenation of three different speech audio featurizations—MFSC, MFCC, and fMLLR—as the neural network input, along with a customized RNN architecture (LiGRU) specifically designed for speech recognition, and thus require a more complicated pipeline that is arguably even more hand-crafted." }, { "heading": "B.2 REPLACING CNN CHANNEL SHUFFLE", "text": "" }, { "heading": "B.2.1 MODEL ARCHITECTURES", "text": "ShuffleNet is a convolutional neural network with residual (skip) connections that uses a permutation matrix to shuffle the channels after each grouped 1x1 convolution, sending the i-th channel to the (i mod g)-th group, where g is the total number of groups. The architecture for each residual block in ShuffleNet is: 1x1 group conv→ Batch norm, ReLU→ Permutation→ 3x3 depthwise conv→ Batch norm→ 1x1 group conv. The permutation is fixed. Zhao et al. (2019) propose to instead use the Hadamard transform before and after each grouped 1x1 convolution to mix the channels. Note that the Hadamard transforms are placed before the batch normalization and ReLU layer (unlike the permutation matrix in the original ShuffleNet design). In particular, the architecture for each block is: Hadamard→ 1x1 group conv→ Hadamard→ Batch norm, ReLU→ 3x3 depthwise conv→ Batch norm→ 1x1 group conv. 
The Hadamard transform is fixed.\nIn our architecture, we use a kaleidoscope matrix in OBB (the product of an orthogonal butterfly matrix, a diagonal matrix, and the transpose of another orthogonal butterfly matrix) before and after each grouped 1x1 convolution. We place the second K-matrix after the batch norm and ReLU, to more closely mimic the original ShuffleNet design. The structure for each block is: K-matrix→ 1x1 group conv→ Batch norm, ReLU→ K-matrix→ 3x3 depthwise conv→ Batch norm→ 1x1 group conv. The K-matrices are trained along with the rest of the network, rather than being fixed." }, { "heading": "B.2.2 EXPERIMENTAL SETUP", "text": "We evaluate the CNN architectures on the image classification task of the standard ImageNet dataset (Russakovsky et al., 2015). We use the standard data augmentation, training, and evaluation pipeline as in (Xie et al., 2017). We train with SGD on 8 GPUs for 90 epochs, with a total batch size of 2048 and initial learning rate 0.8. For the 1.0 ShuffleNet g8 architecture, we reduce the total batch size to 1792 to fit into GPU memory, and correspondingly linearly scale the initial learning rate to 0.7. Other hyperparameters (e.g. learning rate schedule, weight decay, etc.) are kept the same as in the ShuffleNet paper (Zhang et al., 2018). We use the training script from NVIDIA’s deep learning examples repository.7" }, { "heading": "B.2.3 ADDITIONAL RESULTS", "text": "In Table 6, we report top-5 classification accuracy on ImageNet, to complement the top-1 accuracies in Table 2.\nIn each setting, the total training time of our K-matrix approach is within 20% of the total training time of vanilla ShuffleNet.\nIn Figure 4, we plot the loss and accuracy on the training set and validation set when we train 1.0 ShuffleNet g8, with either a fixed permutation (Shuffle) or a K-matrix for channel shuffling. Even though each K-matrix is a product of multiple (sparse) matrices, the model with K-matrices takes about the same number of training steps to converge as the baseline model does. One possible reason is that we constrain the K-matrices to be orthogonal (Section 2.4), thus avoiding vanishing or exploding gradients." }, { "heading": "B.3 LEARNING PERMUTATIONS", "text": "" }, { "heading": "B.3.1 DATASET", "text": "The permuted CIFAR-10 dataset is constructed by applying a fixed permutation to every input. We choose to use the 2-D bit-reversal permutation,8 i.e., the bit-reversal permutation on 32 elements is applied to the rows and to the columns. This permutation was chosen because it is locality-destroying: if two indices i, j are close, they must differ in a lower-order bit, so the bit-reversed indices i′, j′ are far apart. This makes it a particularly challenging test case for architectures that rely on spatial locality, such as “vanilla” CNNs." }, { "heading": "B.3.2 MODEL AND TRAINING", "text": "We describe the model architectures used in Section 3.2 (those reported in Table 3).\nOur model (K + CNN) The model uses a K-matrix to parametrize a fixed permutation P, learned so as to recover the true permutation, followed by a standard ResNet18 architecture (He et al., 2016). Because of the simple decomposable nature of the butterfly factors (Section 2.1), our parameterization is easily extensible with additional techniques:\n(i) We constrain each butterfly factor matrix in the K-matrix to be doubly-stochastic. For example, each 2 × 2 block in the butterfly factor matrix of block size 2 has the form [a 1−a; 1−a a], where a ∈ [0, 1]. 
We treat this block as a distribution over permutations, generating the identity [1 0; 0 1] with probability a and the swap [0 1; 1 0] with probability 1 − a. Butterfly factor matrices with larger block sizes are constrained to be doubly-stochastic in a similar manner. In this way, a permutation is sampled for each butterfly factor matrix, and these permutations are composed to get the final permutation that is applied to the image.\n8The bit-reversal permutation reverses the order of the bits in the binary representation of the indices. For example, indices [0, 1, ..., 7] with binary representations [000, 001, ..., 111] are mapped to [000, 100, ..., 111], which corresponds to [0, 4, 2, 6, 1, 5, 3, 7].\n(ii) For each minibatch, the examples Px, obtained by applying the sampled permutations to the (permuted) inputs, are fed into an additional unsupervised reconstruction loss\n∑_{0 ≤ i,j < n} ‖ [ (Px)[i+1, j] − (Px)[i, j] ; (Px)[i, j+1] − (Px)[i, j] ] ‖_2 (1)\nmeasuring the total variation smoothness of the de-noised inputs. Such loss functions are often used in image denoising (Rudin et al., 1992). A final regularization loss was placed on the entropy of P, which was annealed over time to encourage P to converge toward a sharper doubly-stochastic matrix (in other words, a permutation). The model is trained with just the reconstruction loss to convergence before the standard ResNet is trained on top.\nThese techniques are applicable to the K-matrix as well as to specialized methods for representing permutations such as Gumbel-Sinkhorn (Mena et al., 2018), and are important for recovering the true permutation. However, they are not applicable to a general linear layer, which showcases the flexibility of K-matrices for representing generic structure despite not being specially tailored for this task. We also remark that other classes of structured linear maps such as low-rank, circulant, and so on, are even less suited to this task than dense matrices, as they are incapable of representing all permutations." }, { "heading": "Baseline architectures", "text": "1. Fully connected (FC): This is a 3-layer MLP, with hidden size 1024 and ReLU nonlinearity in-between the fully connected layers.\n2. Recurrent neural network (RNN): We use a gated recurrent unit (GRU) model (Cho et al., 2014), with hidden size 1024. Many RNN architectures have been proposed to capture long-range dependencies on permuted image datasets such as Permuted MNIST (Arjovsky et al., 2016). Standard gated architectures such as LSTM and GRU have shown competitive performance on the Permuted MNIST dataset, and we choose GRU as a baseline since it has been reported to slightly outperform LSTM (Bai et al., 2018).\n3. CNN: We use the standard ResNet18 architecture, adapted to the smaller image size of the CIFAR-10 dataset (changing the stride of the first convolutional layer from 2 to 1, and removing the max-pooling layer that follows).\n4. Dense + CNN: We add an additional linear layer (i.e. a dense matrix) of size 1024 × 1024 before the ResNet18 architecture. This dense layer can in theory represent a permutation, but cannot benefit from the additional techniques described above.\n5. Baseline CNN (unpermuted): We use the standard ResNet18 architecture applied to the unpermuted CIFAR-10 dataset.\nAll models are trained for 200 total epochs, with the Adam optimizer. We use the standard learning rate schedule and weight decay from Mostafa & Wang (2019). 
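For concreteness, the following is a minimal PyTorch sketch (our own illustration) of the reconstruction loss in Eq. (1); the boundary handling (dropping the last row and column) and the small epsilon inside the square root are our choices.

import torch

def tv_reconstruction_loss(x):
    # x: (batch, H, W) de-permuted images. Returns the total-variation
    # smoothness loss of Eq. (1): the per-pixel L2 norm of the vertical
    # and horizontal finite differences, summed over pixels.
    dv = x[:, 1:, :] - x[:, :-1, :]    # (Px)[i+1, j] - (Px)[i, j]
    dh = x[:, :, 1:] - x[:, :, :-1]    # (Px)[i, j+1] - (Px)[i, j]
    norms = torch.sqrt(dv[:, :, :-1] ** 2 + dh[:, :-1, :] ** 2 + 1e-12)
    return norms.sum(dim=(1, 2)).mean()

In training, this loss would be evaluated on the sampled reconstructions Px reshaped to 32 × 32 CIFAR-10 images (per channel or on grayscale intensities, a detail we leave unspecified here).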
We use Hyperband (Li et al., 2017) to tune other hyperparameters such as the initial learning rate and annealing temperature." }, { "heading": "B.4 SPEEDING UP DYNAMICCONV’S INFERENCE", "text": "" }, { "heading": "B.4.1 MODEL ARCHITECTURE", "text": "We start with the DynamicConv Transformer architecture (Wu et al., 2019), which is a variant of the Transformer architecture (Vaswani et al., 2017) where the self-attention in each layer is replaced with a light-weight DynamicConv module. We use the implementation from the Fairseq library(Ott et al., 2019),9 with PyTorch version 1.2.\n9This library can be found at https://github.com/pytorch/fairseq\nThe architecture of each layer of the decoder is: Linear→ DynamicConv→ Linear→ LayerNorm → Encoder-decoder attention→ LayerNorm→ Linear→ ReLU→ Linear→ ReLU→ LayerNorm. In every layer of the decoder, we replace the dense weight matrix in each of the four Linear layers with a K-matrix from the B class (i.e. a butterfly matrix)." }, { "heading": "B.4.2 TRAINING AND EVALUATION", "text": "The models are trained from scratch using the training script from the Fairseq repository, with the same hyperparameters (optimizer, learning rate, number of updates, etc.) used in the DynamicConv paper (Wu et al., 2019). We note that the DynamicConv model with K-matrices in the decoder trains slightly faster than the default DynamicConv model (both models are trained for 50,000 updates, which requires approximately 7% less time for the K-matrix model than for the default model).\nTo evaluate inference speed, we run the decoding script on the IWSLT-14 De-En test set in singlethreaded mode on a server Intel Xeon CPU E5-2690 v4 at 2.60GHz, and measure wall-clock time. The test set contains 6750 sentences, with 149241 tokens. Following Wu et al. (2019), we set the batch size to 1 and beam size to 1 for this evaluation." }, { "heading": "B.4.3 ADDITIONAL COMPARISON WITH OTHER STRUCTURED MATRICES", "text": "We additionally compare the speed-quality tradeoff of K-matrices with other classes of structured matrices, when used to replace the fully-connected layers of DynamicConv’s decoder. We consider the following additional classes of structured matrices: low-rank, circulant, Toeplitz-like (Sindhwani et al., 2015), ACDC (Moczulski et al., 2016), Fastfood (Le et al., 2013), and sparse. For classes with a variable number of parameters (e.g. low-rank, sparse), we set the number of parameters to match that of K-matrices. For sparse matrices, besides the result for an ensemble of 10 models (the default setting in the Fairseq repository), we also report the result for a single model, as that could have faster inference time (since ensembling/averaging sparse matrices produces a less sparse matrix).\nIn Figure 5, we plot the tradeoff between translation quality (measured by BLEU score) and inference speed (sentences per second). Most classes of structured matrices produce similar translation quality (between 34.1 and 34.4 BLEU score). K-matrices have the second fastest inference time, only 7% slower than low-rank matrices. We note that low-rank matrices benefit from very well-tuned BLAS routines (matrix-matrix multiplication). Even though our implementation of K-matrix multiplication is not yet highly optimized, it is already quite close to the speed of low-rank matrix multiplication at an equivalent parameter count." 
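To illustrate how the decoder's dense layers are swapped out, the following is a minimal sketch (our own; the actual experiments use the Fairseq integration of the optimized kernels) of an nn.Linear replacement whose weight is a butterfly (class B) matrix, reusing the butterfly_mv sketch from Section 3.3 and assuming square power-of-two dimensions.

import torch
import torch.nn as nn

class ButterflyLinear(nn.Module):
    # Replaces a dense n x n weight with a butterfly (class B) matrix:
    # O(n log n) parameters and multiplication cost instead of O(n^2).
    def __init__(self, n):
        super().__init__()
        stages = n.bit_length() - 1
        self.twiddle = nn.Parameter(torch.randn(stages, n // 2, 2, 2) * 0.5)
        self.bias = nn.Parameter(torch.zeros(n))

    def forward(self, x):  # x: (batch, n)
        return butterfly_mv(self.twiddle, x) + self.bias

A DynamicConv decoder layer would then use such a module in place of each of its four Linear layers; non-square layers can be handled by zero-padding to the next power of 2, as in Appendix H.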
}, { "heading": "C SPEED BENCHMARK AND IMPLEMENTATION DETAILS", "text": "Each K-matrix (for fixed width and expansion), has an O(n log n) matrix-vector multiplication algorithm: sequentially multiply the input vector with each of the sparse factors. Our implementation of this simple algorithm is surprisingly competitive with optimized subroutines, both on GPU (e.g. for training) and on CPU (e.g. for inference). In Figure 6, we compare the speed of multiplying by a K-matrix in class B (i.e. a butterfly matrix) against a specialized implementation of the FFT. We normalize the speed by the speed of dense matrix-matrix multiply (on GPU) or dense matrix-vector multiply (on CPU). On GPU, with input sizes n = 1024 and batch size 2048, the training time (forward and backward) of K-matrices matrix is about 3x faster than dense matrix multiply (GEMM from cuBLAS). For inference on CPU, the kaleidoscope fast multiplication can be one or two orders of magnitude faster than GEMV. Over a range of matrix sizes, our implementation is within a factor of 2-4x of specialized implementations of the FFT, a highly optimized kernel.\nOur implementation is also memory-efficient. In the forward pass through theO(log n) sparse factors, we do not store the intermediate results, but recompute them during the backward pass. Therefore the activation memory required is O(bn) for an input batch size of b." }, { "heading": "D SYNTHETIC MATRIX RECOVERY", "text": "We directly validate Theorem 1 on well-known types of structured matrices used in machine learning. Given a structured matrix M, we attempt to represent M as closely as possible using K-matrices as\nwell as the standard classes of structured matrices: sparse and low-rank. In Table 7, we quantify the expressivity of each of these three methods, as measured by their ability to approximate a range of different structures. Results for “global minimum” of kaleidoscope matrices are obtained from the theoretical expressiveness results in Section I and Section J. Low-rank and sparse approximation have closed form solutions: truncating the SVD and keeping the largest-magnitude entries, respectively. We also report the results using SGD for kaleidoscope matrices to validate that good approximation with K-matrices can be obtained even from standard first-order optimization algorithms. Even with imperfect optimization, kaleidoscope matrices can still capture out-of-class target matrices better than low-rank and sparse matrices.\nThe target matrices are kaleidoscope, low-rank, sparse, convolution (i.e. circulant matrices), Fastfood (Le et al., 2013), and entrywise random IID Gaussian matrix (to show the typical magnitude of the error). All target matrices M were randomly initialized such that E[MTM] = I.\nTo find a kaleidoscope approximation with SGD, we used Hyperband to tune its learning rate (between 0.001 and 0.5)." }, { "heading": "E PROPERTIES OF THE BB∗ HIERARCHY", "text": "Here, we justify why the definitions in Section 2.2 give rise to a hierarchy. We first make some basic observations about the parameterization. Observation E.1. An n× n matrix M ∈ BB∗ has 4n log n parameters.\nProof. M can be expressed as a product of 2 log n butterfly factor matrices of size n× n. Each of these factor matrices has 2 parameters per row, for a total of 2n parameters each. Hence, the total number of parameters is 4n log n.\nObservation E.2. Let M be an n×n matrix in (BB∗)we . Then, given an arbitrary vector v of length n, we can compute Mv with O(wne log(ne)) field operations.\nProof. 
Since M ∈ (BB∗)^w_e, we can decompose it as S E1 E2 · · · Ew S^T, where S is as given in Definition 2.4, and each Ei is an en × en matrix in BB∗. Therefore, to compute Mv, we can use associativity of matrix multiplication to multiply the vector by one of these matrices at a time.\nSince all of these factors are sparse, we use the naïve sparse matrix-vector multiplication algorithm (begin with a 0-vector and perform the corresponding multiplication and addition for each nonzero matrix entry). S (and thus S^T) has n NNZ. Therefore, matrix-vector multiplication by S or S^T requires O(n) operations, which is dominated by the butterfly matrix-vector multiplication. Each Ei can be further decomposed into 2 log(ne) matrices with at most 2ne non-zero entries each (by Observation E.1). Therefore, matrix-vector multiplication by each Ei requires O(ne log(ne)) operations. Since there are w such Ei, we require a total of O(wne log(ne)) operations.\nNow, we are ready to show that our definition of the classes (BB∗)^w_e forms a natural hierarchy. First, we must argue that all matrices are contained within the hierarchy.\nLemma E.3. Let M be an arbitrary n × n matrix. Then M ∈ (BB∗)^{2n−2}.\nProof. Corollary E.3 in Appendix K shows that any n × n matrix can be written in the form M1M′1∗ · · · Mn−1M′n−1∗ D MnM′n∗ · · · M2n−2M′2n−2∗, where the Mi, M′i are orthogonal butterfly matrices and D is a diagonal matrix. We can combine D with Mn to form another (possibly not orthogonal) butterfly matrix. This yields a decomposition of M as a product of (possibly not orthogonal) butterfly matrices and their (conjugate) transposes, completing the proof.\nNext, we argue that, up to a certain point, this hierarchy is strict.\nLemma E.4. For every fixed c ≥ 1, there is an n × n matrix Mn (with n sufficiently large) such that Mn ∈ (BB∗)^{c+1} but Mn ∉ (BB∗)^c.\nProof. Given c, fix n to be a power of 2 such that c < n/(4 log2 n). For sake of contradiction, assume that every n × n matrix in (BB∗)^{c+1} is also in (BB∗)^c. Let A be an arbitrary n × n matrix. From Lemma E.3, A ∈ (BB∗)^{2n−2}. From our assumption, we can replace the first c + 1 BB∗ factors of A with c (potentially different) BB∗ factors and still recover A. We can repeat this process until we are left with c BB∗ factors, implying that A ∈ (BB∗)^c. From Observation E.1, we require 4cn log n < n^2 (by our choice of n) parameters to completely describe A. This is a contradiction since A is an arbitrary n × n matrix, and therefore has n^2 arbitrary parameters. Hence, there must be some n × n matrix in (BB∗)^{c+1} that is not in (BB∗)^c." }, { "heading": "F ARITHMETIC CIRCUITS IN BB∗ HIERARCHY", "text": "In this appendix, we prove our main theoretical result, namely, our ability to capture general transformations, expressed as low-depth linear arithmetic circuits, in the BB∗ hierarchy. This result is recorded in Theorem 1.\nTheorem 1. Let M be an n × n matrix such that matrix-vector multiplication of M by an arbitrary vector v can be represented by a linear arithmetic circuit C comprised of s gates (including inputs) and having depth d. Then M ∈ (BB∗)^{O(d)}_{O(s/n)}.\nTo prove Theorem 1, we make use of the following two theorems.\nTheorem 2. Let P be an n × n permutation matrix (with n a power of 2). Then P ∈ BB∗.\nTheorem 3. Let S be an n × n matrix with s NNZ. Then S ∈ (BB∗)^{4⌈s/n⌉}_4.\nTheorem 2 is proven in Appendix G, and Theorem 3 is proven in Appendix I.\nProof of Theorem 1. 
We will represent C as a product of d matrices, each of size s′ × s′, where s′ is the smallest power of 2 that is greater than or equal to s.\nTo introduce some notation, define w1, . . . , wd such that wk represents the number of gates in the k'th layer of C (note that s = n + ∑_{k=1}^{d} wk). Also, define z1, . . . , zd such that z1 = n and zk = wk−1 + zk−1 (zk is the number of gates that have already been used by the time we get to layer k).\nLet gi denote the i'th gate (and its output) of C (0 ≤ i < s), defined such that:\ngi = vi for 0 ≤ i < n, and gi = αi·g_{i1} + βi·g_{i2} for n ≤ i < s,\nwhere i1, i2 are indices of gates in earlier layers.\nFor the k'th layer of C, we define the s′ × s′ matrix Mk such that it performs the computations of the gates in that layer. Define the i'th row of Mk to be:\nMk[i, :] = e_i^T for 0 ≤ i < zk; Mk[i, :] = αi·e_{i1}^T + βi·e_{i2}^T for zk ≤ i < zk + wk; Mk[i, :] = 0 for i ≥ zk + wk.\nFor any 0 ≤ k ≤ d, let vk be the vector\nvk = Mk · · ·M2M1 [v; 0].\nWe'd like to argue that vd contains the outputs of all gates in C (i.e., the n values that make up Mv). To do this we argue, by induction on k, that vk is the vector whose first z_{k+1} entries are g0, g1, . . . , g_{z_{k+1}−1}, and whose remaining entries are 0. The base case, k = 0, is trivial. Assume this holds for the case k − 1, and consider multiplying vk−1 by Mk. The first zk rows of Mk duplicate the first zk entries of vk−1. The next wk rows perform the computation of gates g_{zk}, . . . , g_{z_{k+1}−1}. Finally, the remaining rows pad the output vector with zeros. Therefore, vk is exactly as desired.\nThe final matrix product will contain all n elements of the output. By left multiplying by some permutation matrix P, we can reorder this vector such that the first n entries are exactly Mv. Hence, we are left to argue the position of PMd · · ·M2M1 within the BB∗ hierarchy. Each Mk is a matrix with total 2wk + zk < 2s′ NNZ. From Theorem 3, we can, therefore, represent Mk as a product of O(1) matrices (of size 2s′) in BB∗. From Theorem 2, P ∈ BB∗. Note that s ≤ s′ < 2s, so s′ = Θ(s).\nOur final decomposition will have O(d) BB∗ factors, and requires an expansion from size n to size 2s′, or an expansion factor of O(s/n). Therefore, M ∈ (BB∗)^{O(d)}_{O(s/n)}, as desired.\nRemark F.1. By applying Observation E.2, we see that Theorem 1 gives an O(sd log s) matrix-vector multiplication algorithm for M." }, { "heading": "G PERMUTATIONS IN BB∗", "text": "In this appendix, we prove Theorem 2. In addition, we will also show that permutations are in B∗B, where the set B∗B is defined analogously to BB∗ (i.e. matrices of the form M = M1∗M2 for some M1, M2 ∈ B). To prove Theorem 2, we decompose a permutation matrix P into P = LR, with L ∈ B and R ∈ B∗. Throughout the proof, we make use of the following definition.\nDefinition G.1. Let L be an n × n permutation matrix (n a power of 2). We say that L meets the 2^j balance condition if L can be divided into chunks of 2^j columns (with each chunk consisting of all columns i such that ⌊i/2^j⌋ has the same value) such that for every 0 ≤ m < 2^j, each chunk has exactly one column L[:, k] = e_{πk} with πk ≡ m (mod 2^j). We say that L is modular-balanced if it meets the 2^j balance condition for each 2 ≤ 2^j ≤ n.\nLemma G.1. Let L be an n × n modular-balanced matrix. Then L ∈ B.\nProof. We proceed by induction on n. The base case n = 2 is trivial. As our inductive hypothesis, we assume that all modular-balanced matrices of size n/2 × n/2 are butterfly matrices of size n/2. 
From Definition 2.3, it is sufficient to show that L can be decomposed as:\nL = Bn · L′, where L′ = [L1 0; 0 L2],\nBn is a butterfly factor of size n, and each Lj is an n/2 × n/2 modular-balanced matrix.\nDefine L1 and L2 such that:\nL1[i, j] = L[i, j] + L[i + n/2, j]\nL2[i, j] = L[i, j + n/2] + L[i + n/2, j + n/2].\nNote that since L is a permutation matrix (and thus has exactly one non-zero entry per column), at most one term of each of these sums can be non-zero.\nFor sake of contradiction, assume L1 is not modular-balanced. Then, for some 2^j ≤ n/2, there are two columns c1, c2 such that ⌊c1/2^j⌋ = ⌊c2/2^j⌋ and such that the indices of the non-zero entries of L1 in columns c1 and c2 are the same modulo 2^j. However, from the definition of L1, this implies that the indices of the non-zero entries of L in columns c1 and c2 are also the same modulo 2^j, contradicting L being modular-balanced. Hence, L1 is modular-balanced. An analogous argument (that instead considers columns c1 + n/2, c2 + n/2 of L) shows that L2 is also modular-balanced.\nTo complete the proof, we must argue that Bn is a butterfly factor of size n. Since each Li is modular-balanced, it is a permutation matrix. Therefore, L′ has exactly 1 non-zero entry in each of the first n/2 rows and columns from L1 and exactly 1 non-zero entry in each of the second n/2 rows and columns from L2. Hence, L′ is a permutation matrix. Since both L and L′ are permutation matrices, B = L(L′)−1 must also be a permutation matrix. Therefore, we can view B as performing a permutation of the rows of L′ to get L.\nConsider the i'th row of L′, with 0 ≤ i < n/2. There are two possible cases.\nCase 1: L′[i, :] = L[i, :]\nIn this case, the column of L with a non-zero entry in row i is in the left n/2 columns. The column of L with a non-zero entry in row i + n/2 must, therefore, be in the right n/2 columns, otherwise L would not satisfy the n/2 balance condition. Therefore, L′[i + n/2, :] = L[i + n/2, :], so we set B[i, i] = B[i + n/2, i + n/2] = 1.\nCase 2: L′[i, :] ≠ L[i, :]\nBy the definition of L′, L′[i, :] = L[i + n/2, :]. In this case, the column of L with a non-zero entry in row i + n/2 must be in the left n/2 columns. By the n/2 balance condition of L, the column of L with a non-zero entry in row i must be in the right n/2 columns. Therefore, L′[i + n/2, :] = L[i, :], so we set B[i, i + n/2] = B[i + n/2, i] = 1.\nIn both cases, the non-zero entries of B fall into the correct diagonal bands (the main diagonal, and the bands n/2 away). Hence, B is a butterfly factor of size n.\nNow, we consider the process of transforming P into a modular-balanced matrix. We make use of the following lemma.\nLemma G.2. Let M be a k × k matrix with 1 non-zero entry per column, such that for each 0 ≤ m < k/2, there are exactly 2 columns with their non-zero entry in a row with index ≡ m (mod k/2). Then, there is a butterfly factor Bk such that MBk = M′, where M′ meets the k/2 balance condition.\nProof. We construct a directed graph G with nodes in [k/2]. For each 0 ≤ i < k/2 we add a directed edge from node (s mod k/2) to node (t mod k/2) if M[:, i] = e_s and M[:, i + k/2] = e_t. Each node has (undirected) degree exactly 2 by the structure of M. Hence, G is a union of disjoint (undirected) cycles.\nIf M met the k/2 balance condition, then each node would additionally have in-degree exactly 1 and out-degree exactly 1. By reversing edges of G such that each (undirected) cycle becomes a directed cycle, we can achieve this. 
However, reversing edges corresponds to swapping columns of M that are k/2 apart. Let Bk be the permutation matrix that performs all such swaps. Bk has non-zero entries only along the main diagonal and the diagonal bands k/2 away, and thus is a butterfly factor of size k.\nWe are ready to present the decomposition of P.\nLemma G.3. Let P be an n × n permutation matrix. Then we can decompose P into P = LR, where L is modular-balanced and R ∈ B∗.\nProof. We repeatedly apply Lemma G.2. First, we conclude that there is a butterfly factor Bn such that\nPBn = P′,\nwhere P′ meets the n/2 balance condition. Now, we consider the first and last n/2 columns of P′ independently. We can again apply Lemma G.2 (twice) to conclude that there are butterfly factors [B_{n/2}]_1, [B_{n/2}]_2 such that\nPBn · [ [B_{n/2}]_1 0; 0 [B_{n/2}]_2 ] = P B_n^{(n)} B_{n/2}^{(n)} = P′′,\nwhere P′′ meets the n/2 and n/4 balance conditions.\nWe continue this process until we obtain a matrix that meets all of the balance conditions. Our final equation is of the form:\nP · B_n^{(n)} B_{n/2}^{(n)} · · · B_2^{(n)} = PB = L,\nwhere B is a butterfly matrix and L is a modular-balanced matrix. Let R = B−1 = B∗ (since B is a permutation matrix, and thus is orthogonal) and hence R ∈ B∗. Then P = LR, as desired.\nTheorem 2 follows immediately from Lemmas G.3 and G.1.\nWe now show that permutations are also in B∗B. We start with the relationship between butterfly matrices and the bit-reversal permutation.\nLemma G.4. Let Pbr be the n × n bit-reversal permutation matrix, where n is some power of 2, and let M1 ∈ B be an n × n butterfly matrix. Then there is some butterfly matrix M2 ∈ B such that\nM1∗ = PbrM2Pbr.\nProof sketch. For any input vector x of length n, to perform M1∗x, we trace through log2 n steps of the multiplication algorithm. At each step, we perform 2 × 2 matrix multiplications on elements of x whose indices are n/2 apart (e.g. indices 0 and n/2, 1 and n/2 + 1, etc.), then n/4 apart, and so on, until the indices are 1 apart. If we apply the bit-reversal permutation to x, then indices that are n/2 apart will become 1 apart, indices that are n/4 apart will become 2 apart, and so on. So the multiplication algorithm M1∗x is equivalent to applying bit-reversal, then multiplying the permuted vector with another butterfly matrix (i.e. 2 × 2 matrix multiplications on indices that are 1 apart, then 2 apart, and so on, until the indices are n/2 apart). Finally we need to do another bit-reversal permutation to put all the indices back in the original order. If we call this other butterfly matrix M2, then we have shown that M1∗x = PbrM2Pbrx. This holds for all x (for the same matrix M2), so we have M1∗ = PbrM2Pbr.\nRemark G.5. Lemma G.4 explains the connection between the two most common fast Fourier transform algorithms, decimation in time and decimation in frequency. Using the decimation-in-time FFT, we can write the DFT matrix F as the product of a butterfly matrix M1 and the bit-reversal permutation (see Appendix J):\nF = M1Pbr.\nTaking the conjugate transpose, we obtain F∗ = PbrM1∗ (recall that Pbr is its own transpose/inverse). On the other hand, F∗ is just a scaled version of the inverse DFT matrix, so applying the decimation-in-time FFT to the inverse DFT, we can write F∗ = M2Pbr for some other butterfly matrix M2. Hence PbrM1∗ = M2Pbr, and thus PbrM1∗Pbr = M2 (for these particular butterfly matrices M1 and M2). 
Note that this yields another decomposition of the DFT matrix, F = PbrM2∗, which is exactly the decimation-in-frequency FFT algorithm.\nWe are ready to show that permutations are in B∗B.\nLemma G.6. Let P be an n × n permutation matrix (with n a power of 2). Then there are butterfly matrices M1, M2 ∈ B such that P = M1∗M2.\nProof. Consider the permutation P̃ = PbrPPbr. By Theorem 2, there are some butterfly matrices M̃1, M̃2 ∈ B such that P̃ = M̃1M̃2∗. Applying Lemma G.4, we can replace M̃2∗ with PbrM2Pbr for some butterfly matrix M2 ∈ B. We thus have:\nPbrPPbr = M̃1PbrM2Pbr.\nPre- and post-multiply both sides by Pbr (which is its own inverse):\nP = PbrM̃1PbrM2.\nApplying Lemma G.4 again, we can replace PbrM̃1Pbr with M1∗ for some butterfly matrix M1 ∈ B. Thus:\nP = M1∗M2." }, { "heading": "H BB∗ CLOSURE LEMMAS", "text": "Here, we present some basic facts of the BB∗ hierarchy that will be useful for later constructions. For simplicity, we assume (WLOG via 0-padding) that all matrices are square matrices with size that is a power of 2.\nLemma H.1. If M ∈ B (or M ∈ B∗), then DM, MD ∈ B (B∗ resp.) for any diagonal matrix D.\nProof. Left multiplication by a diagonal matrix scales the rows of M by the corresponding diagonal entries. The same can be achieved by scaling all entries in the rows of the leftmost butterfly factor matrix. Similarly, right multiplication by a diagonal matrix scales the columns of M, which can be achieved by scaling all entries in the columns of the rightmost butterfly factor matrix.\nLemma H.2. Let A, B ∈ F^{n×n}. If A ∈ (BB∗)^{w1}_e and B ∈ (BB∗)^{w2}_e then AB ∈ (BB∗)^{w1+w2}_e.\nProof. Let EA, EB ∈ F^{en×en} be defined such that A = S EA S^T, B = S EB S^T (with S as in Definition 2.4). Then\nAB = S · [In 0; 0 0] EA · [In 0; 0 0] EB · S^T,\nwhere both bracketed factors are of size en × en. We have [In 0; 0 0] EA ∈ (BB∗)^{w1} and [In 0; 0 0] EB ∈ (BB∗)^{w2} by Lemma H.1. Hence, AB ∈ (BB∗)^{w1+w2}_e by Definition 2.4.\nLemma H.3. Let A1, . . . , Am ∈ F^{k×k}. If A1, . . . , Am ∈ (BB∗)^w_e then Diag(A1, . . . , Am) ∈ (BB∗)^{w+2}_e.\nProof. For each 1 ≤ i ≤ m, let EAi ∈ F^{ek×ek} be defined such that Ai = S EAi S^T (with S as in Definition 2.4). Then\nDiag(A1, . . . , Am) = S P · Diag(EA1, EA2, . . . , EAm) · P^T S^T,\nwhere P is a permutation that moves the first k rows of each EAi (in order) into the top mk rows. From Theorem 2, P ∈ BB∗ (and so is P^T, also a permutation). Within the RHS block matrix, the decompositions of each EAi can be done in parallel, requiring total width w. Hence, Diag(A1, . . . , Am) ∈ (BB∗)^{w+2}_e, as desired.\nRemark H.4. If e = 1 in Lemma H.3, then P is unnecessary. Hence, Diag(A1, . . . , Am) ∈ (BB∗)^w.\nLemma H.5. Let A1, . . . , Am be k × k matrices in (BB∗)^w_e. Then ∑_{i=1}^{m} Ai ∈ (BB∗)^{mw}_{4e}.\nProof. For each 1 ≤ i ≤ m, let EAi ∈ F^{ek×ek} be defined such that Ai = S EAi S^T (with S as in Definition 2.4). Note that EAi ∈ (BB∗)^w. Consider matrices of the form:\nMi = [Iek EAi 0 0; 0 Iek 0 0; 0 0 0 0; 0 0 0 0] = L · Z · P1 · R ∈ F^{4ek×4ek},\nwhere L = [I2ek I2ek; 0 0], Z = Diag(Iek, Iek, EAi, 0), P1 is the permutation that swaps the third and fourth ek-block columns, and R = [I2ek 0; I2ek 0].\nHere, L and R compute the sum of the 2ek × 2ek matrices on the diagonal of ZP1, where P1 is a permutation swapping EAi to the 4th ek-block column. Note that Z is the diagonalization of four matrices in (BB∗)^w, so Z ∈ (BB∗)^w by Remark H.4. 
In addition, since each block of Z has size ek, Z only uses butterfly factors up to size ek, so the outer factor matrices of sizes 4ek and 2ek in Z are unused. Also note that L and R are butterfly factor matrices of size 4ek (i.e. B^{(4ek)}_{4ek}), and P1 is a butterfly factor matrix of size 2ek (i.e. B^{(4ek)}_{2ek}). This allows us to fold the surrounding matrices L, P1, R into Z, so Mi ∈ (BB∗)^w. Through repeated application (m times) of the identity\n[I A; 0 I] [I B; 0 I] = [I A+B; 0 I],\nwe see that\nM = [Iek ∑_{i=1}^{m} EAi 0 0; 0 Iek 0 0; 0 0 0 0; 0 0 0 0] = ∏_{i=1}^{m} Mi, with M ∈ F^{4ek×4ek}. (2)\nFrom Lemma H.2, M ∈ (BB∗)^{mw}. Finally, note that ∑_{i=1}^{m} Ai = S M P2 S^T, where P2 is a permutation that moves the first k columns of the second block-column of M to the left. P2 can be folded into the final summation factor Mm as follows:\nP1 R P2 = [Iek 0 0 0; 0 Iek 0 0; 0 0 0 Iek; 0 0 Iek 0] [I2ek 0; I2ek 0] [0 Iek 0 0; Iek 0 0 0; 0 0 Iek 0; 0 0 0 Iek] = [0 Iek 0 0; Iek 0 0 0; 0 0 Iek 0; 0 0 0 Iek] [I2ek 0; I2ek 0] = P′1 R, (3)\nwhere P′1 (the permutation swapping the first two ek-block rows) is again a butterfly factor matrix of block size 2ek, and hence can be folded into Mm.\nHence, ∑_{i=1}^{m} Ai ∈ (BB∗)^{mw}_{4e}, as desired.\nLemma H.6. Let M be an invertible n × n matrix such that M ∈ B. Then M−1 ∈ B∗.\nProof. We prove this in a series of steps.\nFirst, let Bk be an invertible butterfly factor of size k. Consider the method of computing Bk−1 by performing Gaussian elimination on the matrix [Bk | Ik] to obtain the matrix [Ik | Bk−1]. By the form of Bk, non-zero entries within a row or column are always exactly k/2 positions apart. Therefore, the only row operations needed for this Gaussian elimination are:\n• Scaling a row by a constant factor c ≠ 0\n• Addition of a row to another row exactly k/2 rows apart\nPerforming these operations on Ik will only allow non-zeros on the main diagonal and k/2 diagonals away from the main diagonal. Hence, Bk−1 is also a butterfly factor of size k.\nNext, let B^{(n)}_k be an invertible butterfly factor matrix of size n and block size k. Its inverse is the block diagonal matrix formed by the inverses of each of its constituent butterfly factors. From above, (B^{(n)}_k)−1 is also a butterfly factor matrix of size n and block size k.\nFinally, consider M ∈ B. Then\nM−1 = (B^{(n)}_n B^{(n)}_{n/2} · · · B^{(n)}_2)−1 = (B^{(n)}_2)−1 (B^{(n)}_4)−1 · · · (B^{(n)}_n)−1 = B′^{(n)}_2 B′^{(n)}_4 · · · B′^{(n)}_n ∈ B∗.\nWe conclude with a closure result for the Kronecker product, another common matrix composition operation. Although Lemma H.7 is not directly used in the subsequent proofs, it allows, for example, the results for the DFT to be lifted to higher-dimensional Fourier transforms. We also note that the closure bound in Lemma H.7 can be tightened in such cases (cf. Remark H.4).\nLemma H.7. Let A, B ∈ F^{n×n}. If A ∈ (BB∗)^{w1}_e and B ∈ (BB∗)^{w2}_e then A ⊗ B ∈ (BB∗)^{w1+w2+6}_e.\nProof. Note that\nA ⊗ B = (A ⊗ I)(I ⊗ B) = P−1 (I ⊗ A) P (I ⊗ B),\nfor some permutation P. By Lemma H.3, I ⊗ A and I ⊗ B are in (BB∗)^{w1+2}_e and (BB∗)^{w2+2}_e respectively. The result follows from combining with P ∈ BB∗ and Lemma H.2." }, { "heading": "I SPARSE MATRICES IN BB∗ HIERARCHY", "text": "In this appendix, we prove Theorem 3. First, we consider matrices with at most n NNZ.\nLemma I.1. Let S be an n × n matrix with at most n NNZ. Then S ∈ (BB∗)^4.\nWe use this lemma and the addition closure lemma to prove Theorem 3.\nProof of Theorem 3. We note that any matrix with s NNZ is the sum of ⌈s/n⌉ matrices each with at most n NNZ, and we appeal to Lemma H.5.\nIn the rest of the section we will prove Lemma I.1. 
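As a concrete preview of this construction, the following PyTorch sketch (our own illustration) computes the core factorization S′ = HV′ from the proof below, given a matrix whose zero rows and columns have already been moved to the bottom and right (the roles of P1 and P3); the step-matrix structure of the two factors is defined next.

import torch

def step_factorization(S):
    # Factor an n x n floating-point matrix S with at most n nonzeros as
    # S = H @ V, with H having one nonzero per (nonzero) column and V
    # having one nonzero per (nonzero) row, following the proof of
    # Lemma I.1. torch.nonzero lists indices in row-major order, matching
    # the ordering of theta in the proof.
    n = S.shape[0]
    idx = torch.nonzero(S)
    H = torch.zeros(n, n, dtype=S.dtype)
    V = torch.zeros(n, n, dtype=S.dtype)
    for k, (i, j) in enumerate(idx):
        H[i, k] = S[i, j]   # column k carries the value c_k in row i_k
        V[k, j] = 1.0       # row k routes that value to column j_k
    return H, V

For example, for S = torch.tensor([[0., 2.], [3., 0.]]), this returns H = [[2., 0.], [0., 3.]] and V = [[0., 1.], [1., 0.]], and H @ V recovers S.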
We begin by defining two classes of matrices that will be used in our decomposition.\nDefinition I.1. An n × n matrix H is a horizontal step matrix if for every 0 ≤ i, i′ < n and 0 ≤ j ≤ j′ < n, if H[i, j] 6= 0 and H[i′, j′] 6= 0, then j′ − j ≥ (i′ − i) mod n. An n× n matrix V is a vertical step matrix if V∗ is a horizontal step matrix.\nWith this definition, the horizontal step matrix obeys a “Lipschitz-like\" condition. Each column of a horizontal step matrix can have at most one non-zero entry, and given two non-zero columns k apart, the non-zero entry in the right column must be between 0 and k rows below the non-zero entry in the left column. Note that to show that a matrix is a horizontal step matrix, it is sufficient to argue that this condition holds for each pair of neighboring non-zero columns.\nSimilarly, each row of a vertical step matrix can have at most one non-zero entry, and given two non-zero rows k apart, the non-zero entry in the lower row must be between 0 and k columns to the right of the non-zero entry in the upper row.\nLemma I.2. Let H be an n× n horizontal step matrix. Then H ∈ B.\nProof. We proceed by induction on n. The base case n = 2 is trivial. As our inductive hypothesis, we assume that all horizontal step matrices of size n2 × n 2 are butterfly matrices of size n 2 . From Definition 2.3, it is sufficient to show that H can be decomposed as:\nH = [ D1 D2 D3 D4 ] [ H1 0 0 H2 ] = [ D1H1 D2H2 D3H1 D4H2 ] , (4)\nwhere H1,H2 are n2 × n 2 horizontal step matrices and each Dk is a n 2 × n 2 diagonal matrix. Denote the four, n2 × n 2 corner submatrices of H by:\nH = [ H11 H12 H21 H22 ] .\nThen, define H1 and H2 by:\nH1 = H11 + H21 H2 = H12 + H22\nFor sake of contradiction, assume that H1 is not a horizontal step matrix. Then, there are 0 ≤ i, i′ < n2 , 0 ≤ j ≤ j′ < n2 such that H1[i, j] 6= 0, H1[i\n′, j′] 6= 0, and j′ − j < (i′ − i) mod n2 . From our definition of H1, the non-zero entries in columns j and j′ of H are either ( (i′ − i) mod n2 ) or(\nn 2 + (i ′ − i) mod n2 ) , both of which are greater than j′ − j, rows apart. This contradicts H being a horizontal step matrix. Hence, H1 must be a horizontal step matrix, as must H2 from an analogous argument.\nNext, we define D1,D2,D3,D4 by:\nD1[k, k] =\n{ 1 H21[k, :] = 0\n0 otherwise D2[k, k] =\n{ 1 H22[k, :] = 0\n0 otherwise\nD3[k, k] =\n{ 1 H11[k, :] = 0\n0 otherwise. D4[k, k] =\n{ 1 H12[k, :] = 0\n0 otherwise.\nTo finish the proof, we argue the correctness of the decomposition by equating arbitrary entries of each of the 4 corner submatrices. We begin with the upper left submatrix.\nD1H1[i, j] =\nn 2∑\nk=0\nD1[i, k] ·H1[k, j] by definition of matrix multiplication\n= D1[i, i] ·H1[i, j] D1 is a diagonal matrix = 1(H21[i,:]=0) · (H11[i, j] + H21[i, j]) by definition of D1 and H1\nHere, we consider two cases:\nCase 1: H21[i, j] 6= 0 Since H is a horizontal step matrix (and hence may have at most one non-zero entry per column), it follows that H11[i, j] = 0. In this case, the indicator function evaluates to 0, so D1H1[i, j] = 0 = H11[i, j], as desired.\nCase 2: H21[i, j] = 0\nIf H11[i, j] = 0, then D1H1[i, j] = 0 = H11[i, j]. Otherwise, for sake of contradiction, suppose that H21[i, :] 6= 0. Then, two of the first n2 columns of H would have non-zero entries n 2 rows apart, contradicting H being a horizontal step matrix. Hence, H21[i, :] = 0, so D1H1[i, j] = H11[i, j], as desired.\nIn all cases, D1H1[i, j] = H11[i, j], so our decomposition correctly recovers the upper left corner of H. 
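Before the remaining corners are handled, the recursive step (4) can be sanity-checked on a small example. The 4 × 4 horizontal step matrix below is our own illustration, not from the paper; the sketch builds H1, H2 and D1, ..., D4 exactly as defined in the proof and verifies the decomposition.

```python
import numpy as np

# A 4 x 4 horizontal step matrix: at most one nonzero per column, and the
# nonzero rows step down by no more than the column gap.
H = np.array([[1., 2., 0., 0.],
              [0., 0., 3., 0.],
              [0., 0., 0., 4.],
              [0., 0., 0., 0.]])
H11, H12, H21, H22 = H[:2, :2], H[:2, 2:], H[2:, :2], H[2:, 2:]

H1, H2 = H11 + H21, H12 + H22            # the two half-size step matrices
zero_rows = lambda M: (np.abs(M).sum(axis=1) == 0).astype(float)
D1, D2 = np.diag(zero_rows(H21)), np.diag(zero_rows(H22))
D3, D4 = np.diag(zero_rows(H11)), np.diag(zero_rows(H12))

# Equation (4): the left factor [[D1, D2], [D3, D4]] is a butterfly factor
# (nonzeros only on the diagonal and n/2 = 2 away from it), and H1, H2 are
# again horizontal step matrices.
recon = np.block([[D1 @ H1, D2 @ H2],
                  [D3 @ H1, D4 @ H2]])
assert np.allclose(recon, H)
```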
Analogous arguments show that the other three corners are also correctly recovered. Hence, our decomposition is correct, and by induction, H ∈ B.\nCorollary I.3. Let V be a vertical step matrix. Then V ∈ B∗.\nNow, we use step matrices to prove Lemma I.1.\nProof of Lemma I.1. Given S, we decompose it as S = P1HP2VP3, where eachP` is a permutation matrix, H is a horizontal step matrix, and V is a vertical step matrix. For an example of this, see Figure 9.\nWe first decompose S as S = P1S′P3, where P1 is the permutation that moves all 0 rows of S to the bottom and P3 is the permutation that moves all 0 columns of S to the right.\nNext, we further decompose S′ into S′ = HV′ as follows. Since S′ has s ≤ n NNZ, we can parameterize S′ by θ = {(ck, ik, jk) : 0 ≤ k < s} such that S′[ik, jk] = ck, with the non-zero entries indexed in row-major order. Define matrix H by:\nH[:, k] = { ck · eik 0 ≤ k < s 0 otherwise.\nDefine matrix V′ by:\nV′[k, :] = { eTjk 0 ≤ k < s 0 otherwise.\nTo show that S′ = HV′, we consider an arbitrary entry:\nHV′[i, j] = n∑ k=0 H[i, k] ·V′[k, j] by definition of matrix multiplication\n= s∑ k=0 H[i, k] ·V′[k, j] H is 0 in all but first s columns\n= s∑ k=0 ck · 1i=ik · 1j=jk by definition of H and V′\nHere, we note that (i, j) can equal (ik, jk) for at most one value of k since the locations in θ are unique. Hence, HV′[i, j] = ck only if (i, j) = (ik, jk) for some k, which is exactly the definition of S′. Hence, S′ = HV′.\nWe argue that H is a horizontal step matrix through a series of assertions. First, note that H has exactly one non-zero entry in each of its first s columns. Also, note that since θ is in row-major order, these non-zero entries are sorted (any column to the right cannot have a non-zero entry in a higher row). Hence, to show that H is a horizontal step matrix, it is sufficient to argue that adjacent columns of H have non-zero entries at most one row apart. This is equivalent to S′ having no zero rows between two non-zero rows, which is guaranteed by P1. Hence, H is a horizontal step matrix.\nSince V′ has at most one non-zero entry per row, we may permute the rows of V′ to obtain a matrix V, where the non-zero entries of V are sorted (any lower row below cannot have a non-zero entry in an earlier column). Hence, for some permutation matrix (P2) −1, V = (P2) −1\nV′, which implies that V′ = P2V. It has exactly one non-zero entry in each of its first s columns. From the action of P2, these non-zero entries are sorted. Therefore, by the same argument as for H above, VT is a horizontal step matrix. Hence, V is a vertical step matrix.\nIn all, we have found a decomposition S = P1HP2VP3, where each P` is a permutation matrix (∈ BB∗ by Theorem 2), H is a horizontal step matrix (∈ B by Lemma I.2), and V is a vertical step matrix (∈ B∗ by Corollary I.3). Moreover, by Lemma G.6, P2 ∈ B∗B, so H,P2,V can be combined to obtain HP2V ∈ (BB∗)2. By Lemma H.2, S ∈ (BB∗)4.\nCorollary I.4. Let R be an n× n matrix of rank r. Then R ∈ (BB∗)8r4 .\nProof. We can decompose R as R = GH∗ where G,H are n × r matrices. With appropriate zero-padding, both of these can be made into n×n matrices with at most rn NNZ. The proof follows immediately from Theorem 3 and Lemma H.2." }, { "heading": "J EXAMPLE OF K-MATRIX REPRESENTATION OF STRUCTURED MATRICES AND COMPARISON TO BP HIERARCHY", "text": "In this appendix, we show explicitly how some common structured matrices (e.g. originating from fast transforms) can be represented as K-matrices. 
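Before these examples, the constructive proof of Lemma I.1 above can be made concrete. The following NumPy sketch factors a 4 × 4 matrix with 4 nonzeros as S = P1 H P2 V P3 following the proof (our own toy example; here no zero row sits above a nonzero one, so P1 is the identity).

```python
import numpy as np

S = np.array([[0., 5., 0., 6.],
              [2., 0., 0., 0.],
              [0., 0., 0., 4.],
              [0., 0., 0., 0.]])
n = 4

# P3 moves the zero column (index 2) to the right; P1 = I here, so
# Sc = S @ P3^{-1} has its support compacted into the top-left.
p3 = [0, 1, 3, 2]
P3 = np.eye(n)[p3]                        # permutation as a matrix
Sc = S @ P3.T

# Index the nonzeros of Sc in row-major order: column k of H is c_k e_{i_k},
# row k of V' is e_{j_k}^T, so that Sc = H @ V'.
trips = [(Sc[i, j], i, j) for i in range(n) for j in range(n) if Sc[i, j] != 0]
H, Vp = np.zeros((n, n)), np.zeros((n, n))
for k, (c, i, j) in enumerate(trips):
    H[i, k] = c
    Vp[k, j] = 1.0
assert np.allclose(H @ Vp, Sc)            # H is a horizontal step matrix

# Sorting the rows of V' by their column index yields the vertical step
# matrix V and the permutation P2 with V' = P2 @ V.
order = np.argsort([j for _, _, j in trips], kind="stable")
V = Vp[order]
P2 = np.eye(n)[np.argsort(order)]         # inverse permutation as a matrix
assert np.allclose(P2 @ V, Vp)

assert np.allclose(H @ P2 @ V @ P3, S)    # S = P1 H P2 V P3 with P1 = I
```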
We also draw comparisons between the BB∗ hierarchy and the BP hierarchy introduced by Dao et al. (2019). Lemma J.1. Let Fn be the Discrete Fourier Transform of size n. Then Fn ∈ (BB∗)2.\nProof. From Parker (1995), we can express Fn as Fn = B P, where B ∈ B and P is a permutation (the bit reversal permutation). From Theorem 2, P ∈ BB∗. Hence, by Lemma H.2, Fn ∈ (BB∗)2.\nLemma J.2. Let Hn be the Hadamard Transform of size n. Then Hn ∈ BB∗.\nProof. Hn ∈ B, so trivially Hn ∈ BB∗.\nLemma J.3. Let Sn be the Discrete Sine Transform of size n. Then Sn ∈ (BB∗)2.\nProof. As described in Makhoul (1980), Sn can be performed as a scaled permutation (separating the even and odd indices of the input, and reversing and negating the odd indices) composed with Fn. Therefore, we may decompose Sn as Sn = B P2 D P1, where P1,P2 are permutations, B ∈ B, and D is a diagonal matrix. P2 D P1 is simply a permutation matrix with scaled entries, which can be equivalently expressed as D′ P′ for some diagonal matrix D′ and permutation P′. By Lemma H.1, B D′ ∈ BB∗. By Theorem 2, P′ ∈ BB∗. Hence, by Lemma H.2, Sn ∈ (BB∗)2.\nRemark J.4. An analogous argument shows that the Discrete Cosine Transform is also in (BB∗)2. Lemma J.5. Let Cn be an n× n circulant (convolution) matrix. Then Cn ∈ BB∗.\nProof. Using Theorem 2.6.4 of Pan (2001), we can express Cn as Cn = (Fn) −1\nDFn where Fn is the Discrete Fourier Transform and D is a diagonal matrix. (Fn) −1 = B P (with B ∈ B, P a permutation), which implies that Fn = (P) −1 (B) −1. Therefore\nCn = B P D (P) −1 (B) −1 .\nThe middle three factors have the effect of performing a permutation, scaling each element, and undoing the permutation, which is equivalent to simply scaling by some diagonal matrix D′. Hence, we are left with\nCn = B D ′ (B) −1 .\nBy Lemma H.1, B D′ ∈ B. By Lemma H.6, (B)−1 ∈ B∗. Hence, Cn ∈ BB∗.\nRemark J.6. We can expand any n× n Toeplitz matrix Tn into a 2n× 2n circulant matrix (with upper left n× n submatrix equal to Tn). Hence, Tn ∈ (BB∗)12 by Lemma J.5.\nThe Fastfood matrix class (Le et al., 2013) can be tightly captured in the BB∗ hierarchy: Lemma J.7. The product SHDPHB where S,D,B are diagonal matrices, H is the Hadamard transform, and P is a permutation matrix, is in (BB∗)3.\nProof. We have shown in Lemma J.2 that H ∈ BB∗, and in Theorem 2 that P ∈ BB∗. Since BB∗ is closed under diagonal multiplication (Lemma H.1), we conclude that SHDPHB ∈ (BB∗)3.\nThe two classes of matrices introduced in Moczulski et al. (2016), called AFDF and ACDC, are also tightly captured in the BB∗ hierarchy: Lemma J.8. Let AF−1DF be a product of a diagonal matrix A, the inverse Fourier transform F−1, another diagonal matrix D, and the Fourier transform F. Then AF−1DF ∈ BB∗. Let AC−1DC be a product of a diagonal matrix A, the inverse cosine transform C−1, another diagonal matrix D, and the cosine transform C. Then AC−1DC ∈ (BB∗)4.\nProof. We have argued in Lemma J.5 that F−1DF ∈ BB∗. Since BB∗ is closed under diagonal multiplication (Lemma H.1), we conclude that AF−1DF ∈ BB∗. We have shown that C ∈ (BB∗)2, so C−1 ∈ (BB∗)2 as well. Since BB∗ is closed under diagonal multiplication (Lemma H.1), we conclude that AC−1DC ∈ (BB∗)4.\nRemark J.9. Within each butterfly factor matrix of the DFT (excluding the bit reversal permutation) and the Hadamard transform, the columns are pairwise orthogonal and have norm 2. Hence, we can divide all factors by √ 2 to make orthogonal factor matrices. 
To counteract this scaling, we can add a diagonal matrix with √ 2 log2(n) = √ n in all entries to the factorization. By doing this we can place all of the above transforms in the OBB hierarchy (defined in Appendix K) with the same width and expansion factor." }, { "heading": "J.1 MULTI-DIMENSIONAL TRANSFORMS", "text": "Here, we show that, using larger matrices, we are able to similarly capture multi-dimensional versions of the above transforms. Lemma J.10. Let F2n be the 2-dimensional Discrete Fourier Transform (represented as an n2 × n2 matrix). Then F2n ∈ (BB∗)2.\nProof. The separation property of the 2-D DFT allows us to express its action on an n× n matrix as the composition of a 1-D DFT on each of its rows and a 1-D DFT on each of its columns. If we view the 2-D DFT as an n2 × n2 matrix, its input and outputs will both be column vectors of size n2. As our convention, we list the entries of the input vector in the row-major order corresponding to the n× n input matrix. Then, we consider the 2-D DFT in four steps, where the first two steps perform the 1-D DFT row-wise, and the second two steps perform the 1-D DFT column-wise:\nStep 1: Permute the columns:\nWe permute the columns (with a bit reversal permutation), which performs a bit reversal permutation on each row. Viewing the input as a vector, this step corresponds to left multiplication by a permutation matrix Pc that permutes the entries of each chunk of size n of the input vector. Step 2: Multiply each row by a butterfly matrix\nSince the entries of the input were listed in row major order, this step is achieved through multiplication by a block diagonal matrix of n butterfly matrices of size n, which can be viewed as a product of butterfly factor matrices B(n 2) n . . .B\n(n2) n 2 B (n2) 2 .\nStep 3: Permute the rows:\nWe permute the rows (with a bit reversal permutation), which performs a bit reversal permutation on each column. This corresponds to left multiplication by a permutation matrix Pr. Since we are permuting the rows, Pr permutes the entries at the granularity of each n-chunk. Since Steps 1 and 2 each performed an identical computation to each n-chunk we can move this row permutation before Step 2, combining Pc and Pr into a single permutation P.\nStep 4: Multiply each column by a butterfly matrix\nConsider multiplication by the first factor matrix. In each row, this matrix is taking linear combinations of adjacent column entries. In our length-n2 vector, these entries will be exactly n indices apart. Therefore this multiplication can be handled by a butterfly factor matrix B(n 2) 2n . Similarly, we find that this butterfly multiplication can be expressed as multiplication by a product of butterfly factor matrices B(n\n2) n2 . . .B (n2) n2\n2\nB (n2) 2n . Combined with the factor matrices from Step 2, these form a butterfly\nmatrix B of size n2.\nIn all, we see that the 2-D DFT may be realized as multiplication by a permutation matrix P followed by multiplication by a butterfly matrix B. The same argument as Lemma J.1 shows that F2n ∈ (BB∗)2.\nRemark J.11. An analogous argument (using the separation property of the respective transforms) can be used to argue that 2-D Discrete Sine and Discrete Cosine transforms are in (BB∗)2, and that 2-D Hadamard Transforms are in BB∗. Lemma J.12. Let C2n be a 2-dimensional convolution matrix. Then C2n ∈ BB∗.\nProof. 
We can express a 2-D convolution matrix as C2n = (F 2 n) −1DF2n, where D is diagonal, F 2 n is the 2-D Fourier transform and (F2n) −1 is the inverse 2-D Fourier transform. From the proof of Lemma J.10, we see that that we can express F2n (and similarly (F 2 n) −1) as the product of a butterfly matrix and a permutation matrix. The rest of the argument is analogous to the proof of Lemma J.5.\nRemark J.13. Using an inductive argument, we can show that all k-dimensional (k ∈ Z) variants of the above transforms, expressed as nk × nk matrices are contained in BB∗ or (BB∗)2. To do this, we use the separation property of the transforms to break them into a k − 1-dimensional transform (the inductive hypothesis) followed by a 1-dimensional transform." }, { "heading": "K THE ORTHOGONAL KALEIDOSCOPE HIERARCHY", "text": "Through practical application of the butterfly matrices, it has been found useful to constrain them in orthogonality. In Section K.1 we will modify the existing kaleidoscope hierarchy to create the orthogonal kaleidoscope hierarchy OBB. Then, in Section K.2, we will argue that all orthogonal matrices, and as a result all matrices, can also be expressed in this hierarchy in O(n) width. Lastly, in Section K.3, we will argue that permutation matrices and sparse matrices also exist in this hierarchy in O(1) width, which in turn implies a corresponding result for matrices with low-depth arithmetic circuits." }, { "heading": "K.1 DEFINITION", "text": "The definition of the orthogonal butterfly is identical to the original butterfly, with the constraint that all butterfly factors are orthogonal. We specify this definition below: Definition K.1 (Analog of Definition 2.1). An orthogonal butterfly factor of size k ≥ 2 (denoted as B̃k) is a butterfly factor that is also orthogonal. Definition K.2 (Analog of Definition 2.3). An orthogonal butterfly matrix of size n (denoted as B̃(n)) is a butterfly matrix with all butterfly factor matrices being orthogonal.\nNote that the above definition implies that an orthogonal butterfly matrix, as well as its conjugate transpose, is orthogonal.\nThe orthogonal hierarchy definition nearly mimics the original hierarchy Definition 2.4, as follows:" }, { "heading": "Definition K.3.", "text": "• We say that an n× n matrix M ∈ B̃ if we can express M = B̃(n). • We say that an n× n matrix M ∈ B̃∗ if we can express M = [ B̃(n) ]∗ .\n• We say that an n × n matrix M ∈ OBB if we can express M = M1DM2 for some M1 ∈ B̃,M2 ∈ B̃∗, and diagonal matrix D. Note that D need not be full rank.\n• Width w and expansion e in (OBB)we mimic the same definition as in the original hierarchy, using OBB instead of BB∗, such that E ∈ (OBB)w.\nBy padding if necessary, we will assume that n is a power of 2." }, { "heading": "K.2 EXPRESSIVITY", "text": "In this subsection we prove that all orthogonal (resp. unitary) matrices are contained in OBBn. To do this, we consider the class of Householder reflections, given by I− 2uu∗ for any unit vector u (Householder, 1958): Lemma K.1. All Householder reflections are in OBB with inner diagonal matrix I.\nWe will prove this lemma shortly. First, we use this lemma to present a decomposition for all orthogonal (resp. unitary) matrices. Lemma K.2. Let M be an n× n orthogonal/unitary matrix. Then M ∈ (OBB)n−1.\nProof. We consider the QR decomposition of M. It is known that we can compose M into a product of n− 1 Householder reflections and an orthogonal/unitary diagonal matrix (Householder, 1958).10 From Lemma K.1, each Householder reflection is in OBB. 
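The QR-based decomposition this proof relies on is easy to reproduce numerically: any real orthogonal matrix is a product of n − 1 Householder reflections times an orthogonal diagonal matrix. A minimal NumPy sketch (helper names are ours; the skip branch simply appends the identity when a column is already reduced):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal matrix

def householder(v):
    """Full-size reflection I - 2 v v^T for a unit vector v."""
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

Hs, R = [], M.copy()
for j in range(n - 1):                    # zero out column j below the diagonal
    x = R[j:, j]
    alpha = -np.copysign(np.linalg.norm(x), x[0])
    u = x - alpha * np.eye(len(x))[0]
    if np.linalg.norm(u) < 1e-12:
        Hs.append(np.eye(n)); continue
    v = np.zeros(n); v[j:] = u / np.linalg.norm(u)
    Hs.append(householder(v))
    R = Hs[-1] @ R

# R is orthogonal and upper triangular, hence diagonal with +-1 entries,
# and M is the product of n - 1 reflections times that diagonal matrix.
assert np.allclose(np.triu(R), R) and np.allclose(np.abs(np.diag(R)), 1.0)
P = np.eye(n)
for Hj in Hs:
    P = P @ Hj
assert np.allclose(M, P @ R)
```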
To complete the proof, we argue that R can be folded into the rightmost butterfly matrix. Let Q1 be the rightmost butterfly factor matrix in Q (∈ B̃(n)n ). Right multiplication of Q1 by R scales each columns of Q1 by some c ∈ C with ||c|| = 1 (R is unitary diagonal). This preserves both the sparsity\n10Q is the (orthogonal/unitary) product of n− 1 Householder reflections. R, the remaining upper triangular matrix after performing these reflections, is itself orthogonal/unitary, and therefore diagonal.\npattern of Q1 and the orthogonality of its columns. Moreover, the norm of each column of Q1R is 1. Therefore, Q1R is an orthogonal butterfly factor matrix, so M = QR ∈ (OBB)n−1, as desired.\nWe now return to the proof of Lemma K.1\nProof of Lemma K.1. Given u ∈ Cn (n a power of 2), let u0 = u[: n/2] ∈ Cn/2,u1 = u[n/2 :] ∈ Cn/2 denote the first and second halves of u.\nTo show that H ∈ OBB with inner diagonal matrix I, we proceed by induction. The base case for n = 2 is trivial. It suffices to show that there exist unitary butterfly factors L,R such that LHR has the form [ In/2 − 2v0v∗0 0\n0 In/2 − 2v1v∗1\n] for some unit vectors v0,v1 ∈ Cn/2.\nDefine\n(v0[i],v1[i]) = ( u0[i]√ |u0[i]|2+|u1[i]|2 , u1[i]√ |u0[i]|2+|u1[i]|2 ) if |u0[i]|2 + |u1[i]|2 6= 0\n(1, 0) otherwise . (5)\nIt is easily checked that v0[i] ∗v0[i] + v1[i] ∗v1[i] = 1\nv0[i] ∗u0[i] + v1[i] ∗u1[i] = √ |u0[i]|2 + |u1[i]|2\nv1[i]u0[i]− v0[i]∗u1[i] = 0 . (6)\nWe choose\nL =\n[ Diag(v∗0) Diag(v ∗ 1)\nDiag(v1) Diag(−v0) ] and R = L∗. L,R are (permuted) direct sums of blocks of the form [ v0[i] ∗ v1[i] ∗\nv1[i] −v0[i]\n] , which are\northogonal by construction (via (5)). Hence, L ∈ B̃(n)n and R ∈ (B̃∗)(n)n . Further,\nLHR =\n[ Diag(v∗0) Diag(v ∗ 1)\nDiag(v1) Diag(−v0)\n]( I− 2 [ u0 u1 ] [ u0 u1 ]∗)[ Diag(v∗0) Diag(v ∗ 1) Diag(v1) Diag(−v0) ]∗ = I− 2 [ Diag(v∗0) Diag(v ∗ 1)\nDiag(v1) Diag(−v0) ] [ u0 u1 ] [ u0 u1 ]∗ [ Diag(v∗0) Diag(v ∗ 1) Diag(v1) Diag(−v0) ]∗ = I− 2 [ v∗0 ◦ u0 + v∗1 ◦ u1 v1 ◦ u0 − v0 ◦ u1 ] ︸ ︷︷ ︸\nw\n[ v∗0 ◦ u0 + v∗1 ◦ u1 v1 ◦ u0 − v0 ◦ u1 ] ︸ ︷︷ ︸\nw\n∗\n,\nwhere ◦ denotes the Hadamard product. From (6)\nw[i] =\n{√ |u0[i]|2 + |u1[i]|2 i ∈ [n/2]\n0 i ∈ [n/2 : n] Denoting the first half of this vector by w0 ∈ Cn/2, we have\nLHR =\n[ I− 2w0w∗0 0\n0 I\n] ,\nwhere ‖w0‖2 = ‖u‖2 = 1. The result follows inductively.\nAs an immediate corollary, we can use Singular Value Decomposition to obtain a factorization for an arbitrary n× n matrix. Corollary K.3. Let M be an arbitrary n × n matrix. Then, M ∈ (OBB)2n−1, where all but one matrix in the decomposition is orthogonal (unitary).\nProof. By employing Singular Value Decomposition, we can decompose M as M = UΣV∗, where U,V∗ are orthogonal and Σ is diagonal. By Lemma K.2, U,V∗ ∈ (OBB)n−1, and trivially Σ ∈ OBB. Hence, M ∈ (OBB)2n−1. Note that Σ is the only matrix in the decomposition that is not orthogonal (unitary)." }, { "heading": "K.3 CONSTRUCTIONS", "text": "We show that we can construct s-sparse matrices in the OBB hierarchy with the same width as the BB∗ hierarchy. The proof follows a structure to that of Theorem 3. We begin by arguing about permutation and step matrices, then using the same factorization to argue that matrices with at most n NNZ are contained in (BB∗)4. Then, we will appeal to a modified sum closure lemma to extend the argument to matrices of general s NNZ. Similar to Appendix F, we can use these results to place all matrices with low-depth circuits for matrix vector multiplication in the OBB hierarchy." 
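Before turning to the constructions, the recursive step in the proof of Lemma K.1 above can be checked directly. A minimal NumPy sketch for n = 4 in the real case (conjugates become transposes; `u` is a random unit vector, and we assume the generic case r > 0 entrywise so the guard in equation (5) is not needed):

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
Hh = np.eye(n) - 2.0 * np.outer(u, u)    # Householder reflection I - 2uu^T

u0, u1 = u[: n // 2], u[n // 2 :]
r = np.sqrt(u0**2 + u1**2)               # generic case: r > 0 entrywise
v0, v1 = u0 / r, u1 / r                  # equation (5)

# L is an orthogonal butterfly factor (nonzeros on the diagonal and n/2
# positions away from it); R = L^T in the real case.
L = np.block([[np.diag(v0), np.diag(v1)],
              [np.diag(v1), -np.diag(v0)]])
R = L.T
assert np.allclose(L @ L.T, np.eye(n))

# Conjugation block-diagonalizes the reflection: L @ Hh @ R equals
# diag(I - 2 w0 w0^T, I) with w0 = r, a half-size Householder reflection.
half = n // 2
target = np.block([[np.eye(half) - 2.0 * np.outer(r, r), np.zeros((half, half))],
                   [np.zeros((half, half)), np.eye(half)]])
assert np.allclose(L @ Hh @ R, target)
```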
}, { "heading": "K.3.1 PERMUTATIONS", "text": "We begin by presenting the argument that permutations are included in OBB as a corollary to Theorem 2. Corollary K.4. Let P be a permutation matrix. Then P ∈ B̃B̃∗.\nProof. We appeal to the decomposition from Theorem 2, noting that all butterfly factor matrices constructed in the proofs of Lemmas G.3 and G.1 are permutation matrices, and thus are orthogonal. Hence, P ∈ OBB where the inner diagonal matrix is I.\nSimilarly, the construction of Lemma G.6 also show that permutations are included in B̃∗B̃. Corollary K.5. Let P be a permutation matrix. Then P ∈ B̃∗B̃.\nTo prove the containment of sparse matrices within theOBB hierarchy, we make use of the following lemma. Lemma K.6. Let P be a permutation matrix and D a diagonal matrix. Then there exist diagonal matrices D′ and D′′ such that:\nPD = D′P DP = PD′′.\nProof. Let σ be the permutation such that P[i, j] = δi,σ(j).\nDefine D′ such that D′[σ(j), σ(j)] = D[j, j]. Then, if i = σ(j):\n(PD)[i, j] = P[i, j]D[j, j] = D′[σ(j), σ(j)]P[σ(j), j] = (D′P)[σ(j), j] = (D′P)[i, j].\nOtherwise, if i 6= σ(j), then (PD)[i, j] = 0 = (D′P)[i, j]. Hence, PD = D′P. Define D′′ such that D′′[j, j] = D[σ(j), σ(j)]. An analogous argument to above shows that DP = PD′′." }, { "heading": "K.3.2 STEP MATRICES", "text": "In the BB∗ hierarchy (Lemma I.2), we were able to show that horizontal step matrices are butterfly matrices. Here, we present a similar result for the OBB hierarchy. Lemma K.7. Let H be an n× n horizontal step matrix. Then we can decompose H = DO, where" }, { "heading": "D is a diagonal matrix and O ∈ B̃.", "text": "Proof. Throughout the proof, we make reference to the original horizontal step matrix construction given in Lemma I.2 and its proof.\nTo begin, we show that an arbitrary 2k × 2k butterfly factor H2k in the decomposition of H can be expressed as the product of a diagonal matrix and an orthogonal butterfly factor. Since a butterfly factor is direct sum of 2 × 2 matrices, there is a permutation matrix P2k such that conjugation of H2k by P2k gives a block diagonal matrix H′2k of n 2 2× 2 matrices, i.e.\nP2kH2kP ∗ 2k = H ′ 2k .\n(See Figure 10 for an illustration.) Specifically, P2k is the permutation where:\nPs[2i, :] = e T i Ps[2i+ 1, :] = e T i+n2 .\nWe argue that each of these 2×2 blocks can be decomposed into a diagonal matrix times an orthogonal matrix. Note that the butterfly factor matrices constructed in the proof of Lemma I.2 each have at most one non-zero entry per column. Hence, there are 4 cases to consider. Note that matrices with at most one non-zero entry are exhausted by Cases 1 and 2.\nCase 1: [ a 0 0 b ] = [ a 0 0 b ] ︸ ︷︷ ︸\nD\n[ 1 0 0 1 ] ︸ ︷︷ ︸\nO Case 2: [ 0 a b 0 ] = [ a 0 0 b ] ︸ ︷︷ ︸\nD\n[ 0 1 1 0 ] ︸ ︷︷ ︸\nO Case 3: [ a b 0 0 ] = [√ a2 + b2 0 0 0 ] ︸ ︷︷ ︸\nD\n[ a√\na2+b2 b√\na2+b2 b√\na2+b2 −a√ a2+b2 ] ︸ ︷︷ ︸\nO\n, a, b 6= 0\nCase 4: [ 0 0 a b ] = [ 0 0 0 √ a2 + b2 ] ︸ ︷︷ ︸\nD\n[ b√\na2+b2 −a√ a2+b2\na√ a2+b2 b√ a2+b2 ] ︸ ︷︷ ︸\nO\n, a, b 6= 0\nIn the last two cases, O is a 2 × 2 rotation matrix, which is commonly known to be orthogonal. Assume that we perform the above decomposition on all of the blocks of H′2k in parallel, therefore expressing H′2k = D ′O′. We now have\nH2k = P ∗ 2kD ′O′P2k .\nBy Lemma K.6, we can rewrite this as H2k = D ′′P∗2kO ′P2k . Note that P∗2kO ′P2k is the product of three orthogonal matrices, and thus orthogonal. 
Additionally, the construction of P2k ensures that P∗2kO ′P2k is butterfly factor.11 Hence, H2k can be expressed as the product of a diagonal matrix and an orthogonal butterfly factor, as desired.\nNow, we show that this decomposition of butterfly factors implies Lemma K.7. By performing this decomposition in parallel on each butterfly factor, we conclude that any butterfly factor matrix H(n)\n2k\nof H can be decomposed as H(n) 2k = D2kO (n) 2k .12\n11Conjugation by P2k is an isomorphism from 2 k × 2k butterfly factors onto block diagonal matrices with\n2k−1, 2× 2 blocks. Therefore, conjugation by P−1 2k = P∗2k maps a block diagonal matrix to a butterfly factor. 12Note that a block diagonal matrix composed of orthogonal matrices is, itself, orthogonal.\nWe complete the argument by induction on n. The base case n = 2 holds by the observation about butterfly factor matrices above. Assume that any horizontal step matrix of size n2 × n 2 can be expressed as a diagonal matrix times an orthogonal butterfly matrix. Now, consider the n× n horizontal step matrix H. From Lemma I.2, H can be expressed as\nH = B(n)n [ H1 0 0 H2 ] ,\nwhere H1,H2 are n2 × n 2 horizontal step matrices. By our inductive hypothesis,\nH = B(n)n D1 [ O1 0 0 O2 ] ,\nwhere D1 is diagonal and O1,O2 are n2 × n 2 matrices in B̃. However, B (n) n D1 is a butterfly factor, and therefore can be expressed as DnO (n) n . Therefore,\nH = DnO (n) n [ O1 0 0 O2 ] = DnO,\nwith O ∈ B̃, as desired.\nJust as with the BB∗ hierarchy, the decomposition of vertical step matrices falls out as an immediate corollary to the horizontal step matrix proof.\nCorollary K.8. Let V be a vertical step matrix. Then we can decompose V = O∗D, where D is a diagonal matrix and O∗ ∈ B̃∗." }, { "heading": "K.3.3 SPARSE MATRICES", "text": "Now that we have argued about the decomposition of permutation and step matrices in the OBB hierarchy, we can leverage the construction from Lemma I.1 to argue about matrices with at most n NNZ.\nCorollary K.9. Let S be an n× n matrix with at most n NNZ. Then, S ∈ (OBB)4.\nProof. We use the construction from Lemma I.1, along with Lemma K.7 and Corollary K.8, to express S as:\nS = O1O ′ 1︸ ︷︷ ︸\nP1\nD2O2︸ ︷︷ ︸ H O3O ′ 3︸ ︷︷ ︸\nP2\nO′4D4︸ ︷︷ ︸ V O5O ′ 5︸ ︷︷ ︸\nP3\n,\nwith each Oi ∈ B̃, each O′j ∈ B̃∗, and each Dk diagonal. Since P2 is a permutation, by Corollary K.5, we can write it as Õ3 ′ Õ3 for some Õ3\n′ ∈ B̃∗ and Õ3 ∈ B̃. Moreover, noting that O′1 and O5 are permutations, we make use of Lemma K.6 to re-express S as:\nS = O1D ′ 2O ′ 1︸ ︷︷ ︸\nM1\nO2Õ3 ′︸ ︷︷ ︸\nM2\nÕ3O ′ 4︸ ︷︷ ︸\nM3\nO5D ′ 4O ′ 5︸ ︷︷ ︸\nM4\n.\nNote that each M` ∈ OBB. Hence, S ∈ (OBB)4, as desired.\nJust as in Appendix I, we would like to extend this orthogonal-based construction to capture matrices of general sparsity. To accomplish this, we introduce an addition closure lemma analogous to Lemma K.10 for the OBB hierarchy. Lemma K.10. Let A1, . . . ,Am be k × k matrices in (OBB)we then ∑m i=1 Ai ∈ (OBB)mw4e .\nWith Lemma K.10, we arrive at the following Corollary on general orthogonal sparsity.\nCorollary K.11. Let S be an n× n matrix with s NNZ. Then, S ∈ (OBB)4d s ne\n4 .\nProof. Just as in the proof of Theorem 3, we accomplish this using a sum of ⌈ s n ⌉ matrices of at most n NNZ. For handling the sum of matrices, we need to appeal to Lemma K.10.\nTo conclude the argument, we give the proof of Lemma K.10.\nProof of Lemma K.10. For each 1 ≤ i ≤ m, let EAi ∈ Fek×ek be defined such that Ai = SEAiS∗ (with S as in Definition 2.4). Note that EAi ∈ (OBB)w. 
Consider matrices of the form:Iek 0 0 EAi0 Iek 0 0Iek 0 0 -EAi\n0 Iek 0 0 ︸ ︷︷ ︸\nMi ∈ F4ek×4ek\n= √ 2\n[ 1√ 2 I2ek 1√ 2 I2ek\n1√ 2 I2ek - 1√2I2ek ] ︸ ︷︷ ︸\nO ∈ B̃(4ek)4ek\nIek 0 0 00 Iek 0 00 0 EAi 0 0 0 0 0 ︸ ︷︷ ︸\nK\nIek 0 0 00 Iek 0 00 0 0 Iek 0 0 Iek 0 ︸ ︷︷ ︸\nP ∈ B̃(4ek)2ek\nNote that K, a block diagonal matrix composed of matrices in (OBB)w, is itself in (OBB)w since\nK = w∏ j=1 Iek 0 0 00 Iek 0 00 0 Oj 0 0 0 0 Iek ︸ ︷︷ ︸\nLj ∈ B̃\nIek 0 0 00 Iek 0 00 0 Dj 0 0 0 0 0 ︸ ︷︷ ︸\nDiagonal\nIek 0 0 00 Iek 0 00 0 O′j 0 0 0 0 Iek ︸ ︷︷ ︸\nRj ∈ B̃∗\n,\nwhere each Oj is a ek × ek matrix in B̃, and each O′j is a ek × ek matrix in B̃∗. Lw (the leftmost factor) is a block diagonal matrix composed of 4 ek × ek matrices in B̃. Therefore, we can fold O into this factor (since a butterfly factor in B̃(4ek)4ek was not yet used in Lw) to conclude that OLw ∈ B̃. Similarly, since no btterfly factor from B̃(4ek)2ek has been used in R1, we may fold P into R1 to conclude that R1P ∈ B̃∗. Finally, we address the scalar multiple of √ 2 by multiplying all entries of\nany diagonal matrix in the decomposition of K by √\n2. Hence, we may conclude that Mi ∈ (OBB)w. Through repeated application (m times) of the identityI A1 0 B10 I 0 0I A2 0 B2\n0 I 0 0\n I 0 0 C10 I 0 0I 0 0 C2\n0 I 0 0\n = I A1 + B1 0 C10 I 0 0I A2 + B2 0 C1\n0 I 0 0 , (7) we see that\nm∏ i=1 Mi =\nIek ∑m i=2 EAi 0 EA1 0 Iek 0 0\nIek -EAm + ∑m−1 i=2 EAi 0 EA1\n0 Iek 0 0 ︸ ︷︷ ︸\nM ∈ F4en×4en\n.\nTherefore, M ∈ (OBB)mw. Next, we note that\nm∑ i=1 Ai = SM 0 Iek 0 IekIek 0 Iek 00 Iek 0 -Iek Iek 0 -Iek 0 ︸ ︷︷ ︸\nQ\nST .\nWe would like to show that we can fold Q into the rightmostOBB factor of M. The rightmost matrix in the decomposition of M is P. Note that\nPQ = 0 Iek 0 IekIek 0 Iek 0Iek 0 -Iek 0 0 Iek 0 -Iek = √2 0 Iek 0 0Iek 0 0 00 0 Iek 0 0 0 0 Iek ︸ ︷︷ ︸\nB̃ (4ek) 2ek\n[ 1√ 2 I2ek 1√ 2 I2ek\n1√ 2 I2ek - 1√2I2ek ] ︸ ︷︷ ︸\nB̃ (4ek) 4ek\n.\nJust as earlier, the factor of √\n2 can be multiplied through any diagonal matrix. Also, these two orthogonal butterfly factor matrices can be folded into the the rightmost R matrix (the decomposition of K above does not use these two, rightmost butterfly factors). Hence, ∑m i=1 Ai ∈ (OBB)mw4e , as desired." }, { "heading": "K.3.4 ARITHMETIC CIRCUITS", "text": "Just as in Theorem 1, we can use the sparsity result in Lemma K.10 to place matrices with low-depth (linear) arithmetic circuits for matrix vector multiplication in the OBB hierarchy. Corollary K.12. Let M be an n× n matrix such that matrix-vector multiplication of M times an arbitrary vector v can be represented as a be a linear arithmetic circuit C comprised of s gates (including inputs) and having depth d. Then, M ∈ (OBB)O(d)O( sn ).\nProof. We use the construction given in the proof of Theorem 1. Corollaries K.9 and K.4 allow us to recover the same width and expansion factor with the OBB hierarchy." }, { "heading": "L RELU NETWORK WITH STRUCTURED WEIGHT MATRICES", "text": "We show that for any neural network with ReLU nonlinearities and whose weight matrices have arithmetic circuits with few gates, its linear network counterpart (obtained by removing all the ReLU’s) also has an arithmetic circuit with not too many more gates. This implies that in trying to find the smallest arithmetic circuit augmented with ReLU gates to represent a ReLU network, one might as well try to find the smallest arithmetic circuits that represent the matrix-vector multiplication of each weight matrix.\nProposition 2. 
Consider a neural network architecture consisting of L layers with weight matrices W_1, . . . , W_L ∈ F^{n×n} and ReLU nonlinearities in between. Suppose that matrix-vector multiplication of W_i times an arbitrary vector v can be represented as a linear arithmetic circuit with s_i gates (including inputs). Then there exists an arithmetic circuit augmented with ReLU gates, with \sum_{i=1}^{L} s_i + Ln total gates, that computes the output ReLU(W_L(. . . ReLU(W_1 v))) of the network for an arbitrary input vector v.\nConversely, if there is an arithmetic circuit augmented with ReLU gates with s total gates that computes all the activations of the network ReLU(W_1 v), . . . , ReLU(W_L . . . ReLU(W_1 v)) for an arbitrary input v, then there exists an arithmetic circuit augmented with ReLU gates with 2s + 2Ln total gates that computes the activations of the network without ReLU: W_1 v, . . . , W_L . . . W_1 v.\nProof of Proposition 2. To compute the output of the network ReLU(W_L(. . . ReLU(W_1 v))), we first compute the matrix-vector product W_1 v with an arithmetic circuit of s_1 gates by assumption, and use n additional ReLU gates to compute the pointwise ReLU. Then we repeat the process for layers 2, 3, . . . , L, using the arithmetic circuits of W_1, . . . , W_L and Ln additional gates for the ReLUs. In total we obtain an arithmetic circuit augmented with ReLU gates with \sum_{i=1}^{L} s_i + Ln total gates.\nConversely, to build an arithmetic circuit augmented with ReLU gates that computes W_1 v, . . . , W_L . . . W_1 v, we pass v and then −v through the circuit that computes ReLU(W_1 x) for an arbitrary x, to get ReLU(W_1 v) and ReLU(−W_1 v). Noting that x = ReLU(x) − ReLU(−x), we can use n additional gates to compute W_1 v from ReLU(W_1 v) and ReLU(−W_1 v). We repeat the process for layers 2, 3, . . . , L (for example, on layer 2 we pass W_1 v and −W_1 v to the circuit that computes W_2 x for an arbitrary x). Overall we need to double the circuit that computes all the activations of the network ReLU(W_1 v), . . . , ReLU(W_L . . . ReLU(W_1 v)), requiring 2s gates. We also need n additional gates per layer to compute the negation of the input to that layer (e.g. computing −v from v), and n additional gates per layer to subtract the outputs of the ReLU circuit (e.g. computing W_1 v from ReLU(W_1 v) and ReLU(−W_1 v)). Therefore we can construct an arithmetic circuit augmented with ReLU gates with 2s + 2Ln total gates that computes the activations of the network without ReLU: W_1 v, . . . , W_L . . . W_1 v.\nWe now prove an asymptotic bound on the VC dimension of a ReLU network whose weight matrices are kaleidoscope matrices with bounded width and expansion.\nProposition 3. Let F be the class of ReLU neural networks consisting of L layers, where each layer is a K-matrix with width and expansion bounded by some constant C. Suppose that the network has W total parameters. Let sign F denote the corresponding classification functions: {x 7→ sign f(x) : f ∈ F}. Then this class has VC dimension:\nVCdim(sign F) = O(LW log W).\nWe leverage the result from Thomas et al. (2018) for the case where the entries of the weight matrices interact multiplicatively, but with polynomially bounded degrees. This proof is similar to the VC bound for ReLU networks whose weight matrices are butterfly matrices (Dao et al., 2019).\nProof. To use Theorem 3 of Thomas et al. (2018), we simply need to check that the entries of each linear layer, as polynomials of the parameters, have degree at most c_1 m_l^{c_2} for some universal constants c_1, c_2 > 0, where m_l is the size of the output of the l-th layer.
If the network weight matrices are K-matrices with bounded width and expansion, each weight matrix is a product of at most c_3 log m_l sparse factors, for some universal constant c_3 > 0. This means that the degree is polynomially bounded, which satisfies the condition of the theorem. Therefore the VC dimension is bounded to be almost linear in the number of parameters:\nVCdim(sign F) = O(LW log W)." }, { "heading": "M ARITHMETIC CIRCUIT PRIMER", "text": "We give a quick overview of arithmetic circuits. This is a model of computation that has been studied for numerous computational problems (and is the basic model for algebraic complexity theory). For our purposes, we focus exclusively on arithmetic circuits for the matrix-vector multiplication problem. For a more detailed exposition, the reader is referred to the standard book on this topic (Bürgisser et al., 2013).\nDefinition M.1 (Arithmetic Circuits). An arithmetic circuit that computes y = Ax (for A ∈ F^{m×n}) has n input gates (corresponding to x[0], . . . , x[n − 1]) and m output gates (corresponding to y[0], . . . , y[m − 1]). All the internal gates correspond to addition, subtraction, multiplication and division^13 over the underlying field F. The circuit is also allowed to use constants from F for ‘free.’ The definition of the internal gates can depend on A (as well as x, of course). In other words, one can ‘bake’ the knowledge about A into the circuit.\nThe size s of a circuit is n plus the number of addition, multiplication, subtraction and division gates used in the circuit. The depth d of a circuit is the minimum number of layers such that all gates in a given layer take as their inputs gates from previous layers.^14\nOne drawback of arithmetic circuits (especially for infinite fields, e.g. F = R, which is our preferred choice in this work) is that they assume operations over F can be performed exactly. In particular, the model ignores the precision issues involved with real arithmetic. Nonetheless, it turns out to be a very useful model for reasoning about the complexity of matrix-vector multiplication for any family of matrices.\nPerhaps the strongest argument in support of arithmetic circuits is that a large (if not an overwhelming) majority of matrix-vector multiplication algorithms also imply an arithmetic circuit of size comparable to the runtime of the algorithm (and the depth of the circuit roughly corresponds to the time taken to compute it by a parallel algorithm). For example, consider the obvious algorithm to compute Ax (i.e. for each i ∈ [m], compute y[i] as the sum \sum_{j=0}^{n−1} A[i, j] x[j]). It is easy to see that this algorithm implies an arithmetic circuit of size O(nm) and depth O(log n).^15\n^13 Here we assume all the gates have two inputs.\n^14 The input layer corresponding to the input gates does not contribute to the depth.\n^15 The claim on the depth follows from the fact that each of the sums \sum_{j=0}^{n−1} A[i, j] x[j] can be computed in parallel. Further, the sum for each i ∈ [m] can be done in log_2 n depth by first computing the partial sums A[i, 2j′] x[2j′] + A[i, 2j′ + 1] x[2j′ + 1] for all j′ ∈ [n/2] in parallel and recursively computing pairwise sums until we are done.\nOne thing to note about the arithmetic circuit above is that all the multiplications involve at least one input that is a constant from F (recall that we can assume that the entries of A are constants that can be used to build the circuit). This leads to the following important sub-class of arithmetic circuits:\nDefinition M.2 (Linear Arithmetic Circuits).
An arithmetic circuit is called a linear arithmetic circuit if it only uses addition, subtraction and multiplication. Further, every multiplication has a fixed constant from F as at least one of its two inputs. In other words, all gates in the circuit are linear functions of their inputs (i.e. of the form ax + by for fixed constants a, b ∈ F).\nIntuitively, for matrix-vector multiplication it makes sense to consider linear arithmetic circuits, since the final function we want to compute, Ax, is indeed a linear function of its inputs. For infinite fields (e.g. F = R or F = C), it turns out that this restriction is essentially without loss of generality:\nTheorem 4 (Bürgisser et al., 2013). Let F be an infinite field. Any (general) arithmetic circuit to compute Ax over F of size s and depth d can be converted into a linear arithmetic circuit of size O(s) and depth O(d).\nThe above result implies that for asymptotic considerations, linear arithmetic circuits for matrix-vector multiplication are equivalent to general arithmetic circuits.^16\nOne important property of linear arithmetic circuits of depth d, which we will use in our arguments, is that such a circuit can be equivalently represented as a product of d sparse matrices (see the proof of Theorem 1 for the precise derivation^17).\nAs mentioned earlier, a vast majority of efficient matrix-vector multiplication algorithms are equivalent to small (both in size and depth) linear arithmetic circuits. For example, the FFT can be thought of as an efficient arithmetic circuit to compute the Discrete Fourier Transform (indeed, when one converts the linear arithmetic circuit for the FFT into a matrix decomposition,^18 each matrix in the decomposition is a butterfly factor, with each block matrix in each factor being the same). For an illustration, consider the DFT with n = 4 as illustrated in Figure 11.\nFigure 12 represents the arithmetic circuit corresponding to the FFT with n = 4.\n^16 This follows from the fact that by definition any linear arithmetic circuit is also an arithmetic circuit; the other direction follows from Theorem 4.\n^17 To the best of our knowledge, this connection was explicitly made by De Sa et al. (2018), though the connection seems to be folklore.\n^18 Using the conversion mentioned in the paragraph above.\nFinally, Figure 13 is a representation of the arithmetic circuit of Figure 12 as a product of a butterfly matrix and (the bit-reversal) permutation. We note that our generic conversion from arithmetic circuits to BB∗ decompositions is not as tight as the one in Figure 13.\nOne reason the vast majority of existing efficient matrix-vector multiplication algorithms lead to (linear) arithmetic circuits is that they generally are divide-and-conquer algorithms that use polynomial operations such as polynomial multiplication or evaluation (both of which are themselves divide-and-conquer algorithms that use the FFT as a black box) or polynomial addition. Each of these pieces is well known to have small (depth and size) linear arithmetic circuits (since the FFT has these properties). Finally, the divide-and-conquer structure of the algorithms leads to the circuit being of low depth. See the book of Pan (Pan, 2001) for a more elaborate description of this connection.\nIn fact, the recent work of De Sa et al. (De Sa et al., 2018) makes this fact explicit and presents the most general known structure on matrices that implies near-linear-size linear arithmetic circuits for the corresponding matrix-vector multiplication.
Their work combines two separate classes of structured matrices – orthogonal polynomial transforms (Driscoll et al., 1997; Szegö, 1967) as well as matrices with low displacement rank (Kailath et al., 1979; Olshevsky & Shokrollahi, 2000) – and presents a class of linear arithmetic circuits that solves their matrix-vector multiplication problem. We note that structured matrices with low displacement rank have been used to replace fully connected layers in some neural network architectures (Sainath et al., 2013; Thomas et al., 2018)." } ]
2020
KALEIDOSCOPE: AN EFFICIENT, LEARNABLE REPRESENTATION FOR ALL STRUCTURED LINEAR MAPS
SP:4c48dff5afc7fefe00e4c7e92e319ae4a68165cd
[ "The paper proposes a metric for unsupervised model (and hyperparameter) selection for VAE-based models. The essential basis for the metric is to rank the models based on how much disentanglement they provide. This method relies on a key observation from this paper [A] viz., disentangled representations by any VAE-based model are likely to be similar (upto permutation and sign).", "This paper addresses the problem of unsupervised model selection for disentangled representation learning. Based on the understanding of “why VAEs disentangle” [Burgess et al. 2017, Locatello et al. 2018, Mathieu et al. 2019, Rolinek et al. 2019], the authors adopt the assumption that disentangled representations are all alike (up to permutation and sign inverse) while entangled representations are different, and propose UDR method and its variants. Experimental results clearly show that UDR is a good approach for hyperparameter/model selection." ]
Disentangled representations have recently been shown to improve fairness, data efficiency and generalisation in simple supervised and reinforcement learning tasks. To extend the benefits of disentangled representations to more complex domains and practical applications, it is important to enable hyperparameter tuning and model selection of existing unsupervised approaches without requiring access to ground truth attribute labels, which are not available for most datasets. This paper addresses this problem by introducing a simple yet robust and reliable method for unsupervised disentangled model selection. Our approach, Unsupervised Disentanglement Ranking (UDR), leverages the recent theoretical results that explain why variational autoencoders disentangle (Rolinek et al., 2019), to quantify the quality of disentanglement by performing pairwise comparisons between trained model representations. We show that our approach performs comparably to the existing supervised alternatives across 5400 models from six state of the art unsupervised disentangled representation learning model classes. Furthermore, we show that the ranking produced by our approach correlates well with the final task performance on two different domains.
[ { "affiliations": [], "name": "Sunny Duan" }, { "affiliations": [], "name": "DeepMind sunnyd@google.com" }, { "affiliations": [], "name": "Loic Matthey" }, { "affiliations": [], "name": "Andre Saraiva" }, { "affiliations": [], "name": "Irina Higgins" } ]
[ { "authors": [ "Alessandro Achille", "Tom Eccles", "Loic Matthey", "Christopher P Burgess", "Nick Watters", "Alexander Lerchner", "Irina Higgins" ], "title": "Life-long disentangled representation learning with cross-domain latent homologies", "venue": null, "year": 2018 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "James Bergstra", "Remi Bardenet", "Yoshua Bengio", "Balazs Kegl" ], "title": "Algorithms for hyper-parameter optimization", "venue": null, "year": 2011 }, { "authors": [ "Christopher P. Burgess", "Irina Higgins", "Arka Pal", "Loic Matthey", "Nick Watters", "Guillaume Desjardins", "Alexander Lerchner" ], "title": "Understanding disentangling in β-VAE", "venue": "NIPS Workshop of Learning Disentangled Features,", "year": 2017 }, { "authors": [ "Christopher P Burgess", "Loic Matthey", "Nick Watters", "Rishabh Kabra", "Irina Higgins", "Matt Botvinick", "Alexander Lerchner" ], "title": "MONet: Unsupervised scene decomposition and representation", "venue": null, "year": 2019 }, { "authors": [ "Tian Qi Chen", "Xuechen Li", "Roger Grosse", "David Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": null, "year": 2018 }, { "authors": [ "Taco Cohen", "Max Welling" ], "title": "Group equivariant convolutional networks", "venue": null, "year": 2016 }, { "authors": [ "Pierre Comon" ], "title": "Independent component analysis, a new concept", "venue": "Signal Processing,", "year": 1994 }, { "authors": [ "Cian Eastwood", "Christopher K.I. Williams" ], "title": "A framework for the quantitative evaluation of disentangled representations", "venue": null, "year": 2018 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Volodymyr Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning", "Shane Legg", "Koray Kavukcuoglu" ], "title": "IMPALA: Scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "arxiv,", "year": 2018 }, { "authors": [ "Marta Garnelo", "Kai Arulkumaran", "Murray Shanahan" ], "title": "Towards deep symbolic reinforcement learning", "venue": "arXiv preprint arXiv:1609.05518,", "year": 2016 }, { "authors": [ "David R. Hardoon", "Sandor Szedmak", "John Shawe-Taylor" ], "title": "Canonical correlation analysis; an overview with application to learning methods", "venue": "Neural Computation,", "year": 2004 }, { "authors": [ "Matteo Hessel", "Joseph Modayil", "Hado van Hasselt", "Tom Schaul", "Georg Ostrovski", "Will Dabney", "Dan Horgan", "Bilal Piot", "Mohammad Azar", "David Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": "arxiv,", "year": 2017 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "β-VAE: Learning basic visual concepts with a constrained variational framework. 
ICLR, 2017a", "venue": null, "year": 2017 }, { "authors": [ "Irina Higgins", "Arka Pal", "Andrei Rusu", "Loic Matthey", "Christopher Burgess", "Alexander Pritzel", "Matthew Botvinick", "Charles Blundell", "Alexander Lerchner" ], "title": "DARLA: Improving zero-shot transfer in reinforcement learning", "venue": null, "year": 2017 }, { "authors": [ "Irina Higgins", "David Amos", "David Pfau", "Sebastien Racaniere", "Loic Matthey", "Danilo Rezende", "Alexander Lerchner" ], "title": "Towards a definition of disentangled representations. arXiv, 2018a", "venue": null, "year": 2018 }, { "authors": [ "Irina Higgins", "Nicolas Sonnerat", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Matko Bosnjak", "Murray Shanahan", "Matthew Botvinick", "Demis Hassabis", "Alexander Lerchner" ], "title": "SCAN: Learning hierarchical compositional visual concepts. ICLR, 2018b", "venue": null, "year": 2018 }, { "authors": [ "Frank Hutter", "Holger Hoos", "Kevin Leyton-Brown" ], "title": "Sequential model-based optimization for general algorithm configuration", "venue": "Learning and Intelligent Optimization,", "year": 2011 }, { "authors": [ "Max Jaderberg", "Valentin Dalibard", "Simon Osindero", "Wojciech M. Czarnecki", "Jeff Donahue", "Ali Razavi", "Oriol Vinyals", "Tim Green", "Iain Dunning", "Karen Simonyan", "Chrisantha Fernando", "Koray Kavukcuoglu" ], "title": "Population based training of neural networks. arXiv, 2018", "venue": null, "year": 2018 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": null, "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "ICLR,", "year": 2014 }, { "authors": [ "Nikolaus Kriegeskorte", "Marieke Mur", "Peter Bandettini" ], "title": "Representational similarity analysis – connecting the branches of systems neuroscience", "venue": "Front Syst Neurosci.,", "year": 2008 }, { "authors": [ "Abhishek Kumar", "Prasanna Sattigeri", "Avinash Balakrishnan" ], "title": "Variational inference of disentangled latent concepts from unlabeled observations", "venue": "arxiv,", "year": 2017 }, { "authors": [ "Brenden M. Lake", "Tomer D. Ullman", "Joshua B. Tenenbaum", "Samuel J. Gershman" ], "title": "Building machines that learn and think like people", "venue": "Behavioral and Brain Sciences,", "year": 2016 }, { "authors": [ "Guillaume Lample", "Myle Ott", "Alexis Conneau", "Ludovic Denoyer", "Marc’Aurelio Ranzato" ], "title": "Phrase-based & neural unsupervised machine translation", "venue": "arxiv,", "year": 2018 }, { "authors": [ "Adrien Laversanne-Finot", "Alexandre Péré", "Pierre-Yves Oudeyer" ], "title": "Curiosity driven exploration of learned disentangled goal spaces", "venue": "arxiv,", "year": 2018 }, { "authors": [ "Yixuan Li", "Jason Yosinski", "Jeff Clune", "Hod Lipson", "John Hopcroft" ], "title": "Convergent learning: Do different neural networks learn the same representations? 
ICLR, 2016", "venue": null, "year": 2016 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": null, "year": 2018 }, { "authors": [ "Francesco Locatello", "Gabriele Abbati", "Tom Rainforth", "Stefan Bauer", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "On the fairness of disentangled representations", "venue": "arxiv,", "year": 2019 }, { "authors": [ "Gary Marcus" ], "title": "Deep learning: A critical appraisal", "venue": "arxiv,", "year": 2018 }, { "authors": [ "Emile Mathieu", "Tom Rainforth", "N. Siddharth", "Yee Whye Teh" ], "title": "Disentangling disentanglement in variational autoencoders", "venue": null, "year": 2019 }, { "authors": [ "Loic Matthey", "Irina Higgins", "Demis Hassabis", "Alexander Lerchner" ], "title": "dsprites: Disentanglement testing sprites dataset, 2017", "venue": "URL https://github.com/deepmind/dsprites-dataset/", "year": 2017 }, { "authors": [ "Ari S. Morcos", "Maithra Raghu", "Samy Bengio" ], "title": "Insights on representational similarity in neural networks with canonical correlation", "venue": null, "year": 2018 }, { "authors": [ "Ashvin Nair", "Vitchyr Pong", "Murtaza Dalal", "Shikhar Bahl", "Steven Lin", "Sergey Levine" ], "title": "Visual reinforcement learning with imagined", "venue": "goals. arxiv,", "year": 2018 }, { "authors": [ "Aaron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": "arXiv preprint arXiv:1609.03499,", "year": 2016 }, { "authors": [ "Maithra Raghu", "Justin Gilmer", "Jason Yosinski", "Jascha Sohl-Dickstein" ], "title": "Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability", "venue": null, "year": 2017 }, { "authors": [ "Scott Reed", "Kihyuk Sohn", "Yuting Zhang", "Honglak Lee" ], "title": "Learning to disentangle factors of variation with manifold interaction", "venue": null, "year": 2014 }, { "authors": [ "Danilo J Rezende", "Fabio Viola" ], "title": "Generalized elbo with constrained optimization, geco", "venue": "Workshop on Bayesian Deep Learning,", "year": 2018 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": null, "year": 2014 }, { "authors": [ "Karl Ridgeway", "Michael C Mozer" ], "title": "Learning deep disentangled embeddings with the f-statistic loss", "venue": null, "year": 2018 }, { "authors": [ "Michal Rolinek", "Dominik Zietlow", "Georg Martius" ], "title": "Variational autoencoders pursue pca directions (by accident)", "venue": null, "year": 2019 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Learning factorial codes by predictability minimization", "venue": "Neural Computation,", "year": 1992 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel", "Timothy Lillicrap", "Karen Simonyan", "Demis Hassabis" ], "title": "A general reinforcement learning algorithm that masters chess", "venue": "shogi, and go through self-play. Science,", "year": 2018 }, { "authors": [ "J. Snoek", "H. Larochelle", "R.P. 
Adams" ], "title": "Practical Bayesian Optimization of Machine Learning Algorithms", "venue": null, "year": 2012 }, { "authors": [ "Stefano Soatto" ], "title": "Steps toward a theory of visual information", "venue": "Technical Report UCLA-CSD100028,", "year": 2010 }, { "authors": [ "Xander Steenbrugge", "Sam Leroux", "Tim Verbelen", "Bart Dhoedt" ], "title": "Improving generalization for abstract reasoning tasks using disentangled feature representations", "venue": "arxiv,", "year": 2018 }, { "authors": [ "Raphael Suter", "Dorde Miladinovic", "Stefan Bauer", "Bernhard Scholkopf" ], "title": "Interventional robustness of deep latent variable models", "venue": "arxiv,", "year": 2018 }, { "authors": [ "C. Thornton", "F. Hutter", "H.H. Hoos", "K. Leyton-Brown" ], "title": "Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms", "venue": "arXiv,", "year": 2012 }, { "authors": [ "Sjoerd van Steenkiste", "Francesco Locatello", "Jürgen Schmidhuber", "Olivier Bachem" ], "title": "Are disentangled representations helpful for abstract visual reasoning?", "venue": "arxiv,", "year": 2019 }, { "authors": [ "Liwei Wang", "Lunjia Hu", "Jiayuan Gu", "Yue Wu", "Zhiqiang Hu", "Kun He", "John Hopcroft" ], "title": "Towards understanding learning representations: To what extent do different neural networks learn the same representation", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Nicholas Watters", "Loic Matthey", "Matko Bosnjak", "Christopher P. Burgess", "Alexander Lerchner" ], "title": "COBRA: Data-efficient model-based RL through unsupervised object discovery and curiosity-driven exploration", "venue": "arxiv,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Happy families are all alike; every unhappy family is unhappy in its own way. —
Leo Tolstoy, Anna Karenina
Despite the success of deep learning in recent years (Hu et al., 2018; Espeholt et al., 2018; Silver et al., 2018; Lample et al., 2018; Hessel et al., 2017; Oord et al., 2016), the majority of state of the art approaches are still missing many basic yet important properties, such as fairness, data-efficient learning, strong generalisation beyond the training data distribution, or the ability to transfer knowledge between tasks (Lake et al., 2016; Garnelo et al., 2016; Marcus, 2018). The idea that a good representation can help with such shortcomings is not new, and recently a number of papers have demonstrated that models with disentangled representations show improvements in terms of these shortcomings (Higgins et al., 2017b; 2018b; Achille et al., 2018; Steenbrugge et al., 2018; Nair et al., 2018; Laversanne-Finot et al., 2018; van Steenkiste et al., 2019; Locatello et al., 2019). A common intuitive way to think about disentangled representations is that they should reflect the compositional structure of the world. For example, to describe an object we often use words pertaining to its colour, position, shape and size. We can use different words to describe these properties because they relate to independent factors of variation in our world, i.e. properties which can be compositionally recombined. Hence a disentangled representation of objects should reflect this by factorising into dimensions which correspond to those properties (Bengio et al., 2013; Higgins et al., 2018a).
∗Equal contribution. 1We have released the code for our method as part of disentanglement_lib.
The ability to automatically discover the compositional factors of complex real datasets can be of great importance in many practical applications of machine learning and data science. However, it is important to be able to learn such representations in an unsupervised manner, since most interesting datasets do not have their generative factors fully labelled. For a long time scalable unsupervised disentangled representation learning was impossible, until recently a new class of models based on Variational Autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014) was developed. These approaches (Higgins et al., 2017a; Burgess et al., 2017; Chen et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018) scale reasonably well and are the current state of the art in unsupervised disentangled representation learning. However, so far the benefits of these techniques have not been widely exploited because of two major shortcomings. First, the quality of the achieved disentangling is sensitive to the choice of hyperparameters, yet model selection is currently impossible without having access to the ground truth generative process and/or attribute labels, which are required by all the currently existing disentanglement metrics (Higgins et al., 2017a; Kim & Mnih, 2018; Chen et al., 2018; Eastwood & Williams, 2018; Ridgeway & Mozer, 2018). Second, even if one could apply any of the existing disentanglement metrics for model selection, the scores produced by these metrics can vary a lot even for models with the same hyperparameters and trained on the same data (Locatello et al., 2018). While a lot of this variance is explained by the actual quality of the learnt representations, some of it is introduced by the metrics themselves. 
In particular, all of the existing supervised disentanglement metrics assume a single “canonical” factorisation of the generative factors, any deviation from which is penalised. Such a “canonical” factorisation, however, is not chosen in a principled manner. Indeed, for the majority of datasets, apart from the simplest ones, multiple equally valid disentangled representations may be possible (see Higgins et al. (2018a) for a discussion). For example, the intuitive way that humans reason about colour is in terms of hue and saturation. However, colour may also be represented in RGB, YUV, HSV, HSL, CIELAB. Any of the above representations are as valid as each other, yet only one of them is allowed to be “canonical” by the supervised metrics. Hence, a model that learns to represent colour in a subspace aligned with HSV will be penalised by a supervised metric which assumes that the canonical disentangled representation of colour should be in RGB. This is despite the fact that both representations are equal in terms of preserving the compositional property at the core of what makes disentangled representations useful (Higgins et al., 2018a). Hence, the field finds itself in a predicament. From one point of view, there exists a set of approaches capable of reasonably scalable unsupervised disentangled representation learning. On the other hand, these models are hard to use in practice, because there is no easy way to do a hyperparameter search and model selection.
This paper attempts to bridge this gap. We propose a simple yet effective method for unsupervised model selection for the class of current state-of-the-art VAE-based unsupervised disentangled representation learning methods. Our approach, Unsupervised Disentanglement Ranking (UDR), leverages the recent
theoretical results that explain why variational autoencoders disentangle (Rolinek et al., 2019), to quantify the quality of disentanglement by performing pairwise comparisons between trained model representations. We evaluate the validity of our unsupervised model selection metric against the four best existing supervised alternatives reported in the large scale study by Locatello et al. (2018): the β-VAE metric (Higgins et al., 2017a), the FactorVAE metric (Kim & Mnih, 2018), Mutual Information Gap (MIG) (Chen et al., 2018) and DCI Disentanglement scores (Eastwood & Williams, 2018). We do so for all existing state of the art disentangled representation learning approaches: β-VAE (Higgins et al., 2017a), CCI-VAE (Burgess et al., 2017), FactorVAE (Kim & Mnih, 2018), TC-VAE (Chen et al., 2018) and two versions of DIP-VAE (Kumar et al., 2017). We validate our proposed method on two datasets with fully known generative processes commonly used to evaluate the quality of disentangled representations: dSprites (Matthey et al., 2017) and 3D Shapes (Burgess & Kim, 2018), and show that our unsupervised model selection method is able to match the supervised baselines in terms of guiding a hyperparameter search and picking the most disentangled trained models both quantitatively and qualitatively. We also apply our approach to the 3D Cars dataset (Reed et al., 2014), where the full set of ground truth attribute labels is not available, and confirm through visual inspection that the ranking produced by our method is meaningful (Fig. 1). 
Overall we evaluate 6 different model classes, with 6 separate hyperparameter settings and 50 seeds on 3 separate datasets, totalling 5400 models, and show that our method is both accurate and consistent across models and datasets. Finally, we validate that the model ranking produced by our approach correlates well with the final task performance on two recently reported tasks: a classification fairness task (Locatello et al., 2019) and a model-based reinforcement learning (RL) task (Watters et al., 2019). Indeed, on the former our approach outperformed the reported supervised baseline scores." }, { "heading": "2 OPERATIONAL DEFINITION OF DISENTANGLING", "text": "Given a dataset of observations $X=\{x_1,...,x_N\}$, we assume that there exist a number of plausible generative processes $g_i$ that produce the observations from a small set of corresponding $K_i$ independent generative factors $c_i$. For each choice of $i$, $g_i : c_n \mapsto x_n$, where $p(c_n)=\prod_{j=1}^{K} p(c_n^j)$. For example, a dataset containing images of an object, which can be of a particular shape and colour, and which can be in particular vertical and horizontal positions, may be created by a generative process that assumes a ground truth disentangled factorisation into shape × colour × position, or shape × hue × saturation × position X × position Y. We operationalise a model as having learnt a disentangled representation if it learns to invert one of the generative processes $g_i$ and recover a latent representation $z\in\mathbb{R}^L$, so that it best explains the observed data, $p(z,x)\approx p(c_i,x)$, and factorises the same way as the corresponding data generative factors $c_i$. The choice of the generative process can be determined by the interaction between the model class and the observed data distribution $p(x)$, as discussed next in Sec. 3.1." }, { "heading": "3 VARIATIONAL UNSUPERVISED DISENTANGLING", "text": "The current state of the art approaches to unsupervised disentangled representation learning are based on the Variational Autoencoder (VAE) framework (Rezende et al., 2014; Kingma & Welling, 2014). VAEs estimate a lower bound on the joint distribution of the data and the latent factors $p(x,z)$ by optimising the following objective:
$$\mathcal{L}_{VAE}=\mathbb{E}_{p(x)}\big[\,\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)]-KL(q_\phi(z|x)\,||\,p(z))\,\big] \qquad (1)$$
where, in the usual case, the prior $p(z)$ is chosen to be an isotropic unit Gaussian. In order to encourage disentangling, different approaches decompose the objective in Eq. 1 into various terms and change their relative weighting. In this paper we will consider six state of the art approaches to unsupervised disentangled representation learning that can be grouped into three broad classes based on how they modify the objective in Eq. 1: 1) β-VAE (Higgins et al., 2017a) and CCI-VAE (Burgess et al., 2017) upweight the KL term; 2) FactorVAE (Kim & Mnih, 2018) and TC-VAE (Chen et al., 2018) introduce a total correlation penalty; and 3) two different implementations of DIP-VAE (-I and -II) (Kumar et al., 2017) penalise the deviation of the marginal posterior from a factorised prior (see Sec. A.4.1 in Supplementary Material for details)." },
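For concreteness, the up-weighted objective of the β-VAE family (see also Eq. 9 in Sec. A.4.1) amounts to only a few lines. The sketch below is ours rather than taken from disentanglement_lib; it assumes a Bernoulli decoder that outputs logits and a diagonal Gaussian posterior parameterised by (mu, logvar), and setting beta=1 recovers Eq. 1.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, decoder_logits, mu, logvar, beta=1.0):
    # E_q[log p(x|z)]: Bernoulli log-likelihood, summed over pixels.
    recon = -F.binary_cross_entropy_with_logits(
        decoder_logits, x, reduction="none").flatten(1).sum(-1)
    # KL(q_phi(z|x) || N(0, I)) in closed form, summed over latent dims.
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
    # Negative ELBO (to be minimised), averaged over the batch;
    # beta > 1 up-weights the KL term as in the beta-VAE.
    return (-recon + beta * kl).mean()
```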
{ "heading": "3.1 WHY DO VAES DISENTANGLE?", "text": "In order to understand the reasoning behind our proposed unsupervised disentangled model selection method, it is first important to understand why VAEs disentangle. The objective in Eq. 1 does not in itself encourage disentangling, as discussed in Rolinek et al. (2019) and Locatello et al. (2018). Indeed, any rotationally invariant prior makes disentangled representations learnt in an unsupervised setting unidentifiable when optimising Eq. 1. This theoretical result is not surprising and has been known for a while in the ICA literature (Comon, 1994); what is surprising is that disentangling VAEs appear to work in practice. The question of what makes VAEs disentangle was answered by Rolinek et al. (2019), who were able to show that it is the peculiarities of the VAE implementation choices that allow disentangling to emerge (see also discussion in Burgess et al. (2017); Mathieu et al. (2019)). In particular, the interactions between the reconstruction objective (the first term in Eq. 1) and the enhanced pressure to match a diagonal prior created by the modified objectives of the disentangling VAEs force the decoder to approximate PCA-like behaviour locally around the data samples. Rolinek et al. (2019) demonstrated that during training VAEs often enter the so-called “polarised regime”, where many of the latent dimensions of the posterior are effectively switched off by being reduced to the prior, $q_\phi(z_j)=p(z_j)$ (this behaviour is often further encouraged by the extra disentangling terms added to the ELBO). When trained in such a regime, a linear approximation of the Jacobian of the decoder around a data sample $x_i$, $J_i=\partial \mathrm{Dec}_\theta(\mu_\phi(x_i))/\partial \mu_\phi(x_i)$, is forced to have orthogonal columns, and hence to separate the generative factors based on the amount of reconstruction variance they induce. Given that the transformations induced by different generative factors will typically have different effects on the pixel space (e.g. changing the position of a sprite will typically affect more pixels than changing its size), such local orthogonalisation of the decoder induces an identifiable disentangled latent space for each particular dataset. An equivalent statement is that for a well disentangled VAE, the SVD decomposition $J=U\Sigma V^\top$ of the Jacobian $J$ calculated as above results in a trivial $V$, which is a signed permutation matrix." }, { "heading": "4 UNSUPERVISED DISENTANGLED MODEL SELECTION", "text": "We now describe how the insights from Sec. 3.1 motivate the development of our proposed Unsupervised Disentanglement Ranking (UDR) method. Our method relies on the assumption that for a particular dataset and a VAE-based unsupervised disentangled representation learning model class, disentangled representations are all alike, while every entangled representation is entangled in its own way, to rephrase Tolstoy. We justify this assumption next.
Disentangled representations are similar According to Rolinek et al. (2019), for a given non-adversarial dataset a disentangling VAE will likely keep converging to the same disentangled representation (up to permutation and sign inverse). Note that this representation will correspond to a single plausible disentangled generative process $g_i$ using the notation we introduced in Sec. 2. This is because any two different disentangled representations $z_a$ and $z_b$ learnt by a VAE-based model will only differ in terms of the corresponding signed permutation matrices $V_a$ and $V_b$ of the SVD decompositions of the locally linear approximations of the Jacobians of their decoders.
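The “trivial V” condition can be checked numerically for any trained decoder. Below is a rough sketch under our own assumptions (a differentiable PyTorch decoder, a single posterior mean, and an arbitrary tolerance); it illustrates the test rather than reproducing the analysis of Rolinek et al. (2019).

```python
import numpy as np
import torch

def v_is_signed_permutation(decoder, mu, atol=0.1):
    # Jacobian J = d Dec(mu) / d mu around one data sample.
    J = torch.autograd.functional.jacobian(
        lambda z: decoder(z).flatten(), mu)          # (n_pixels, L)
    V = torch.linalg.svd(J, full_matrices=False).Vh.T.numpy()
    # V is a signed permutation iff every column of |V| has a single
    # entry close to 1 and all remaining entries close to 0.
    absV = np.abs(V)
    col_max = absV.max(axis=0)
    return (np.allclose(col_max, 1.0, atol=atol)
            and np.allclose(absV.sum(axis=0), col_max, atol=atol))
```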
Entangled representations are different Unfortunately the field of machine learning has little theoretical understanding of the nature and learning dynamics of internal representations in neural networks. The few pieces of research that have looked into the nature of model representations (Raghu et al., 2017; Li et al., 2016; Wang et al., 2018; Morcos et al., 2018) have been empirical rather than theoretical in nature. All of them suggest that neural networks tend to converge to different hidden representations despite being trained on the same task with the same hyperparameters and architecture and reaching similar levels of task performance. Furthermore, the theoretical analysis and the empirical demonstrations in Rolinek et al. (2019) suggest that entangled VAEs learn representations that are different at least up to a rotation induced by a non-degenerate matrix $V$ in the SVD decomposition of the local linear approximation of the decoder Jacobian $J_i$.
The justifications presented above rely on the theoretical work of Rolinek et al. (2019), which was empirically verified only for the β-VAE. We have reasons to believe that the theory also holds for the other model classes presented in this paper, apart from DIP-VAE-I. We empirically verify that this is the case in Sec. A.10 in Supplementary Materials. Furthermore, in Sec. 5 we show that our proposed method works well in practice across all model classes, including DIP-VAE-I.
Unsupervised Disentanglement Ranking Our proposed UDR method consists of four steps (illustrated in Fig. 4 in Supplementary Material; a minimal code sketch of the overall loop is given after the list):
1. Train $M=H\times S$ models, where $H$ is the number of different hyperparameter settings, and $S$ is the number of different initial model weight configurations (seeds).
2. For each trained model $i\in\{1,...,M\}$, sample without replacement $P\le S$ other trained models with the same hyperparameters but different seeds.
3. Perform $P$ pairwise comparisons per trained model and calculate the respective $\mathrm{UDR}_{ij}$ scores, where $i\in\{1,...,M\}$ is the model index, and $j\in\{1,...,P\}$ is its unique pairwise match from Step 2.
4. Aggregate the $\mathrm{UDR}_{ij}$ scores for each model $i$ to report the final $\mathrm{UDR}_i=\mathrm{avg}_j(\mathrm{UDR}_{ij})$ scores, where $\mathrm{avg}_j(\cdot)$ is the median over the $P$ scores.
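A minimal sketch of Steps 2-4 follows; the interface is illustrative and not from our codebase, and `pairwise_udr` stands in for the Step-3 score defined in Eq. 2 below.

```python
import numpy as np

def udr_scores(models, seeds_per_setting, n_pairs, pairwise_udr,
               rng=np.random.default_rng(0)):
    scores = {}
    for (hyper, seed), model in models.items():
        # Step 2: sample P other seeds with the same hyperparameters.
        others = [s for s in range(seeds_per_setting) if s != seed]
        matches = rng.choice(others, size=n_pairs, replace=False)
        # Step 3: P pairwise comparisons for this model.
        pair_scores = [pairwise_udr(model, models[(hyper, int(s))])
                       for s in matches]
        # Step 4: aggregate with the median.
        scores[(hyper, seed)] = float(np.median(pair_scores))
    return scores
```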
The key part of the UDR method is Step 3, where we calculate the $\mathrm{UDR}_{ij}$ score that summarises how similar the representations of the two models $i$ and $j$ are. As per the justifications above, two latent representations $z_i$ and $z_j$ should be scored as highly similar if they axis align with each other up to permutation (the same ground truth factor $c_k$ may be encoded by different latent dimensions within the two models, $z_{i,a}$ and $z_{j,b}$ where $a\neq b$), sign inverse (the two models may learn to encode the values of the generative factor in the opposite order compared to each other, $z_{i,a}=-z_{j,b}$), and subsetting (one model may learn a subset of the factors that the other model has learnt if the relevant disentangling hyperparameters encourage a different number of latents to be switched off in the two models). In order for the UDR to be invariant to the first scenario, we propose calculating a full $L\times L$ similarity matrix $R_{ij}$ between the individual dimensions of $z_i\in\mathbb{R}^L$ and $z_j\in\mathbb{R}^L$ (see Fig. 5 in Supplementary Material). In order to address the second point, we take the absolute value of the similarity matrix, $|R_{ij}|$. Finally, to address the third point, we divide the UDR score by the average number of informative latents discovered by the two models. Note that even though disentangling often happens when the VAEs enter the “polarised regime”, where many of the latent dimensions are switched off, the rankings produced by UDR are not affected by whether the model operates in such a regime or not.
To populate the similarity matrix $R_{ij}$ we calculate each matrix element as the similarity between two vectors $z_{i,a}$ and $z_{j,b}$, where $z_{i,a}$ is the response of a single latent dimension $z_a$ of model $i$ over the entire ordered dataset, or over a fixed number of ordered mini-batches if the former is computationally restrictive (see Sec. A.5 in Supplementary Material for details). We considered two versions of the UDR score based on the method used for calculating the vector similarity: the non-parametric UDRS, using Spearman’s correlation; and the parametric UDRL, using Lasso regression following past work on evaluating representations (Eastwood & Williams, 2018; Li et al., 2016). In practice the Lasso regression version worked slightly better, so the remainder of the paper is restricted to UDRL (we use UDRL and UDR interchangeably to refer to this version), while UDRS is discussed in the Supplementary Materials.
Given a similarity matrix $R_{ij}$, we want to find a one-to-one correspondence between all the informative latent dimensions within the chosen pair of models. Hence, we want to see at most a single strong correlation in each row and column of the similarity matrix. To that end, we step through the matrix $R=|R_{ij}|$, one column and row at a time, looking for the strongest correlation, and weighting it by the proportional weight it has within its respective column or row. We then average all such weighted scores over all the informative row and column latents to calculate the final $\mathrm{UDR}_{ij}$ score:
$$\mathrm{UDR}_{ij}=\frac{1}{d_a+d_b}\left[\sum_b \frac{r_b^2\cdot I_{KL}(b)}{\sum_a R(a,b)}+\sum_a \frac{r_a^2\cdot I_{KL}(a)}{\sum_b R(a,b)}\right] \qquad (2)$$
where $r_a=\max_b R(a,b)$ and $r_b=\max_a R(a,b)$. $I_{KL}$ indicates an “informative” latent within a model and $d$ is the number of such latents: $d_a=\sum_a I_{KL}(a)$ and $d_b=\sum_b I_{KL}(b)$. We define a latent dimension as “informative” if it has learnt a latent posterior which diverges from the prior:
$$I_{KL}(a)=\begin{cases}1 & \text{if } KL(q_\phi(z_a|x)\,||\,p(z_a))>0.01\\ 0 & \text{otherwise}\end{cases} \qquad (3)$$
UDR variations We explored whether doing all-to-all pairwise comparisons, with models in Step 2 sampled from the set of all $M$ models rather than the subset of $S$ models with the same hyperparameters, would produce more accurate results. Additionally, we investigated the effect of choosing different numbers of models $P$ for pairwise comparisons by sampling $P\sim U[5,45]$.
UDR assumptions and limitations Note that our approach is aimed at the current state of the art disentangling VAEs, for which the assumptions of our metric have been demonstrated to hold (Rolinek et al., 2019). It may be applied to other model classes, however the following assumptions and limitations need to be considered:
1. Disentangled representations produced by two models from the same class trained on the same dataset are likely to be more similar than entangled representations – this holds for disentangling VAEs (Rolinek et al., 2019), but may not hold more broadly.
2. Continuous, monotonic and scalar factors – UDR assumes that these properties hold for the data generative factors and their representations. This is true for the disentangling approaches described in Sec. 3, but may not hold more generally. It is likely that UDR can be adapted to work with other kinds of generative factors (e.g.
factors with special or no geometry) by exchanging the similarity calculations in Step 3 with an appropriate measure; however, we leave this for future work.
3. Herd effect – since UDR detects disentangled representations through pairwise comparisons, the score it assigns to each individual model will depend on the nature of the other models involved in these comparisons. This means that UDR is unable to detect a single disentangled model within a hyperparameter sweep. It also means that when models are only compared within a single hyperparameter setting, individual model scores may be over- or under-estimated as they tend to be drawn towards the mean of the scores of the other models within a hyperparameter group. Thus, it is preferable to perform the UDR-A2A during model selection and UDR during hyperparameter selection.
4. Explicitness bias – UDR does not penalise models that learn a subset of the data generative factors. In fact, such models often score higher than those that learn the full set of generative factors, because the current state of the art disentangling approaches tend to trade off the number of discovered factors for cleaner disentangling. As discussed in Sec. 2, we provide the practitioner with the ability to choose the most disentangled model per number of factors discovered by approximating this with the $d$ score in Eq. 2.
5. Computational cost – UDR requires training a number of seeds per hyperparameter setting and $M\times P$ pairwise comparisons per hyperparameter search, which may be computationally expensive. That said, training multiple seeds per hyperparameter setting is good research practice that produces more robust results, and UDR computations are highly parallelisable.
To summarise, UDR relies on a number of assumptions and has certain limitations that we hope to relax in future work. However, it offers improvements over the existing supervised metrics. Apart from being the only method that does not rely on supervised attribute labels, its scores are often more representative of the true disentanglement quality (e.g. see Fig. 3 and Fig. 9 in Supplementary Materials), and it does not assume a single “canonical” disentangled factorisation per dataset. Hence, we believe that UDR can be a useful method for unlocking the power of unsupervised disentangled representation learning for real-life practical applications, at least in the near future." }, { "heading": "5 EXPERIMENTS", "text": "Our hope was to develop a method for unsupervised disentangled model selection with the following properties: it should 1) help with hyperparameter tuning by producing an aggregate score that can be used to guide evolutionary or Bayesian methods (Jaderberg et al., 2018; Snoek et al., 2012; Thornton et al., 2012; Bergstra et al., 2011; Hutter et al., 2011; Miikkulainen et al., 2017); 2) rank individual trained models based on their disentanglement quality; 3) correlate well with final task performance. In this section we evaluate our proposed UDR against these qualities. For the reported experiments we use the trained model checkpoints and supervised scores from Locatello et al. (2018) to evaluate β-VAE, CCI-VAE, FactorVAE, TC-VAE, DIP-VAE-I and DIP-VAE-II on two benchmark datasets: dSprites (Matthey et al., 2017) and 3D Shapes (Burgess & Kim, 2018) (see Sec. A.3 for details). Each model is trained with $H=6$ different hyperparameter settings (detailed in Sec.
A.4.1 in Supplementary Material), with S=50 seeds per setting, and P =50 pairwise comparisons.\nUDR correlates well with the supervised metrics. To validate UDR, we calculate Spearman’s correlation between its model ranking and that produced by four existing supervised disentanglement metrics found to be the most meaningful in the large scale comparison study by Locatello et al. (2018): the original β-VAE metric (Higgins et al., 2017a), FactorVAE metric (Kim & Mnih, 2018), Mutual Information Gap (MIG) (Chen et al., 2018) and DCI Disentanglement (Eastwood & Williams, 2018) (see Sec. A.6 in Supplementary Material for metric details). The average correlation for UDR\nis 0.54± 0.06 and for UDR-A2A is 0.60± 0.11. This is comparable to the average Spearman’s correlation between the model rankings produced by the different supervised metrics: 0.67± 0.2. The variance in rankings produced by the different metrics is explained by the fact that the metrics capture different aspects of disentangling (see Sec. A.2 in Supplementary Materials for a discussion of how UDR relates to other representation comparison methods). Tbl. 1 provides a breakdown of correlation scores between MIG and the different versions of UDR for different model classes and datasets. It is clear that the different versions of UDR perform similarly to each other, and this holds across datasets and model classes. Note that unlike the supervised metrics, UDR does not assume a “canonical” disentangled representation. Instead, it allows any one of the many equivalent possible ground truth generative processes to become the “canonical” one for each particular dataset and model class, as per the theoretical results by Rolinek et al. (2019) summarised in Sec. 3.1.\nUDR is useful for hyperparameter selection. Fig. 2 compares the scores produced by UDR and the four supervised metrics for 3600 trained models, split over six model classes, two datasets and six hyperparameter settings. We consider the median score profiles across the six hyperparameter settings to evaluate whether a particular setting is better than others. It can be seen that UDR broadly agrees with the supervised metrics on which hyperparameters are more promising for disentangling. This holds across datasets and model classes. Hence, UDR may be useful for evaluating model fitness for disentangled representation learning as part of an evolutionary algorithm or Bayesian hyperparameter tuning.\nUDR is useful for model selection. Fig. 2 can also be used to examine whether a particular trained model has learnt a good disentangled representation. We see that some models reach high UDR scores. For example, more models score highly as the value of the β hyperparameter is increased in the β-VAE model class. This is in line with the previously reported results (Higgins et al., 2017a). Note that the 0th hyperparameter setting in this case corresponds to β=1, which is equivalent to the standard VAE objective (Kingma & Welling, 2014; Rezende et al., 2014). As expected, these models score low in terms of disentangling. We also see that for some model classes (e.g. DIP-VAE-I, DIP-VAE-II and FactorVAE on dSprites) no trained model scores highly according to UDR. This suggests that none of the hyperparameter choices explored were good for this particular dataset, and that no instance of the model class learnt to disentangle well. To test this, we use latent traversals to qualitatively evaluate the level of disentanglement achieved by the models, ranked by their UDR scores. 
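In code, a traversal is simply a decode of a sweep over one latent dimension while the others are held at a seed image's posterior mean. A short sketch (the decoder interface, the sigmoid output and the [-2, 2] sweep range are our assumptions):

```python
import numpy as np
import torch

def latent_traversal(decoder, mu, dim, values=np.linspace(-2.0, 2.0, 7)):
    frames = []
    for v in values:
        z = mu.clone()
        z[dim] = float(v)            # vary a single latent dimension
        with torch.no_grad():
            frames.append(torch.sigmoid(decoder(z.unsqueeze(0)))[0])
    return frames                    # decoded images for visual inspection
```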
This is a common technique to qualitatively evaluate the level of disentanglement on simple visual datasets where no ground truth attribute labels are available. Such traversals involve changing the value of one latent dimension at a time and evaluating its effect on the resulting reconstructions to understand whether the latent has learnt to represent anything semantically meaningful. Fig. 3 demonstrates that the qualitative disentanglement quality is reflected well in the UDR scores. The figure also highlights that the supervised metric scores can sometimes be overoptimistic. For example, compare TC-VAE and β-VAE traversals in Fig. 3. These are scored similarly by the supervised metric (0.774 and 0.751) but differently by UDR (0.444 and 0.607). Qualitative evaluation of the traversals clearly shows that β-VAE learnt a more disentangled representation than TC-VAE, which is captured by UDR but not by the supervised metric. Fig. 9 in Supplementary Material provides more examples. We also evaluated how well UDR ranks models trained on more complex datasets, CelebA and ImageNet, and found that it performs well (see Sec. A.9 in Supplementary Materials).
UDR works well even with five pairwise comparisons. We test the effect of the number of pairwise comparisons $P$ on the variance and accuracy of the UDR scores. Tbl. 2 reports the changes in the rank correlation with the β-VAE metric on the dSprites dataset as $P$ is varied between 5 and 45. We see that the correlation between the UDR and the β-VAE metric becomes higher and the variance decreases as the number of seeds is increased. However, even with $P=5$ the correlation is reasonable.
UDR generalises to a dataset with no attribute labels. We investigate whether UDR can be useful for selecting well disentangled models trained on the 3D Cars (Reed et al., 2014) dataset with poorly labelled attributes, which makes it a bad fit for supervised disentanglement metrics. Fig. 1 shows that a highly ranked model according to UDR appears disentangled, while a poorly ranked one appears entangled. Fig. 10 in Supplementary Material provides more examples of high and low scoring models according to the UDR method.
UDR predicts final task performance. We developed UDR to help practitioners use disentangled representations to better solve subsequent tasks. Hence, we evaluate whether the model ranking produced by UDR correlates with task performance on two different domains: fairness on a classification task introduced by Locatello et al. (2019), and data efficiency on a clustering task for a model-based reinforcement learning agent introduced by Watters et al. (2019) (see Sec. A.8 in Supplementary Materials for more details). We found that UDR had an average of 0.8 Spearman correlation with the fairness scores, which is higher than the average of 0.72 correlation between fairness and supervised scores reported by Locatello et al. (2019). We also found that UDR scores had 0.56 Spearman correlation with data efficiency of the COBRA agent. The difference between the best and the worst models according to UDR amounted to around 66% reduction in the number of steps to 90% success rate on the task." }, { "heading": "6 CONCLUSION", "text": "We have introduced UDR, the first method for unsupervised model selection for variational disentangled representation learning. We have validated our approach on 5400 models covering all six state of the art VAE-based unsupervised disentangled representation learning model classes. 
We compared UDR to four existing supervised disentanglement metrics both quantitatively and qualitatively, and demonstrated that our approach works reliably well across three different datasets, often ranking models more accurately than the supervised alternatives. Moreover, UDR avoids one of the big shortcomings of the supervised disentangling metrics – the arbitrary choice of a “canonical” disentangled factorisation, instead allowing any of the equally valid disentangled generative processes to be accepted. Finally, we also demonstrated that UDR is useful for predicting final task performance using two different domains. Hence, we hope that UDR can be a step towards unlocking the power of unsupervised disentangled representation learning to real-life applications." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We thank Olivier Bachem and Francesco Locatello for helping us re-use their code and model checkpoints, and Neil Rabinowitz, Avraham Ruderman and Tatjana Chavdarova for useful feedback." }, { "heading": "A SUPPLEMENTARY MATERIAL", "text": "" }, { "heading": "A.1 USEFUL PROPERTIES OF DISENTANGLED REPRESENTATIONS", "text": "Disentangled representations are particularly useful because they re-represent the information contained in the data in a way that enables semantically meaningful compositionality. For example, having discovered that the data is generated using two factors, colour and shape, such a model would be able to support meaningful reasoning about fictitious objects, like pink elephants, despite having never seen one during training (Higgins et al., 2017b; 2018b). This opens up opportunities for counterfactual reasoning, more robust and interpretable inference and model-based planning (Higgins et al., 2018a; Suter et al., 2018). Furthermore, such a representation would support more data efficient learning for subsequent tasks, like a classification objective for differentiating elephants from cats. This could be achieved by ignoring the nuisance variables irrelevant for the task, e.g. the colour variations, by simply masking out the disentangled subspaces that learnt to represent such nuisances, while only paying attention to the task-relevant subspaces, e.g. the units that learnt to represent shape (Cohen & Welling, 2016; Gens & Domingos, 2014; Soatto, 2010). Hence, the semantically meaningful compositional nature of disentangled representations is perhaps the most sought after aspect of disentangling, due to its strong implications for generalisation, data efficiency and interpretability (Schmidhuber, 1992; Bengio et al., 2013; Higgins et al., 2018a)." }, { "heading": "A.2 ASPECTS OF DISENTANGLEMENT MEASURED BY DIFFERENT METRICS", "text": "Methods for evaluating and comparing representations have been proposed in the past. The most similar approaches to ours are the DCI Disentanglement score from Eastwood & Williams (2018) and the axis alignment comparison of representations in trained classifiers proposed in Li et al. (2016). The former is not directly applicable for unsupervised disentangled model selection, since it requires access to the ground truth attribute labels. Even when adapted to compare two latent representations, our preliminary experiments suggested that the entropy based aggregation proposed in Eastwood & Williams (2018) is inferior to our aggregation in Eq. 2 when used in the UDR setup. The approach by Li et al. 
(2016) shares the similarity matrix calculation step with us, however they never go beyond that quantitatively, opting for qualitative evaluations of model representations instead. Hence, their approach is not directly applicable to quantitative unsupervised disentangled model ranking.\nOther related approaches worth mentioning are the Canonical Correlation Analysis (CCA) and its modifications (Hardoon et al., 2004; Raghu et al., 2017; Morcos et al., 2018). These approaches, however, tend to be invariant to invertible affine transformations and therefore to the axis alignment of individual neurons, which makes them unsuitable for evaluating disentangling quality. Finally, Representation Similarity Matrix (RSM) (Kriegeskorte et al., 2008) is a commonly used method in Neuroscience for comparing the representations of different systems to the same set of stimuli. This technique, however, is not applicable for measuring disentangling, because it ignores dimension-wise response properties.\nWhen talking about disentangled representations, three properties are generally considered: modularity, compactness and explicitness2 (Ridgeway & Mozer, 2018). Modularity measures whether each latent dimension encodes only one data generative factor, compactness measures whether each data generative factor is encoded by a single latent dimension, and explicitness measures whether all the information about the data generative factors can be decoded from the latent representation. We believe that modularity is the key aspect of disentangling, since it measures whether the representation is compositional, which gives disentangled representations the majority of their beneficial properties (see Sec. A.1 in Supplementary Materials for more details). Compactness, on the other hand, may not always be desirable. For example, according to a recent definition of disentangled representations (Higgins et al., 2018a), it is theoretically impossible to represent 3D rotation in a single dimension (see also Ridgeway & Mozer (2018)). Finally, while explicitness is clearly desirable for preserving information about the data that may be useful for subsequent tasks, in practice models often fail to discover and represent the full set of the data generative factors due to restrictions on both the observed data distribution and the model capacity (Mathieu et al., 2019). Hence, we suggest noting the explicitness of a representation, but not necessarily punishing its disentanglement ranking if it is not fully explicit. Instead, we suggest that the practitioner should have the choice to select the most disentangled model given a particular number of discovered generative factors. Hence, in the rest of the paper we will use the term “disentanglement” to refer to the compositional property of a representation related to the modularity measure. Tbl. 3 provides a summary of how the different metrics considered in the paper compare in terms of modularity, compactness and explicitness." }, { "heading": "A.3 DATASET DETAILS", "text": "dSprites A commonly used unit test for evaluating disentangling is the dSprites dataset (Matthey et al., 2017). 
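dSprites ships as a single .npz archive in the official repository (https://github.com/deepmind/dsprites-dataset); the filename and array keys below match that release at the time of writing, but should be treated as assumptions of this sketch:

```python
import numpy as np

data = np.load("dsprites_ndarray_co1sh3sc6or40x32y32_64x64.npz",
               allow_pickle=True, encoding="latin1")
imgs = data["imgs"]                # (737280, 64, 64) binary images
factors = data["latents_values"]   # ground truth generative factors
print(imgs.shape, factors.shape)
```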
The dataset consists of images of a single binary sprite pasted on a blank background and can be fully described by five generative factors: shape (3 values), position x (32 values), position y (32 values), size (6 values) and rotation (40 values). All the generative factors are sampled from a uniform distribution. Rotation is sampled from the full 360 degree range. The generative process for this dataset is fully deterministic, resulting in 737,280 total images produced from the Cartesian product of the generative factors.
2Similar properties have also been referred to as disentanglement, completeness and informativeness respectively in the independent yet concurrent paper by Eastwood & Williams (2018).
3D Shapes A more complex dataset for evaluating disentangling is the 3D Shapes dataset (Burgess & Kim, 2018). This dataset consists of images of a single 3D object in a room and is fully specified by six generative factors: floor colour (10 values), wall colour (10 values), object colour (10 values), size (8 values), shape (4 values) and rotation (15 values). All the generative factors are sampled from a uniform distribution. Colours are sampled from the circular hue space. Rotation is sampled from the [-30, 30] degree angle range.
3D Cars This dataset was adapted from Reed et al. (2014). The full data generative process for this dataset is unknown. The labelled factors include 199 car models and 24 rotations sampled from the full 360 degree out-of-plane rotation range. An example of an unlabelled generative factor is the colour of the car – this varies across the dataset." }, { "heading": "A.4 UNSUPERVISED DISENTANGLED REPRESENTATION LEARNING MODELS", "text": "As mentioned in Sec. 3, current state of the art approaches to unsupervised disentangled representation learning are based on the VAE (Kingma & Welling, 2014; Rezende et al., 2014) objective presented in Eq. 1. These approaches decompose the objective in Eq. 1 into various terms and change their relative weighting, exploiting the trade-off between the capacity of the latent information bottleneck with independent sources of noise and the quality of the resulting reconstruction, in order to learn a disentangled representation. The first such modification was introduced by Higgins et al. (2017a) in their β-VAE framework:
$$\mathbb{E}_{p(x)}\big[\,\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)]-\beta\, KL(q_\phi(z|x)\,||\,p(z))\,\big] \qquad (4)$$
In order to achieve disentangling in β-VAE, the KL term in Eq. 4 is typically up-weighted by setting $\beta>1$. This implicitly reduces the latent bottleneck capacity and, through the interaction with the reconstruction term, encourages the generative factors $c_k$ with different reconstruction profiles to be encoded by different independent noisy channels $z_l$ in the latent bottleneck. Building on the β-VAE ideas, CCI-VAE (Burgess et al., 2017) suggested slowly increasing the bottleneck capacity during training, thus improving the final disentanglement and reconstruction quality:
$$\mathbb{E}_{p(x)}\big[\,\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)]-\gamma\,|KL(q_\phi(z|x)\,||\,p(z))-C|\,\big] \qquad (5)$$
Later approaches (Kim & Mnih, 2018; Chen et al., 2018; Kumar et al., 2017) showed that the KL term in Eq. 1 can be further decomposed according to:
$$\mathbb{E}_{p(x)}\big[KL(q_\phi(z|x)\,||\,p(z))\big]=I(x;z)+KL(q_\phi(z)\,||\,p(z)) \qquad (6)$$
Hence, penalising the full KL term as in Eqs. 4-5 is not optimal, since it unnecessarily penalises the mutual information between the latents and the data. To remove this undesirable side effect, different authors suggested instead adding more targeted penalty terms to the VAE objective function. 
These include different implementations of the total correlation penalty (FactorVAE by Kim & Mnih (2018) and TC-VAE by Chen et al. (2018)):
$$\mathcal{L}_{VAE}-\gamma\, KL\Big(q_\phi(z)\,\Big|\Big|\,\prod_{j=1}^{M} q_\phi(z_j)\Big) \qquad (7)$$
and different implementations of the penalty that pushes the marginal posterior towards a factorised prior (DIP-VAE by Kumar et al. (2017)):
$$\mathcal{L}_{VAE}-\gamma\, KL(q_\phi(z)\,||\,p(z)) \qquad (8)$$" }, { "heading": "A.4.1 MODEL IMPLEMENTATION DETAILS", "text": "We re-used the trained checkpoints from Locatello et al. (2018), hence we recommend that readers check the original paper for model implementation details. Briefly, the following architecture and optimiser were used.
For consistency, all the models were trained using the same architecture, optimiser, and hyperparameters. All of the methods use a deep neural network to encode and decode the latent embedding, and the parameters of the latent factors are predicted using a Gaussian encoder whose architecture is specified in Table 4. All of the models predict a latent vector with 10 factors. Each model was also trained with 6 different levels of regularisation strength specified in Table 5. The ranges of the hyperparameters used for the various levels of regularisation were chosen to show a diversity of performance on different datasets without relying on pre-existing intuition about good hyperparameters; however, the ranges were based on hyperparameters used previously in the literature. For each of the model classes outlined above, we tried 6 hyperparameter values with 50 seeds each.
β-VAE The β-VAE (Higgins et al., 2017a) model is similar to the vanilla VAE model but with an additional hyperparameter β to modify the strength of the KL regulariser:
$$\mathbb{E}_{p(x)}\big[\,\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)]-\beta\, KL(q_\phi(z|x)\,||\,p(z))\,\big] \qquad (9)$$
where a β value of 1 corresponds to the vanilla VAE model. Increasing β enforces a stronger prior on the latent distribution and encourages the representation to be independent.
CCI-VAE The CCI-VAE model (Burgess et al., 2017) is a variant of the β-VAE where the KL divergence is encouraged to match a controlled value C which is increased gradually throughout training. This yields the objective function for CCI-VAE:
$$\mathbb{E}_{p(x)}\big[\,\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)]-\beta\,|KL(q_\phi(z|x)\,||\,p(z))-C|\,\big] \qquad (10)$$
FactorVAE FactorVAE (Kim & Mnih, 2018) specifically penalises the dependencies between the latent dimensions such that the “Total Correlation” term is targeted, yielding a modified version of the β-VAE objective:
$$\mathbb{E}_{p(x)}\big[\,\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)]-KL(q_\phi(z|x)\,||\,p(z))\,\big]-\beta\, KL\Big(q(z)\,\Big|\Big|\,\prod_{j} q(z_j)\Big) \qquad (11)$$
The “Total Correlation” term is intractable in this case, so for FactorVAE samples are used from both $q(z|x)$ and $q(z)$, together with the density-ratio trick, to compute an estimate of the “Total Correlation” term. FactorVAE uses an additional discriminator network to approximate the density ratio in the KL divergence. The implementation details for the discriminator network and its hyperparameters can be found in Table 5(b) and 5(c).
TC-VAE The TC-VAE model (Chen et al., 2018) was developed independently from FactorVAE and has a similar KL regulariser which contains a “Total Correlation” term. In the case of TC-VAE the “Total Correlation” term is estimated using a biased Monte-Carlo estimate.
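To make the density-ratio trick concrete, here is a sketch of the two FactorVAE ingredients under our own naming; the discriminator is assumed to output the probability that a sample comes from q(z) rather than from the permuted, factorised distribution.

```python
import torch

def permute_dims(z):
    # Build 'factorised' samples for training the discriminator by
    # permuting each latent dimension independently across the batch.
    return torch.stack([col[torch.randperm(z.size(0))]
                        for col in z.unbind(dim=1)], dim=1)

def tc_estimate(z, discriminator):
    # TC(z) ~= E[log d - log(1 - d)] with d = D(z): the density-ratio
    # estimate of the Total Correlation penalty in Eqs. 7 and 11.
    d = discriminator(z).clamp(1e-6, 1 - 1e-6)
    return (torch.log(d) - torch.log1p(-d)).mean()
```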
DIP-VAE The DIP-VAE model also adds regularisation to the aggregated posterior, but instead an additional loss term is added to encourage it to match the factorised prior. Since the KL divergence is intractable, other measures of divergence are used instead. $\mathrm{Cov}_{p(x)}[\mu_\phi(x)]$ can be used, yielding the DIP-VAE-I objective:
$$\mathbb{E}_{p(x)}\big[\,\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)]-KL(q_\phi(z|x)\,||\,p(z))\,\big]-\lambda_{od}\sum_{i\neq j}\big[\mathrm{Cov}_{p(x)}[\mu_\phi(x)]\big]_{ij}^2-\lambda_d\sum_{i}\big(\big[\mathrm{Cov}_{p(x)}[\mu_\phi(x)]\big]_{ii}-1\big)^2 \qquad (12)$$
or $\mathrm{Cov}_{q_\phi}[z]$ is used instead, yielding the DIP-VAE-II objective:
$$\mathbb{E}_{p(x)}\big[\,\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)]-KL(q_\phi(z|x)\,||\,p(z))\,\big]-\lambda_{od}\sum_{i\neq j}\big[\mathrm{Cov}_{q_\phi}[z]\big]_{ij}^2-\lambda_d\sum_{i}\big(\big[\mathrm{Cov}_{q_\phi}[z]\big]_{ii}-1\big)^2 \qquad (13)$$" }, { "heading": "A.5 UDR IMPLEMENTATION DETAILS", "text": "Similarity matrix To compute the similarity matrix $R_{ij}$ we follow the approach of Li et al. (2016) and Morcos et al. (2018). For a given dataset $X=\{x_1,x_2,...,x_N\}$ and a neuron $a\in\{1,...,L\}$ of model $i$ (denoted as $z_{i,a}$), we define $z_{i,a}$ to be the vector of mean inferred posteriors $q_i(z_i|x_i)$ across the full dataset: $z_{i,a}=(z_{i,a}(x_1),...,z_{i,a}(x_N))\in\mathbb{R}^N$. Note that this is different from the often considered notion of a “latent representation vector”. Here $z_{i,a}$ is the response of a single latent dimension over the entire dataset, not an entire latent response for a single input. We then calculate the similarity between each two such vectors $z_{i,a}$ and $z_{j,b}$ using either Lasso regression or Spearman’s correlation.
Lasso regression (UDRL) We trained $L$ lasso regressors to predict each of the latent responses $z_{i,a}$ from $z_j$ using the dataset of latent encodings $Z_{i,a}=\{(z_{j,1},z_{i,a,1}),...,(z_{j,N},z_{i,a,N})\}$. Each row $R_{ij}(a)$ is then filled in using the weights of the trained Lasso regressor for $z_{i,a}$. The lasso regressors were trained using the default Scikit-learn multi-task lasso objective $\min_W \frac{1}{2n_{samples}}||XW-Y||^2_{Fro}+\lambda||W||_{21}$, where $Fro$ denotes the Frobenius norm, $||A||_{Fro}=\sqrt{\sum_{ij}a_{ij}^2}$, and the $\ell_1\ell_2$ penalty is computed as $||A||_{21}=\sum_i\sqrt{\sum_j a_{ij}^2}$. $\lambda$ is chosen using cross validation and the lasso is trained until convergence, stopping when either 1000 iterations have been run or the updates fall below a tolerance of 0.0001. Lasso regressors were trained on a dataset of 10000 latents from each model and training was performed using coordinate descent over the entire dataset. $R_{ij}$ is then computed by extracting the weights of the trained lasso regressors and computing their absolute value (Eastwood & Williams, 2018). It is important that the representations are normalised per-latent such that the relative importances computed per latent are scaled to reflect their contribution to the output. Normalising our latents also ensures that the computed weights roughly lie in the interval $[-1,1]$.
Spearman’s based similarity matrix (UDRS) We calculate each entry in the similarity matrix according to $R_{ij}(a,b)=\mathrm{Corr}(z_{i,a},z_{j,b})$, where Corr stands for Spearman’s correlation. We use Spearman’s correlation to measure the similarity between $z_{i,a}$ and $z_{j,b}$ because we do not want to necessarily assume a linear relationship between the two latent encodings, since the geometry of the representational space is not crucial for measuring whether a representation is disentangled (see Sec. 2), but we do hope to find a monotonic dependence between them. Spearman correlation coefficients were computed by extracting 1000 samples from each model and computing the Spearman correlation over all the samples on a per-latent basis.
All-to-all calculations To make all-to-all comparisons, we picked 10 random seeds per hyperparameter setting and limited all the calculations to those models. Hence we made at most $\binom{60}{2}$ pairwise model comparisons when calculating UDR-A2A.
Informative latent thresholding Uninformative latents typically have KL≪0.01 while informative latents have KL≫0.01, so the KL=0.01 threshold in Eq.
3 is somewhat arbitrarily chosen to pick out the informative latents z.\nSample reduction experiments We randomly sampled without replacement 20 different sets of P models for pairwise comparison from the original set of 50 models with the same hyperparameter setting for UDR or 60 models with different seeds and hyperparameters for UDR-A2A." }, { "heading": "A.6 SUPERVISED METRIC IMPLEMENTATION DETAILS", "text": "Original β-VAE metric. First proposed in Higgins et al. (2017a), this metric suggests sampling two batches of observations x where in both batches the same single data generative factor is fixed to a particular value, while the other factors are sampled randomly from the underlying distribution. These two batches are encoded into the corresponding latent representations qφ(z|x) and the pairwise differences between the corresponding mean latent values from the two batches are taken. Disentanglement is measured as the ability of a linear classifier to predict the index of the data generative factor that was fixed when generating x.\nWe compute the β-VAE score by first randomly picking a single factor of variation and fixing the value of that factor to a randomly sampled value. We then generate two batches of 64 where all the other factors are sampled\nrandomly and take the mean of the differences between the latent mean responses in the two batches to generate a training point. This process is repeated 10000 times to generate a training set by using the fixed factor of variation as the label. We then train a logistic regression on the data using Scikit-learn and report the evaluation accuracy on a test set of 5000 as the disentanglement score.\nFactorVAE metric. Kim & Mnih (2018) proposed a modification on the β-VAE metric which made the classifier non-parametric (majority vote based on the index of the latent dimension with the least variance after the pairwise difference step). This made the FactorVAE metric more robust, since the classifier did not need to be optimised. Furthermore, the FactorVAE metric is more accurate than the β-VAE one, since the β-VAE metric often over-estimates the level of disentanglement by reporting 100% disentanglement even when onlyK−1 factors were disentangled.\nThe Factor VAE score is computed similarly to the β-VAE metric but with a few modifications. First we draw a set of 10000 random samples from the dataset and we estimate the variance of the mean latent responses in the model. Latents with a variance of less than 0.05 are discarded. Then batches of 64 samples are generated by a random set of generative factors with a single fixed generative factor. The variances of all the latent responses over the 64 samples are computed and divided by the latent variance computed in the first step. The variances are averaged to generate a single training point using the fixed factor of variation as the label. 10000 such training points are generated as the training set. A majority vote classifier is trained to pick out the fixed generative factor and the evaluation accuracy is computed on test set of 5000 and reported as the disentanglement score.\nMutual Information Gap (MIG). The MIG metric proposed in Chen et al. (2018) proposes estimating the mutual information (MI) between each data generative factor and each latent dimension. For each factor, they consider two latent dimensions with the highest MI scores. 
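A sketch of this computation follows; the binning strategy, sklearn's discrete MI estimator and the interface are our own choices, and the exact settings we used are spelled out below.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mig(mus, factors, n_bins=20):
    # mus: (n_samples, n_latents); factors: (n_samples, n_factors) ints.
    # Discretise each mean latent into n_bins bins.
    binned = np.stack([np.digitize(m, np.histogram(m, n_bins)[1][:-1])
                       for m in mus.T])                 # (L, N)
    gaps = []
    for k in range(factors.shape[1]):
        v = factors[:, k]
        mi = np.array([mutual_info_score(v, zb) for zb in binned])
        top2 = np.sort(mi)[-2:]                         # two best latents
        h = max(mutual_info_score(v, v), 1e-11)         # entropy H(v_k)
        gaps.append((top2[1] - top2[0]) / h)            # normalised gap
    return float(np.mean(gaps))
```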
It is assumed that in a disentangled representation only one latent dimension will have high MI with a single data generative factor, and hence the difference between these two MI scores will be large. Hence, the MIG score is calculated as the average normalised difference between such pairs of MI scores per data generative factor. Chen et al. (2018) suggest that the MIG score is more general and unbiased than the β-VAE and FactorVAE metrics.
We compute the Mutual Information Gap by discretising the mean representation of 10000 samples into 20 bins. The disentanglement score is then derived by computing, per generative factor, the difference between the top two latents with the greatest mutual information with the generative factor, and taking the mean:
$$\mathrm{MIG}=\frac{1}{K}\sum_{k=1}^{K}\frac{1}{H_{v_k}}\Big(I_n(z_{j(k)};v_k)-\max_{j\neq j(k)}I_n(z_j;v_k)\Big)$$
where $K$ is the number of generative factors, $v_k$ is a single generative factor, $z_j$ is the mean representation, and $j(k)=\mathrm{argmax}_j I_n(z_j;v_k)$ is the latent representation with the greatest mutual information with the generative factor. $H_{v_k}$ is the computed entropy of the generative factor.
DCI Disentanglement. This is the disentanglement part of the three-part metric proposed by Eastwood & Williams (2018). The DCI disentanglement metric is somewhat similar to our unsupervised metric, whereby the authors train a random forest classifier to predict the ground truth factors from the corresponding latent encodings $q(z|x)$. They then use the resulting $M\times N$ matrix of feature importance weights to calculate the difference between the entropy of the probability that a latent dimension is important for predicting a particular ground truth factor, weighted by the relative importance of each dimension.
The DCI disentanglement metric is an implementation of the disentanglement metric as described in Eastwood & Williams (2018) using a gradient boosted tree. It was computed by first extracting the relative importance of each latent mean representation as a predictor for each generative factor, by training a gradient boosted tree using the default Scikit-learn model on 10000 training and 1000 test points and extracting the importance weights. The weights are summarised into an importance matrix with one entry $R_{ik}$ per latent $i$ and generative factor $k$. The disentanglement score for each latent is computed as $D_i=1-H_K(P_i)$, where $H_K(P_i)=-\sum_{k=0}^{K-1}P_{ik}\log_K P_{ik}$ denotes the entropy and $P_{ik}=R_{ik}/\sum_{k=0}^{K-1}R_{ik}$ is the probability of latent $i$ being important for predicting factor $k$. The weighted mean of the scores is then computed using the relative predictive importance of each latent as the weight: $D=\sum_i p_i\cdot D_i$, where $p_i=\sum_k R_{ik}/\sum_{ik}R_{ik}$." },
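A compact sketch of that procedure (sklearn defaults, as in the text; the function shape is ours):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def dci_disentanglement(mus, factors):
    # R[i, k]: importance of latent i for predicting factor k.
    L, K = mus.shape[1], factors.shape[1]
    R = np.zeros((L, K))
    for k in range(K):
        clf = GradientBoostingClassifier().fit(mus, factors[:, k])
        R[:, k] = np.abs(clf.feature_importances_)
    P = R / np.maximum(R.sum(axis=1, keepdims=True), 1e-11)
    H = -(P * np.log(P + 1e-11) / np.log(K)).sum(axis=1)   # log base K
    D_per_latent = 1.0 - H
    p = R.sum(axis=1) / R.sum()        # relative latent importance
    return float((p * D_per_latent).sum())
```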
{ "heading": "A.7 ADDITIONAL RESULTS", "text": "We evaluated four UDR versions, which differed in terms of whether Spearman- or Lasso-based similarity matrices $R_{ij}$ were used (subscripts S and L respectively), and whether the models for pairwise similarity comparison are picked from the pool of different seeds trained with the same hyperparameters or from the pool of all models (the latter indicated by the A2A suffix). The A2A correlations in Tbl. 7 are on average slightly higher; however, these scores are more computationally expensive to compute due to the higher number of total pairwise similarity calculations. For that reason, the scores presented in the table are calculated using only 20% of all the trained models. Hence, the results presented in the main text of the paper are computed using the UDRL score, which allowed us to evaluate all 5400 models and performed slightly better than the UDRS score. Figs. 6-8 provide more details on the performance of the different UDR versions.
To qualitatively validate that the UDR method is ranking models well, we look in more detail at the β-VAE model ranking when evaluated with the DCI disentanglement metric on the dSprites dataset. This scenario resulted in the worst disagreement between UDR and the supervised metric, as shown in Fig. 6. We consider the UDRL version of our method, since it appears to give the best trade-off between overall correlations with the supervised metrics and hyperparameter selection accuracy. Fig. 9 demonstrates that the poor correlation between UDRL and DCI Disentanglement is due to the supervised metric. Models ranked highly by UDRL but poorly by DCI Disentanglement appear to be qualitatively disentangled through visual inspection of latent traversals. Conversely, models scored highly by DCI Disentanglement but poorly by UDRL appear entangled." }, { "heading": "A.8 UDR CORRELATION WITH FINAL TASK PERFORMANCE", "text": "To illustrate the usefulness of UDR for selecting disentangled models, we ran two experiments. We computed the UDR correlation with fairness scores and with data efficiency on a model-based RL task.
Fairness scores. Fig. 11 (left) demonstrates that UDR correlates well with the classification fairness scores introduced by Locatello et al. (2019). We adopted a setup similar to that described in Locatello et al. (2019) to compute fairness, using a gradient boosting classifier over 10000 labelled examples. The fairness score was computed by taking the mean of the fairness scores across all targets and all sensitive variables, where the fairness scores are computed by measuring the total variation after intervening on the sensitive variable. The fairness scores were compared against the Lasso regression version of UDR where models were paired only within the same hyperparameters.
Model-based RL data efficiency. We reproduced the results of the COBRA agent (Watters et al., 2019) to observe whether UDR would correlate with the final task performance when using VAEs as state representations. More precisely, we look at the training data efficiency, reported as the number of steps needed to achieve 90% performance on the Clustering task (see Watters et al. (2019) for details), while using differently disentangled models.
The agent is provided with a pre-trained MONet (Burgess et al., 2019), an exploration policy and a transition model, and has to learn a good reward predictor for the task in a dense reward setting. It uses Model Predictive Control in order to plan and solve the task, where sprites have to be clustered by colour (e.g. two blue sprites and two red sprites). In COBRA, the authors use a MONet with a disentangled representation by using a high β=1.
When pre-training MONet, we used β∈{0.01,0.1,1} in order to introduce entanglement in the representations without compromising reconstruction accuracy, and pre-trained 10 seeds for each value of β. We use 5 random initialisations of the reward predictor for each possible MONet model, and train them to perform the clustering task as explained in Watters et al. (2019). We report the number of steps to reach 90% success, averaged across the initialisations.
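Both analyses reduce to a rank correlation between per-model UDR scores and a task outcome; with scipy this is a one-liner. The arrays below are toy stand-ins rather than our measurements, and the sign flip makes a positive correlation mean higher UDR, fewer steps:

```python
import numpy as np
from scipy.stats import spearmanr

udr = np.array([0.61, 0.44, 0.30, 0.55, 0.21])           # per-model UDR
steps_to_90 = np.array([11e3, 23e3, 30e3, 14e3, 36e3])   # data efficiency
rho, pval = spearmanr(udr, -steps_to_90)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")
```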
The UDR score is computed by feeding images with a single sprite to obtain an associated unique representation and proceeding as described in the main text.\nAs can be seen in Figure 11 (right), we find that the UDR scores correlate with this final data efficiency (linear regression shown, Spearman correlation ρ=0.56). This indicates that one could leverage the UDR score as a metric to select representations for further tasks. In this analysis we used the version of UDR that uses Spearman correlations and within-hyperparameter model comparisons." }, { "heading": "A.9 EVALUATING UDR ON MORE COMPLEX DATASETS", "text": "We evaluated whether UDR is useful for model selection on more complex datasets. In particular, we chose CelebA and ImageNet. While disentangling VAEs have been shown to perform well on CelebA in the past (e.g. Higgins et al. (2018b)), ImageNet is notoriously too complex for even vanilla VAEs to model. However, we still wanted to verify whether the coarse representations of VAEs on ImageNet could be disentangled, and if so, whether UDR would be useful for model selection. To this end, we ran a hyperparameter sweep for the β-VAE and ranked its representations using UDR. Fig. 12 shows that UDR scores are clearly different for the different values of the β hyperparameter. It is also clear that the models were able to learn about CelebA and produce reasonable reconstructions, but on ImageNet even the vanilla VAEs struggled to represent anything but the coarsest information. Figs. 13-14 plot latent traversals for three randomly chosen models with high (>0.6) and low (<0.3) UDR scores. The latents are sorted by their informativeness, as approximated by their batch-averaged per-dimension KL with the prior as per Eq. 3. It is clear that for both datasets those models that are ranked high by the UDR have both more interpretable and more similar representations than those models that are ranked low." }, { "heading": "A.10 QUALITATIVE EVALUATION OF MODEL REPRESENTATIONS RANKED BY UDR SCORES", "text": "In this section we attempt to qualitatively verify our assumption that “for a particular dataset and a VAE-based unsupervised disentangled representation learning model class, disentangled representations are all alike, while every entangled representation is entangled in its own way, to rephrase Tolstoy”. The theoretical justification of the proposed UDR hinges on the work by Rolinek et al. (2019). However, that work empirically evaluated its analysis only on the β-VAE model class. Even though we have reasons to believe that their theoretical results would also hold for the other disentangling VAEs evaluated in this paper, in this section we empirically evaluate whether this is true.\nFirst, we check if all model classes operate in the so-called “polarised regime”, which was highlighted by Rolinek et al. (2019) as being important for pushing VAEs towards disentanglement. It is known that even vanilla VAEs (Kingma & Welling, 2014; Rezende et al., 2014) enter the “polarised regime”, which is often cited as one of their shortcomings (e.g. see Rezende & Viola (2018)). All of the disentangling VAEs considered in this paper augment the original ELBO objective with extra terms. None of these extra terms penalise entering the “polarised regime”, apart from that of DIP-VAE-I. We tested empirically whether different model classes entered the “polarised regime” during our hyperparameter sweeps.
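Concretely, the check relies on the batch-averaged per-dimension KL of each latent with the prior; a minimal sketch, assuming diagonal-Gaussian posteriors (the 0.01-nat cut-off below is an illustrative choice, not a value from this work):

    import numpy as np

    def kl_per_dimension(mu, logvar):
        # mu, logvar: (N, M) parameters of q(z|x) = N(mu, diag(exp(logvar))).
        kl = 0.5 * (mu**2 + np.exp(logvar) - logvar - 1.0)  # per-sample, per-dimension KL to N(0, I)
        return kl.mean(axis=0)                              # batch-averaged, shape (M,)

    def num_switched_off(mu, logvar, threshold=0.01):
        # a latent counts as "switched off" when its average KL to the prior is near zero
        return int((kl_per_dimension(mu, logvar) < threshold).sum())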
We did this by counting the number of latents that were “switched off” in each of the 5400 models considered in our paper by using Eq. 3. We found that all models apart from DIP-VAE-I entered the polarised regime during the hyperparameter sweep, having on average 2.95/10 latents “switched off” (with a standard deviation of 1.97).\nSecond, we check whether the models scored highly by the UDR do indeed have similar representations, and whether models that are scored low have dissimilar representations. We do this qualitatively by plotting latent traversals for three randomly chosen models within each of the six disentangling VAE model classes considered in this paper.\nWe group these plots by UDR scores into three bands: high (UDR>0.4), medium (0.3<UDR<0.4) and low (UDR<0.3). Fig. 15 shows latent traversals for all model classes that were able to achieve the high range of UDR scores (note that some model classes were not able to achieve high UDR values with the hyperparameter settings evaluated in this paper). We present the latent traversals for all ten latents per model without sorting them in any particular way. We also colour code the latents by their semantic meaning (if the meaning is apparent from the latent traversal). Fig. 15 shows that the representations learnt by the highly ranked models all appear to be very similar (up to subsetting, sign inversion and permutation). Note that the models also include many latents that are “switched off”. Fig. 16 shows latent traversals for two model classes that did not achieve high UDR scores. We see that these models now have fewer semantically meaningful latent dimensions, and fewer latents that are uninformative or “switched off”. Finally, Fig. 17 shows latent traversals for all model classes that had models which scored low on UDR. We see that many of these models do not have any uninformative latents, and their representations are hard to interpret. Furthermore, it is hard to find similarity between the representations learnt by the different models. Together, Figs. 15-17 empirically verify that our assumption holds for the model classes considered in this paper. However, we recommend that any practitioner using UDR on new disentangling model classes developed in the future first verify that the assumptions of the UDR hold for those models. We suggest training the new models on a number of toy but well-studied datasets (like dSprites or Shapes3D) and checking if the ranks produced by UDR correlate with those produced by the supervised metrics. Furthermore, we suggest a qualitative evaluation of the traversal plots for the high and low scoring models." } ]
2020
null
SP:27b73923af173446aa087b192767dedd7119231b
[ "This paper designs a set of dynamics for learning in games called Follow-the-Ridge with the goal of finding local Stackelberg equilibria. The main theoretical results show that the only stable attractors of the dynamics are Stackelberg equilibria. Moreover, the authors give a deterministic convergence rate for the vanilla algorithm and a convergence rate using momentum. Empirical results show the learning dynamics cancel out rotational components and drive the vector field to zero rapidly, while reaching good performance on simple GAN examples.", "The present work proposes a new algorithm, \"Follow the Ridge\" (FR), that uses second-order gradient information to iteratively find local minimax points, or Stackelberg equilibria, in two-player continuous games. The authors show rigorously that the only stable fixed points of their algorithm are local minimax points and that their algorithm therefore converges locally exactly to those points. They show that the resulting optimizer is compatible with heuristics like RMSProp and Momentum. They further evaluate their algorithm on polynomial toy problems and simple GANs." ]
Many tasks in modern machine learning can be formulated as finding equilibria in sequential games. In particular, two-player zero-sum sequential games, also known as minimax optimization, have received growing interest. It is tempting to apply gradient descent to solve minimax optimization given its popularity and success in supervised learning. However, it has been noted that naive application of gradient descent fails to find some local minimax and can converge to non-local-minimax points. In this paper, we propose Follow-the-Ridge (FR), a novel algorithm that provably converges to and only converges to local minimax. We show theoretically that the algorithm addresses the notorious rotational behaviour of gradient dynamics, and is compatible with preconditioning and positive momentum. Empirically, FR solves toy minimax problems and improves the convergence of GAN training compared to recent minimax optimization algorithms.1
[ { "affiliations": [], "name": "Yuanhao Wang" }, { "affiliations": [], "name": "Guodong Zhang" }, { "affiliations": [], "name": "Jimmy Ba" } ]
[ { "authors": [ "Leonard Adolphs", "Hadi Daneshmand", "Aurelien Lucchi", "Thomas Hofmann" ], "title": "Local saddle point optimization: A curvature exploitation approach", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "David Balduzzi", "Sebastien Racaniere", "James Martens", "Jakob Foerster", "Karl Tuyls", "Thore Graepel" ], "title": "The mechanics of n-player differentiable games", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Hugo Berard", "Gauthier Gidel", "Amjad Almahairi", "Pascal Vincent", "Simon Lacoste-Julien" ], "title": "A closer look at the optimization landscapes of generative adversarial networks", "venue": null, "year": 1906 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale GAN training for high fidelity natural image synthesis", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Bo Dai", "Albert Shaw", "Lihong Li", "Lin Xiao", "Niao He", "Zhen Liu", "Jianshu Chen", "Le Song" ], "title": "Sbeed: Convergent reinforcement learning with nonlinear function approximation", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Constantinos Daskalakis", "Ioannis Panageas" ], "title": "The limit points of (optimistic) gradient descent in min-max optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Constantinos Daskalakis", "Andrew Ilyas", "Vasilis Syrgkanis", "Haoyang Zeng" ], "title": "Training gans with optimism", "venue": "In International Conference on Learning Representations (ICLR", "year": 2018 }, { "authors": [ "Simon S Du", "Jianshu Chen", "Lihong Li", "Lin Xiao", "Dengyong Zhou" ], "title": "Stochastic variance reduction methods for policy evaluation", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Tanner Fiez", "Benjamin Chasnov", "Lillian J Ratliff" ], "title": "Convergence of learning dynamics in stackelberg games", "venue": "arXiv preprint arXiv:1906.01217,", "year": 2019 }, { "authors": [ "Oded Galor" ], "title": "Discrete dynamical systems", "venue": "Springer Science & Business Media,", "year": 2007 }, { "authors": [ "Ian Gemp", "Sridhar Mahadevan" ], "title": "Global convergence to the equilibrium of gans using variational inequalities", "venue": "arXiv preprint arXiv:1808.01531,", "year": 2018 }, { "authors": [ "Gauthier Gidel", "Reyhane Askari Hemmat", "Mohammad Pezeshki", "Rémi Le Priol", "Gabriel Huang", "Simon Lacoste-Julien", "Ioannis Mitliagkas" ], "title": "Negative momentum for improved game dynamics", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Roger A. Horn", "Charles R. 
Johnson" ], "title": "Matrix analysis", "venue": null, "year": 2013 }, { "authors": [ "Chi Jin", "Praneeth Netrapalli", "Michael I Jordan" ], "title": "What is local optimality in nonconvex-nonconcave minimax optimization", "venue": "arXiv preprint arXiv:1902.00618,", "year": 2019 }, { "authors": [ "David Kinderlehrer", "Guido Stampacchia" ], "title": "An introduction to variational inequalities and their applications, volume 31", "venue": null, "year": 1980 }, { "authors": [ "GM Korpelevich" ], "title": "The extragradient method for finding saddle points and other problems", "venue": "Matecon, 12:747–756,", "year": 1976 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Tianyi Lin", "Chi Jin", "Michael I Jordan" ], "title": "On gradient descent ascent for nonconvex-concave minimax problems", "venue": "arXiv preprint arXiv:1906.00331,", "year": 2019 }, { "authors": [ "Michael L Littman" ], "title": "Markov games as a framework for multi-agent reinforcement learning", "venue": "In Machine learning proceedings", "year": 1994 }, { "authors": [ "Songtao Lu", "Ioannis Tsaknakis", "Mingyi Hong", "Yongxin Chen" ], "title": "Hybrid block successive approximation for one-sided non-convex min-max problems: algorithms and applications", "venue": null, "year": 1902 }, { "authors": [ "Dougal Maclaurin", "David Duvenaud", "Ryan Adams" ], "title": "Gradient-based hyperparameter optimization through reversible learning", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Xudong Mao", "Qing Li", "Haoran Xie", "Raymond YK Lau", "Zhen Wang", "Stephen Paul Smolley" ], "title": "Least squares generative adversarial networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "James Martens" ], "title": "Deep learning via hessian-free optimization", "venue": "In International Conference on Machine Learning,", "year": 2010 }, { "authors": [ "Eric V Mazumdar", "Michael I Jordan", "S Shankar Sastry" ], "title": "On finding local nash equilibria (and only local nash equilibria) in zero-sum games", "venue": null, "year": 1901 }, { "authors": [ "Panayotis Mertikopoulos", "Bruno Lecouat", "Houssam Zenati", "Chuan-Sheng Foo", "Vijay Chandrasekhar", "Georgios Piliouras" ], "title": "Optimistic mirror descent in saddle-point problems: Going the extra(gradient) mile", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Lars Mescheder", "Sebastian Nowozin", "Andreas Geiger" ], "title": "The numerics of gans", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Aryan Mokhtari", "Asuman Ozdaglar", "Sarath Pattathil" ], "title": "Proximal point approximations achieving a convergence rate of O(1/k) for smooth convex-concave saddle point problems: 
Optimistic gradient and extra-gradient methods", "venue": "arXiv preprint arXiv:1906.01115,", "year": 2019 }, { "authors": [ "Aryan Mokhtari", "Asuman Ozdaglar", "Sarath Pattathil" ], "title": "A unified analysis of extra-gradient and optimistic gradient methods for saddle point problems: Proximal point approach", "venue": "arXiv preprint arXiv:1901.08511,", "year": 2019 }, { "authors": [ "John F Nash" ], "title": "Equilibrium points in n-person games", "venue": "Proceedings of the national academy of sciences,", "year": 1950 }, { "authors": [ "Arkadi Nemirovski" ], "title": "Prox-method with rate of convergence o (1/t) for variational inequalities with lipschitz continuous monotone operators and smooth convex-concave saddle point problems", "venue": "SIAM Journal on Optimization,", "year": 2004 }, { "authors": [ "Arkadi S Nemirovski", "David Berkovich Yudin" ], "title": "Cesari convergence of the gradient method of approximating saddle points of convex-concave functions", "venue": "In Doklady Akademii Nauk,", "year": 1978 }, { "authors": [ "Maher Nouiehed", "Maziar Sanjabi", "Jason D Lee", "Meisam Razaviyayn" ], "title": "Solving a class of nonconvex min-max games using iterative first order methods", "venue": null, "year": 1902 }, { "authors": [ "Sebastian Nowozin", "Botond Cseke", "Ryota Tomioka" ], "title": "f-gan: Training generative neural samplers using variational divergence minimization", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "B.T. Polyak" ], "title": "Some methods of speeding up the convergence of iteration methods", "venue": "Ussr Computational Mathematics and Mathematical Physics,", "year": 1964 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06434,", "year": 2015 }, { "authors": [ "Hassan Rafique", "Mingrui Liu", "Qihang Lin", "Tianbao Yang" ], "title": "Non-convex min-max optimization: Provable algorithms and applications in machine learning", "venue": null, "year": 1810 }, { "authors": [ "Lillian J. Ratliff", "Samuel A. Burden", "S. Shankar Sastry" ], "title": "On the characterization of local nash equilibria in continuous games", "venue": "IEEE Transactions on Automatic Control,", "year": 2016 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Ilya Sutskever", "James Martens", "George E. Dahl", "Geoffrey E. Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "In Proceedings of The 30th International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural networks for machine learning,", "year": 2012 }, { "authors": [ "Yasin Yazıcı", "Chuan-Sheng Foo", "Stefan Winkler", "Kim-Hui Yap", "Georgios Piliouras", "Vijay Chandrasekhar" ], "title": "The unusual effectiveness of averaging in GAN training", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "F. Zeuthen" ], "title": "Heinrich von stackelberg: Marktformen und gleichgewicht. julius springer", "venue": "s.). pris r. 
m", "year": 1934 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nWe consider differentiable sequential games with two players: a leader who can commit to an action, and a follower who responds after observing the leader’s action. Particularly, we focus on the zero-sum case of this problem, which is also known as minimax optimization, i.e.,\nmin x∈Rn max y∈Rm f(x,y).\nUnlike simultaneous games, many practical machine learning algorithms, including generative adversarial networks (GANs) (Goodfellow et al., 2014; Arjovsky et al., 2017), adversarial training (Madry et al., 2018) and primal-dual reinforcement learning (Du et al., 2017; Dai et al., 2018), explicitly specify the order of moves between players, and which player acts first is crucial for the problem. Therefore, the classical notion of local Nash equilibrium from simultaneous games may not be a proper definition of local optima for sequential games since minimax is in general not equal to maximin. Instead, we consider the notion of local minimax (Jin et al., 2019) which takes into account the sequential structure of minimax optimization.\nThe vanilla algorithm for solving sequential minimax optimization is gradient descent-ascent (GDA), where both players take a gradient update simultaneously. However, GDA is known to suffer from two drawbacks. First, it has undesirable convergence properties: it fails to converge to some local minimax and can converge to fixed points that are not local minimax (Jin et al., 2019; Daskalakis and Panageas, 2018). Second, GDA exhibits strong rotation around fixed points, which requires using very small learning rates (Mescheder et al., 2017; Balduzzi et al., 2018) to converge.\nIn this paper, we propose Follow-the-Ridge (FR), an algorithm for minimax optimization that addresses both issues. Specifically, we elucidate the cause of undesirable convergence of GDA – the leader, whose gradient step takes the system away from the ridge. By adding a correction term to the follower, we explicitly cancel out the negative effects of the leader’s update. Intuitively, the combination of the leader’s update and the correction term is parallel to the ridge in the landscape (see Fig. 1), hence the name Follow-the-Ridge. Overall, our contributions are the following: • We propose a novel algorithm for minimax optimization which has exact local convergence to local minimax points. Previously, this property was only known to be satisfied when the leader moves infinitely slower than the follower in gradient descent-ascent (Jin et al., 2019). • We show theoretically and empirically that FR addresses the notorious rotational behaviour of gradient dynamics around fixed points (Balduzzi et al., 2018) and thus allows a much larger learning rate compared to GDA. • We prove that our algorithm is compatible with standard acceleration techniques such as preconditioning and positive momentum, which can speed up convergence significantly. • We further show that our algorithm also applies to general-sum Stackelberg games (Fiez et al., 2019; Zeuthen, 1935) with similar theoretical guarantees. • Finally, we demonstrate empirically that our algorithm improves the convergence performance in both toy minimax problems and GAN training compared to existing methods.\n∗These two authors contributed equally.\n1Our code is made public at: https://github.com/gd-zhang/Follow-the-Ridge" 
}, { "heading": "2 PRELIMINARIES", "text": "" }, { "heading": "2.1 MINIMAX OPTIMIZATION", "text": "We consider sequential games with two players where one player is deemed the leader and the other the follower. We denote the leader’s action by x ∈ Rn, and the follower’s action by y ∈ Rm. The leader aims at minimizing the cost function f(x,y) while the follower aims at maximizing f(x,y). The only assumption we make on the cost function is the following. Assumption 1. f is twice differentiable everywhere, and thrice differentiable at critical points. ∇2yyf is invertible (i.e., non-singular).\nThe global solution to the sequential game minx maxy f(x,y) is an action pair (x∗,y∗), such that y∗ is the global optimal response to x∗ for the follower, and that x∗ is the global optimal action for the leader assuming the follower always plays the global optimal response. We call this global solution the global minimax. However, finding this global minimax is often intractable; therefore, we follow Jin et al. (2019) and take local minimax as the local surrogate. Definition 1 (local minimax). (x∗,y∗) is a local minimax for f(x,y) if (1) y∗ is a local maximum of f(x∗, ·); (2) x∗ is a local minimum of φ(x) := f(x, r(x)), where r(x) is the implicit function defined by ∇yf(x,y) = 0 in a neighborhood of x∗ with r(x∗) = y∗.\nIn the definition above, the implicit function r(·) : Rn → Rm is a local best response for the follower, and is a ridge in the landscape of f(x,y). Local minimaxity captures an equilibrium in a two-player sequential game if both players are only allowed to change their strategies locally. For notational convenience, we define\n∇f(x,y) = [∇xf, ∇yf ]ᵀ, ∇2f(x,y) = [ Hxx Hxy; Hyx Hyy ].\nIn principle, local minimax can be characterized in terms of the following first-order and second-order conditions, which were established in Jin et al. (2019). Proposition 1 (First-order Condition). Any local minimax (x∗,y∗) satisfies ∇f(x∗,y∗) = 0. Proposition 2 (Second-order Necessary Condition). Any local minimax (x∗,y∗) satisfies Hyy ⪯ 0 and Hxx − HxyH−1yyHyx ⪰ 0. Proposition 3 (Second-order Sufficient Condition). Any stationary point (x∗,y∗) satisfying Hyy ≺ 0 and Hxx − HxyH−1yyHyx ≻ 0 is a local minimax.\nThe concept of global/local minimax is different from Nash equilibrium and local Nash, which are the equilibrium concepts typically studied for simultaneous games (see Nash et al. (1950); Ratliff et al. (2016) for more details). In particular, we note that the concept of Nash equilibrium or local Nash does not reflect the order between the min-player and the max-player and may not exist even for simple functions (Jin et al., 2019). In general, the set of local minimax is a superset of local Nash. Under some mild assumptions, local minimax points are guaranteed to exist (Jin et al., 2019). However, the set of stable fixed points of GDA, roughly speaking the set of points that GDA locally converges to, is a different superset of local Nash (Jin et al., 2019). The relation between the three sets of points is illustrated in Fig. 2." }, { "heading": "2.2 STABILITY OF DISCRETE DYNAMICAL SYSTEMS", "text": "Gradient-based methods can reliably find locally stable fixed points – local minima – in single-objective optimization. Here, we generalize the concept of stability to games by taking game dynamics as a discrete dynamical system. An iteration of the form zt+1 = w(zt) can be viewed as a discrete dynamical system, where in our case w : Rn+m → Rn+m.\n\nIf w(z) = z, then z is called a fixed point. 
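As a minimal numerical illustration of these notions (a sketch, not part of our experiments), the GDA iteration on a quadratic f(x, y) = ax² + by² + cxy has a constant Jacobian at its unique fixed point, the origin, whose spectrum can be inspected directly:

    import numpy as np

    def gda_jacobian(a, b, c, eta):
        # w(x, y) = (x - eta * df/dx, y + eta * df/dy) for f(x, y) = a*x^2 + b*y^2 + c*x*y
        hessian = np.array([[2 * a, c], [c, 2 * b]])
        return np.eye(2) - eta * np.diag([1.0, -1.0]) @ hessian  # sign flip for the ascending player

    J = gda_jacobian(a=3.0, b=1.0, c=4.0, eta=0.05)   # the example analysed in the next section
    print(max(abs(np.linalg.eigvals(J))))              # 0.9 < 1: the origin is a strictly stable fixed point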
We study the stability of fixed points as a proxy to local convergence of game dynamics.\n\nDefinition 2. Let J denote the Jacobian of w at a fixed point z. If it has spectral radius ρ(J) ≤ 1, then we call z a stable fixed point. If ρ(J) < 1, then we call z a strictly stable fixed point.\n\nIt is known that strict stability implies local convergence (e.g., see Galor (2007)). In other words, if z is a strictly stable fixed point, there exists a neighborhood U of z such that when initialized in U, the iteration steps always converge to z." }, { "heading": "3 UNDESIRABLE BEHAVIOURS OF GDA", "text": "In this section, we discuss the undesirable behaviours of GDA in more detail. Recall that the update rule of GDA is given by\nxt+1 ← xt − η∇xf, yt+1 ← yt + η∇yf, (1)\nwhere we assume the same learning rate for both the leader and the follower for simplicity2. As illustrated in Fig. 2, the set of stable fixed points of GDA can include points that are not local minimax and, perhaps even worse, some local minimax are not necessarily stable fixed points of GDA. Here, we first give an example in which a stable fixed point of GDA is not a local minimax. Consider minx maxy f(x, y) = 3x2 + y2 + 4xy; the only stationary point of this problem is (0, 0) and the Jacobian of GDA at this point is\nJ = I − η [ 6 4; −4 −2 ].\nIt is easy to see that the eigenvalues of J are e1 = e2 = 1 − 2η. Therefore, by Definition 2, (0, 0) is a strictly stable fixed point of GDA. However, one can show that Hyy = 2 > 0, which does not satisfy the second-order necessary condition of local minimax.\nSimilarly, one can easily find examples in which a local minimax is not in the set of stable fixed points of GDA, e.g., minx∈R maxy∈R f(x, y) = −3x2 − y2 + 4xy (see Fig. 1). In this example, the two Jacobian eigenvalues are both greater than 1 no matter how small the learning rate is. In other words, GDA fails to converge to (0, 0) for almost all initializations (Daskalakis and Panageas, 2018).\nAs we will discuss in the next section, the main culprit of the undesirable behaviours of GDA is the leader, whose gradient update −η∇xf pushes the whole system away from the ridge or attracts the system to non-local-minimax points. By contrast, the follower’s step η∇yf can pull the system closer to the ridge (see Fig. 1) or push it away from bad fixed points. To guarantee convergence to local minimax (or avoid bad fixed points), we have to use a very small learning rate for the leader (Jin et al., 2019; Fiez et al., 2019) so that the η∇yf term dominates. In the next section, we offer an alternative approach which explicitly cancels out the undesirable effects of −η∇xf, thereby allowing us to use larger learning rates for the leader.\n2In general, the learning rates of the two players can be different. Since our arguments apply to the general setting as long as the ratio ηx/ηy is a positive constant, we assume the same learning rate for convenience." }, { "heading": "4 FOLLOW THE RIDGE", "text": "Despite its popularity, GDA has the tendency to drift away from the ridge or the implicit function, and can, therefore, fail to converge with any constant learning rate. To address these problems, we propose a novel algorithm for minimax optimization, which we term Follow-the-Ridge (FR). The algorithm modifies gradient descent-ascent by applying an asymmetric preconditioner. The update rule is described in Algorithm 1.\n\nAlgorithm 1 Follow-the-Ridge (FR). Differences from gradient descent-ascent are shown in blue. 
Require: Learning rate ηx and ηy; number of iterations T.\n1: for t = 1, ..., T do\n2: xt+1 ← xt − ηx∇xf(xt,yt) ▷ gradient descent\n3: yt+1 ← yt + ηy∇yf(xt,yt) + ηxH−1yyHyx∇xf(xt,yt) ▷ modified gradient ascent\nThe main intuition behind FR is the following. Suppose that yt is a local maximum of f(xt, ·). Let r(x) be the implicit function defined by ∇yf(x,y) = 0 around (xt,yt), i.e., a ridge in the landscape of f(x,y). By definition, a local minimax has to lie on a ridge; hence, it is intuitive to follow the ridge during learning. However, if (xt,yt) is on the ridge, then ∇yf(xt,yt) = 0, and one step of gradient descent-ascent will take (xt,yt) to (xt − ηx∇xf, yt), which is off the ridge. In other words, gradient descent-ascent tends to drift away from the ridge. The correction term we introduce is ∇xr(x)(−ηx∇xf(xt,yt)) = ηxH−1yyHyx∇xf, using the fact that ∇xr(x) = −H−1yyHyx by the implicit function theorem. It would bring yt to yt + ∇xr(x)(xt+1 − xt) ≈ r(xt+1), thereby encouraging both players to stay along the ridge. When (xt,yt) is not on a ridge yet, we expect the −ηx∇xf term and the ηxH−1yyHyx∇xf term to move parallel to the ridge, while the ηy∇yf term brings (xt,yt) closer to the ridge (see Fig. 1). Our main theoretical result is the following theorem, which suggests that FR locally converges and only converges to local minimax. Theorem 1 (Exact local convergence). With a suitable learning rate, all strictly stable fixed points of FR are local minimax, and all local minimax points are stable fixed points of FR.\nThe proof is mainly based on the following observation. The Jacobian of FR dynamics at a fixed point (x∗,y∗) is (c := ηy/ηx)\nJ = I − ηx [ I 0; −H−1yyHyx I ] [ Hxx Hxy; −cHyx −cHyy ],\nwhere the Hessians are evaluated at (x∗,y∗). J is similar to\nM = [ I 0; H−1yyHyx I ] J [ I 0; −H−1yyHyx I ] = I − ηx [ Hxx − HxyH−1yyHyx Hxy; 0 −cHyy ].\nTherefore, the eigenvalues of J are those of I + ηyHyy and those of I − ηx(Hxx − HxyH−1yyHyx). As shown in the second-order necessary condition (Proposition 2), (x∗,y∗) being a local minimax implies Hyy ⪯ 0 and Hxx − HxyH−1yyHyx ⪰ 0; one can then show that the spectral radius of the Jacobian satisfies ρ(J) ≤ 1; hence (x∗,y∗) is a stable fixed point by Definition 2. On the other hand, when ρ(J) < 1, by the sufficient condition in Proposition 3, (x∗,y∗) must be a local minimax. Remark 1 (All eigenvalues are real). We notice that all eigenvalues of J, the Jacobian of FR, are real since both Hyy and Hxx − HxyH−1yyHyx are symmetric matrices. As noted by Mescheder et al. (2017); Gidel et al. (2019); Balduzzi et al. (2018), the rotational behaviour (instability) of GDA is caused by eigenvalues with large imaginary part. Therefore, FR addresses the strong rotation problem around fixed points as all eigenvalues are real." }, { "heading": "4.1 ACCELERATING CONVERGENCE WITH PRECONDITIONING AND MOMENTUM", "text": "We now discuss several extensions of FR that preserve the theoretical guarantees.\n\nPreconditioning: To speed up the convergence, it is often desirable to apply a preconditioner on the gradients that compensates for the curvature. For FR, the preconditioned variant is given by\n[ xt+1; yt+1 ] ← [ xt; yt ] − [ I 0; −H−1yyHyx I ] [ ηxP1∇xf; −ηyP2∇yf ] (2)\nWe can show that with any constant positive definite preconditioners P1 and P2, the local convergence behavior of Algorithm 1 remains exact. We note that preconditioning is crucial for successfully training GANs (see Fig. 9)
and RMSprop/Adam have been used exclusively in GAN training.\nMomentum: Another important technique in optimization is momentum, which speeds up convergence significantly both in theory and in practice (Polyak, 1964; Sutskever et al., 2013). We show that momentum can be incorporated into FR (here, for simplicity, we include momentum outside the correction term, which is equivalent to applying momentum to the gradient directly; we give a detailed discussion in Appendix D.4), which gives the following update rule:\n[ xt+1; yt+1 ] ← [ xt; yt ] − [ I 0; −H−1yyHyx I ] [ ηx∇xf; −ηy∇yf ] + γ [ xt − xt−1; yt − yt−1 ]. (3)\nBecause all of the Jacobian eigenvalues are real, we can show that momentum speeds up local convergence in a similar way as it speeds up single-objective minimization.\nTheorem 2. For local minimax (x∗,y∗), let α = min { λmin(−Hyy), λmin(Hxx − HxyH−1yyHyx) }, β = ρ(∇2f(x∗,y∗)), κ := β/α. Then FR converges asymptotically to (x∗,y∗) with a rate Ω(κ−2); FR with a momentum parameter of γ = 1 − Θ(κ−1) converges asymptotically with a rate Ω(κ−1).3\nExperiments on the speedup from momentum are provided in Appendix E.2. This is in contrast to gradient descent-ascent, whose complex Jacobian eigenvalues prevent the use of positive momentum. Instead, negative momentum may be preferable (Gidel et al., 2019), but it does not achieve the same level of acceleration.\n3By a rate a, we mean that one iteration shortens the distance toward the fixed point by a factor of (1 − a); hence the larger the better." }, { "heading": "4.2 GENERAL STACKELBERG GAMES", "text": "Algorithm 2 Follow-the-Ridge (FR) for general-sum Stackelberg games. Require: Learning rate ηx and ηy; number of iterations T.\n1: for t = 1, ..., T do\n2: xt+1 ← xt − ηxDxf(xt,yt) ▷ total derivative Dxf = ∇xf − ∇2xyg(∇2yyg)−1∇yf\n3: yt+1 ← yt − ηy∇yg(xt,yt) + ηx(∇2yyg)−1∇2yxg Dxf(xt,yt)\nHere, we further extend FR to general sequential games, also known as Stackelberg games. The leader commits to an action x, while the follower plays y in response. The leader aims to minimize its cost f(x,y), while the follower aims at minimizing g(x,y). For Stackelberg games, the notion of equilibrium is captured by Stackelberg equilibrium, which is essentially the solution to the following optimization problem:\nmin x∈Rn { f(x,y) | y ∈ arg min y∈Rm g(x,y) }.\nIt can be seen that minimax optimization is the special case when g = −f. Similarly, one can define local Stackelberg equilibrium as a generalization of local minimax in general-sum games (Fiez et al., 2019). Stackelberg games have wide applications in machine learning. To name a few, both multi-agent reinforcement learning (Littman, 1994) and hyperparameter optimization (Maclaurin et al., 2015) can be formulated as finding Stackelberg equilibria.\nFor general-sum games, naive gradient dynamics, i.e., both players taking gradient updates with their own cost functions, is no longer a reasonable algorithm, as local Stackelberg equilibria in general may not be stationary points. Instead, the leader should try to use the total derivative of f(x, r(x)), where r(x) is a local best response for the follower. Thus the counterpart of gradient descent-ascent in general-sum games is actually gradient dynamics with the best-response gradient (Fiez et al., 2019):\nxt+1 ← xt − η [ ∇xf − ∇2xyg(∇2yyg)−1∇yf ](xt,yt), yt+1 ← yt − η∇yg(xt,yt). (4)\nFR can be adapted to general-sum games by adding the same correction term to the follower. The combined update rule is given in Algorithm 2. 
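As a minimal sketch of Algorithm 2 on a toy quadratic Stackelberg game (the functions f and g below are an illustrative choice, not from our experiments; derivatives are written out exactly):

    # Leader cost f(x, y) = (x - 1)^2 + (y - x)^2; follower cost g(x, y) = (y - 2x)^2.
    # The follower's best response is y = 2x, so the local Stackelberg equilibrium is (0.5, 1).
    x, y, eta_x, eta_y = 0.0, 0.0, 0.05, 0.05
    gyy, gyx, gxy = 2.0, -4.0, -4.0                # constant second derivatives of g
    for _ in range(2000):
        fx = 2 * (x - 1) - 2 * (y - x)
        fy = 2 * (y - x)
        gy = 2 * (y - 2 * x)
        dxf = fx - gxy * fy / gyy                  # total derivative D_x f (line 2 of Algorithm 2)
        x, y = x - eta_x * dxf, y - eta_y * gy + eta_x * gyx * dxf / gyy  # line 3 adds the correction
    print(x, y)                                    # approaches (0.5, 1.0)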
Similarly, we show that FR for Stackelberg games locally converges exactly to local Stackelberg equilibria (see Appendix C.2 for a rigorous proof)." }, { "heading": "5 RELATED WORK", "text": "As a special case of Stackelberg games (Ratliff et al., 2016) in the zero-sum setting, minimax optimization concerns the problem of solving minx∈X maxy∈Y f(x,y). The problem has received wide attention due to its extensive applications in modern machine learning, in settings such as generative adversarial networks (GANs) and adversarial training. The vast majority of this line of research focuses on the convex-concave setting (Kinderlehrer and Stampacchia, 1980; Nemirovski and Yudin, 1978; Nemirovski, 2004; Mokhtari et al., 2019b;a). Beyond the convex-concave setting, Rafique et al. (2018); Lu et al. (2019); Lin et al. (2019); Nouiehed et al. (2019) consider nonconvex-concave problems, i.e., where f is nonconvex in x but concave in y. In general, there is no hope of finding the global optimum efficiently in the nonconvex-concave setting.\nMore recently, the nonconvex-nonconcave problem has gained more attention due to its generality. Particularly, there are several lines of work analyzing the dynamics of gradient descent-ascent (GDA) in the nonconvex-nonconcave setting (such as GAN training). Though simple and intuitive, GDA has been shown to have undesirable convergence properties (Adolphs et al., 2019; Daskalakis and Panageas, 2018; Mazumdar et al., 2019; Jin et al., 2019) and to exhibit strong rotation around fixed points (Mescheder et al., 2017; Balduzzi et al., 2018). To overcome this rotational behaviour of GDA, various modifications have been proposed, including averaging (Yazıcı et al., 2019), negative momentum (Gidel et al., 2019), extragradient (EG) (Korpelevich, 1976; Mertikopoulos et al., 2019), optimistic mirror descent (OGDA) (Daskalakis et al., 2018), consensus optimization (CO) (Mescheder et al., 2017) and symplectic gradient adjustment (SGA) (Balduzzi et al., 2018; Gemp and Mahadevan, 2018). However, we note that all these algorithms discard the underlying sequential structure of minimax optimization and adopt a simultaneous game formulation. In this work, we hold that GAN training is better viewed as a sequential game rather than a simultaneous game. The former is more consistent with the divergence minimization interpretation of GANs; there is also some empirical evidence showing that well-performing GAN generators are closer to a saddle point than to a local minimum (Berard et al., 2019), which suggests that local Nash, the typical solution concept for simultaneous games, may not be the most appropriate one for GANs.\nTo the best of our knowledge, the only two methods that can (and only) converge to local minimax are two-time-scale GDA (Jin et al., 2019) and gradient dynamics with the best-response gradient (Fiez et al., 2019). In two-time-scale GDA, the leader moves infinitely slower than the follower, which may cause slow convergence due to infinitely small learning rates. The dynamics in Fiez et al. (2019) is proposed for general-sum games. However, their main result for general-sum games requires stronger assumptions, and even in that case, the dynamics can converge to non-local-Stackelberg points in general-sum games. In contrast, in general-sum games, FR will not converge to non-local-Stackelberg points. Besides, Adolphs et al. (2019) and Mazumdar et al. 
(2019) attempt to solve the undesirable convergence issue of GDA by exploiting curvature information, but they focus on the simultaneous game setting and on finding local Nash, and it is unclear how to extend their algorithms to sequential games.\nFor GAN training, there is a rich literature on different strategies to make the GAN game well-defined, e.g., by adding instance noise (Salimans et al., 2016), by using different objectives (Nowozin et al., 2016; Gulrajani et al., 2017; Arjovsky et al., 2017; Mao et al., 2017) or by tweaking the architectures (Radford et al., 2015; Brock et al., 2019). While these strategies try to make the overall optimization problem easier, our work deals with a specific optimization problem whose convergence issues arise in theory and in practice; hence our algorithm is orthogonal to these works." }, { "heading": "6 EXPERIMENTS", "text": "In this section, we investigate whether the theoretical guarantees of FR carry over to practical problems. Particularly, our experiments have three main aims: (1) to test if FR converges and only converges to local minimax, (2) to test the effectiveness of FR in training GANs with the saturating loss, and (3) to test whether FR addresses the notorious rotation problem in GAN training." }, { "heading": "6.1 LOW DIMENSIONAL TOY EXAMPLES", "text": "To verify our claim on exact local convergence, we first compare FR with gradient descent-ascent (GDA), optimistic mirror descent (OGDA) (Daskalakis et al., 2018), extragradient (EG) (Korpelevich, 1976), symplectic gradient adjustment (SGA) (Balduzzi et al., 2018) and consensus optimization (CO) (Mescheder et al., 2017) on three simple low dimensional problems:\ng1(x, y) = −3x2 − y2 + 4xy, g2(x, y) = 3x2 + y2 + 4xy,\ng3(x, y) = ( 4x2 − (y − 3x + 0.05x3)2 − 0.1y4 ) e−0.01(x2+y2).\nHere g1 and g2 are two-dimensional quadratic problems, which are arguably the simplest nontrivial problems. g3 is a sixth-order polynomial scaled by an exponential, which has a relatively complicated landscape compared to g1 and g2.
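The g1 comparison between GDA and FR can be reproduced with the following minimal sketch (a simplified illustration, not the experiment code; derivatives are exact, and the learning rate 0.05 matches Appendix D.1):

    import numpy as np

    def grad_g1(x, y):          # g1(x, y) = -3x^2 - y^2 + 4xy
        return -6 * x + 4 * y, -2 * y + 4 * x

    def step_gda(x, y, eta=0.05):
        gx, gy = grad_g1(x, y)
        return x - eta * gx, y + eta * gy

    def step_fr(x, y, eta=0.05):
        gx, gy = grad_g1(x, y)
        hyy, hyx = -2.0, 4.0     # constant second derivatives of g1
        return x - eta * gx, y + eta * gy + eta * (hyx / hyy) * gx  # extra H_yy^{-1} H_yx grad_x term

    for step, name in [(step_gda, "GDA"), (step_fr, "FR")]:
        x, y = 0.5, 0.5
        for _ in range(500):
            x, y = step(x, y)
        print(name, x, y)        # GDA blows up; FR converges to the local minimax (0, 0)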
According to the formulation, the generator is the leader who commits to an action first, while the discriminator is the follower that helps the generator to learn the target data distribution." }, { "heading": "6.2.1 MIXTURE OF GAUSSIANS", "text": "We first evaluate 4 different algorithms (GDA, EG, CO and FR) on mixture of Gaussian problems with the original saturating loss. To satisfy the non-singular Hessian assumption, we add L2 regularization (0.0002) to the discriminator. For both generator and discriminator, we use 2-hidden-layers MLP with 64 hidden units each layer where tanh activations is used. By default, RMSprop (Tieleman and Hinton, 2012) is used in all our experiments while the learning rate is tuned for GDA. As our FR involves the computation of Hessian inverses which is computational prohibitive, we instead use conjugate\n4Note that it is a local minimum for the follower.\ngradient (Martens, 2010; Nocedal and Wright, 2006) to solve the linear system in the inner loop. To be specific, instead of solving Hyyz = Hyx∇xf directly, we solve H2yyz = HyyHyx∇xf to ensure that the problem is well-posed since H2yy is always positive semidefinite. For all experimental details, we refer readers to Appendix D.2.\nAs shown in Fig. 4, GDA suffers from the “missing mode” problem and both discriminator and generator fail to converge as confirmed by the gradient norm plot. EG fails to resolve the convergence issue of GDA and performs similarly to GDA. With tuned gradient penalties, consensus optimization (CO) can successfully recover all three modes and obtain much smaller gradient norm. However, we notice that the gradient norm of CO decreases slowly and that both the generator and the discriminator have not converged after 50,000 iterations. In contrast, the generator\ntrained with FR successfully learns the true distribution with three modes and the discriminator is totally fooled by the generator. As expected, both players reach much lower gradient norm with FR, indicating fast convergence. Moreover, we find that even if initialized with GDA-trained networks (the top row of Fig. 4), FR can still find all the modes at the end of training.\nTo check whether FR fixes the strong rotation problem around fixed points, we follow Berard et al. (2019) to plot the gradient norm and path-angle (see Fig. 6). By interpolating between the initial parameters z and the final parameters z∗, they proposed to monitor the angle between the vector field v and the linear path from z to z∗. Specifically, they looked at the quantity – path-angle, defined as\nθ(α) = 〈z∗ − z,vα〉 ‖z∗ − z‖‖vα‖ where vα = v(αz + (1− α)z∗).\nThey showed that a high “bump” around α = 0 in the path-angle plot typically indicates strong rotation behaviour. We choose α = [0.6, 1.2] and plot the gradient norm and path-angle along the linear path for the updates of FR. In particular, we only observe a sign-switch around the fixed point\nz∗ without an obvious bump, suggesting that FR doesn’t exhibit rotational behaviour around the fixed point. To further check if FR converges to local minimax, we check the second-order condition of local minimax by computing the eigenvalues of Hxx −HxyH−1yyHyx and Hyy. As expected, all eigenvalues of Hxx −HxyH−1yyHyx are non-negative while all eigenvalues of Hyy are non-positive.\nWe also run FR on 2-D mixture of Gaussian with the same architectures (see Fig. 5) and compare it to vanilla GDA. 
We also run FR on a 2-D mixture of Gaussians with the same architectures (see Fig. 5) and compare it to vanilla GDA. Though GDA captures all the modes, we note that both the generator and the discriminator do not converge, which can be seen from the gradient norm plot in Fig. 12. In contrast, the discriminator trained by FR is totally fooled by the generator and gradients vanish. We stress here that the sample quality in GAN models is not a good metric for checking convergence, as we showed in the above example." }, { "heading": "6.2.2 PRELIMINARY RESULTS ON MNIST", "text": "In a more realistic setting, we test our algorithm on an image generation task. Particularly, we use the standard MNIST dataset (LeCun et al., 1998) but only take a subset of the dataset with classes 0 and 1 for quick experimenting. To stabilize the training of GANs, we employ spectral normalization (Miyato et al., 2018) to enforce Lipschitz continuity on the discriminator. To ensure the invertibility of the discriminator’s Hessian, we add the same amount of L2 regularization to the discriminator as in the mixture of Gaussians experiments. In terms of network architectures, we use 2-hidden-layer MLPs with 512 hidden units in each layer for both the discriminator and the generator. For the discriminator, we use a Sigmoid activation in the output layer. We use RMSProp as our base optimizer in the experiments with batch size 2,000. We run both GDA and FR for 100,000 iterations.\nIn Fig. 8, we show the generated samples of GDA and FR along with the gradient norm plots. Our main observation is that FR improves convergence, as the gradient norms of both discriminator and generator decrease much faster than with GDA; however, the convergence is not well reflected by the quality of generated samples. We notice that gradients do not vanish to zero at the end of training. We conjecture that for high-dimensional data distributions like images, the network we used is not flexible enough to learn the distribution perfectly." }, { "heading": "7 CONCLUSION", "text": "In this paper, we studied local convergence of learning dynamics in minimax optimization. To address undesirable behaviours of gradient descent-ascent, we proposed a novel algorithm that locally converges to and only converges to local minimax by taking into account the sequential structure of minimax optimization. Meanwhile, we proved that our algorithm addresses the notorious rotational behaviour of vanilla gradient descent-ascent around fixed points. We further showed theoretically that our algorithm is compatible with standard acceleration techniques, including preconditioning and positive momentum. Our algorithm can be easily extended to general-sum Stackelberg games with similar theoretical guarantees. Empirically, we validated the effectiveness of our algorithm in both low-dimensional toy problems and GAN training." }, { "heading": "ACKNOWLEDGEMENT", "text": "We thank Kefan Dong, Roger Grosse and Shengyang Sun for helpful comments on this project." }, { "heading": "A PROOF OF THEOREM 1", "text": "Proof. First of all, note that FR’s update rule can be rewritten as\n[ xt+1; yt+1 ] ← [ xt; yt ] − ηx [ I 0; −H−1yyHyx cI ] [ ∇xf; −∇yf ], (5)\nwhere c := ηy/ηx, and that [ I 0; −H−1yyHyx cI ] is always invertible. Therefore, the fixed points of FR are exactly those that satisfy ∇f(x,y) = 0, i.e., the first-order necessary condition of local minimax. Now, consider a fixed point (x∗,y∗). 
The Jacobian of FR’s update rule at (x∗,y∗) is given by\nJ = I − ηx [ I 0; −H−1yyHyx I ] [ Hxx Hxy; −cHyx −cHyy ].\nObserve that J is similar to\n[ I 0; H−1yyHyx I ] J [ I 0; −H−1yyHyx I ] = I − ηx [ I 0; H−1yyHyx I ] [ I 0; −H−1yyHyx I ] [ Hxx Hxy; −cHyx −cHyy ] [ I 0; −H−1yyHyx I ] = I − ηx [ Hxx − HxyH−1yyHyx Hxy; 0 −cHyy ],\nwhich is block upper triangular. Therefore, the eigenvalues of J are exactly those of I + ηyHyy and those of I − ηx(Hxx − HxyH−1yyHyx), which are all real because both matrices are symmetric. Moreover, suppose that\nηx < 2 / max{ ρ(Hxx − HxyH−1yyHyx), cρ(−Hyy) },\nwhere ρ(·) stands for spectral radius. In this case\n−I ≺ I + ηyHyy, −I ≺ I − ηx(Hxx − HxyH−1yyHyx).\nTherefore whether ρ(J) < 1 depends on whether −Hyy or Hxx − HxyH−1yyHyx has negative eigenvalues. If (x∗,y∗) is a local minimax, by the necessary condition, Hyy ⪯ 0, Hxx − HxyH−1yyHyx ⪰ 0. It follows that the eigenvalues of J all fall in (−1, 1]. (x∗,y∗) is thus a stable fixed point of FR. On the other hand, when (x∗,y∗) is a strictly stable fixed point, ρ(J) < 1. It follows that both −Hyy and Hxx − HxyH−1yyHyx must be positive definite. By the sufficient conditions of local minimax, (x∗,y∗) is a local minimax." }, { "heading": "B PROOF OF THEOREM 2", "text": "Consider a general discrete dynamical system zt+1 ← g(zt). Let z∗ be a fixed point of g(·). Let J(z) denote the Jacobian of g(·) at z. Similar results can be found in many texts; see, for instance, Theorem 2.12 (Olver, 2015). Proposition 4 (Local convergence rate from Jacobian eigenvalue). If ρ(J(z∗)) = 1 − ∆ < 1, then there exists a neighborhood U of z∗ such that for any z0 ∈ U,\n‖zt − z∗‖2 ≤ C (1 − ∆/2)t ‖z0 − z∗‖2,\nwhere C is some constant.\nProof. By Lemma 5.6.10 (Horn and Johnson, 2013), since ρ(J(z∗)) = 1 − ∆, there exists a matrix norm ‖ · ‖ induced by a vector norm ‖ · ‖ such that ‖J(z∗)‖ < 1 − 3∆/4. Now consider the Taylor expansion of g(z) at the fixed point z∗:\ng(z) = g(z∗) + J(z∗)(z − z∗) + R(z − z∗),\nwhere the remainder term satisfies\nlim z→z∗ R(z − z∗) / ‖z − z∗‖ = 0.\nTherefore, we can choose δ > 0 such that whenever ‖z − z∗‖ < δ, ‖R(z − z∗)‖ ≤ (∆/4)‖z − z∗‖. In this case,\n‖g(z) − g(z∗)‖ ≤ ‖J(z∗)(z − z∗)‖ + ‖R(z − z∗)‖ ≤ ‖J(z∗)‖‖z − z∗‖ + (∆/4)‖z − z∗‖ ≤ (1 − ∆/2)‖z − z∗‖.\nIn other words, when z0 ∈ U = {z | ‖z − z∗‖ < δ},\n‖zt − z∗‖ ≤ (1 − ∆/2)t ‖z0 − z∗‖.\nBy the equivalence of finite dimensional norms, there exist constants c1, c2 > 0 such that ∀z, c1‖z‖2 ≤ ‖z‖ ≤ c2‖z‖2. Therefore\n‖zt − z∗‖2 ≤ (c2/c1)(1 − ∆/2)t ‖z0 − z∗‖2.\nIn other words, the rate of convergence is given by the gap between ρ(J) and 1. We now prove Theorem 2 using this view.\nProof of Theorem 2. In the following proof we use ‖ · ‖ to denote the standard spectral norm. It is not hard to see that λmax(−Hyy) ≤ ρ(∇2f(x∗,y∗)) = β and ‖Hxy‖ ≤ β. Also,\nλmax(Hxx − HxyH−1yyHyx) ≤ ‖Hxx‖ + ‖Hxy‖2 · ‖H−1yy‖ ≤ β + β2/α = (1 + κ)β.\nTherefore we choose our learning rate to be ηx = ηy = 1/(2κβ). In this case, the eigenvalues of the Jacobian of FR without momentum all fall in [ 0, 1 − 1/(2κ2) ]. Using Proposition 4, we can show that FR locally converges with a rate of Ω(κ−2). Now, let us consider FR with momentum:\n[ xt+1; yt+1 ] ← [ xt; yt ] − ηx [ I 0; −H−1yyHyx I ] [ ∇xf; −∇yf ] + γ [ xt − xt−1; yt − yt−1 ]. (6)\nThis is a dynamical system on the augmented space of (xt,yt,xt−1,yt−1). Let J1 := I − ηx [ I 0; −H−1yyHyx I ] [ Hxx Hxy; −Hyx −Hyy ] be the Jacobian of the original FR at a fixed point (x∗,y∗). 
Then the Jacobian of Polyak’s momentum at (x∗,y∗,x∗,y∗) is\nJ2 := [ γI + J1 −γI; I 0 ].\nThe spectrum of J2 is given by solutions to det(λI − J2) = det( (λ2 − γλ + γ)I − λJ1 ) = 0.\nIn other words, an eigenvalue r of J1 corresponds to two eigenvalues of J2 given by the roots of λ2 − (γ + r)λ + γ = 0. For our case, let us choose γ = 1 + 1/(2κ2) − √2/κ. Then for any r ∈ [ 0, 1 − 1/(2κ2) ],\n(r + γ)2 − 4γ ≤ ( 1 − 1/(2κ2) + γ )2 − 4γ = 0.\nTherefore the two roots of λ2 − (γ + r)λ + γ = 0 must be imaginary, and their magnitudes are exactly √γ. Since √γ ≤ 1 − (1 − γ)/2 ≤ 1 − 1/(2√2κ), we now know that ρ(J2) ≤ 1 − 1/(2√2κ). Using Proposition 4, we can see that FR with momentum locally converges with a rate of Ω(κ−1)." }, { "heading": "C PROOFS FOR SECTION 4", "text": "" }, { "heading": "C.1 PRECONDITIONING", "text": "Recall that the preconditioned variant of FR is given by\n[ xt+1; yt+1 ] ← [ xt; yt ] − [ I 0; −H−1yyHyx I ] [ ηxP1∇xf; −ηyP2∇yf ]. (7)\nWe now prove that preconditioning does not affect the local convergence properties. Proposition 5. If A is a symmetric real matrix, B is symmetric and positive definite, then the eigenvalues of AB are all real, and AB and A have the same number of positive, negative and zero eigenvalues.\nProof. AB is similar to and thus has the same eigenvalues as B1/2AB1/2, which is symmetric and has real eigenvalues. Since B1/2AB1/2 is congruent to A, they have the same number of positive, negative and zero eigenvalues (see Horn and Johnson (2013, Theorem 4.5.8)).\nProposition 6. Assume that P1 and P2 are positive definite. The Jacobian of (7) has only real eigenvalues at fixed points. With a suitable learning rate, all strictly stable fixed points of (7) are local minimax, and all local minimax are stable fixed points of (7). Proof. First, observe that [ I 0; −H−1yyHyx I ] and [ P1 0; 0 P2 ] are always invertible. Hence fixed points of (7) are exactly stationary points. Let c := ηy/ηx. Note that the Jacobian of (7) is given by\nJ = I − ηx [ I 0; −H−1yyHyx I ] [ P1 0; 0 P2 ] [ Hxx Hxy; −cHyx −cHyy ],\nwhich is similar to\n[ I 0; H−1yyHyx I ] J [ I 0; −H−1yyHyx I ] = I − ηx [ P1 0; 0 P2 ] [ Hxx − HxyH−1yyHyx Hxy; 0 −cHyy ].\nTherefore the eigenvalues of J are exactly those of I − ηxP1( Hxx − HxyH−1yyHyx ) and I + ηyP2Hyy. By Proposition 5, the eigenvalues of both matrices are all real. When the learning rates are small enough, i.e., when\nηx < 2 / max{ ρ( P1(Hxx − HxyH−1yyHyx) ), cρ(−P2Hyy) },\nwhether ρ(J) ≤ 1 solely depends on whether P1( Hxx − HxyH−1yyHyx ) and −P2Hyy have negative eigenvalues. By Proposition 5, the number of positive, negative and zero eigenvalues of the two matrices are the same as those of Hxx − HxyH−1yyHyx and −Hyy respectively. Therefore the proposition follows from the same argument as in Theorem 1." }, { "heading": "C.2 GENERAL-SUM STACKELBERG GAMES", "text": "A general-sum Stackelberg game is formulated as follows. There is a leader, whose action is x ∈ Rn, and a follower, whose action is y ∈ Rm. The leader’s cost function is given by f(x,y) while the follower’s is given by g(x,y). The generalization of minimax in general-sum Stackelberg games is Stackelberg equilibrium. Definition 3 (Stackelberg equilibrium). (x∗,y∗) is a (global) Stackelberg equilibrium if y∗ ∈ R(x∗), and ∀x ∈ X,\nf(x∗,y∗) ≤ max y∈R(x) f(x,y),\nwhere R(x) := arg min g(x, ·) is the best response set for the follower.\nSimilarly, local Stackelberg equilibrium (Fiez et al., 2019) is defined as follows.5\nDefinition 4 (Local Stackelberg equilibrium). 
(x∗,y∗) is a local Stackelberg equilibrium if\n\n1. y∗ is a local minimum of g(x∗, ·);\n\n2. Let r(x) be the implicit function defined by ∇yg(x,y) = 0 in a neighborhood of x∗ with r(x∗) = y∗. Then x∗ is a local minimum of φ(x) := f(x, r(x)).\n\n5Our definition is slightly different from that in Fiez et al. (2019).\nFor local Stackelberg equilibrium, we have similar necessary and sufficient conditions. For simplicity, we use the following notation when it is clear from the context:\n∇2f(x,y) = [ Hxx Hxy; Hyx Hyy ], ∇2g(x,y) = [ Gxx Gxy; Gyx Gyy ].\nSimilar to the zero-sum case, local Stackelberg equilibria can be characterized using derivatives.\nProposition 7 (Necessary conditions). Any local Stackelberg equilibrium satisfies ∇yg(x,y) = 0, ∇xf(x,y) − GxyG−1yy∇yf(x,y) = 0, ∇2yyg(x,y) ⪰ 0 and\nHxx − HxyG−1yyGyx − ∇x( GxyG−1yy∇yf ) + ∇y( GxyG−1yy∇yf )G−1yyGyx ⪰ 0.\nProposition 8 (Sufficient conditions). If (x,y) satisfies ∇yg(x,y) = 0, ∇xf(x,y) − GxyG−1yy∇yf(x,y) = 0, ∇2yyg(x,y) ≻ 0 and\nHxx − HxyG−1yyGyx − ∇x( GxyG−1yy∇yf ) + ∇y( GxyG−1yy∇yf )G−1yyGyx ≻ 0,\nthen (x,y) is a local Stackelberg equilibrium.\nThe conditions above can be derived from the definition with the observation that\n∇2φ(x) = ∇( ∇xf(x, r(x))ᵀ − ∇yf(x, r(x))ᵀ∇r(x) ) = ∇2xxf + ∇2xyf∇r(x) + ∇x( ∇yfᵀ∇r(x) ) + ∇y( ∇yfᵀ∇r(x) )∇r(x) = Hxx − HxyG−1yyGyx − ∇x( GxyG−1yy∇yf ) + ∇y( GxyG−1yy∇yf )G−1yyGyx.\nHere all derivatives are evaluated at (x, r(x)). We would like to clarify that by ∇xh, where h : Rn+m → R, we mean the partial derivative of h for the first n entries. Similarly for h : Rn+m → Rk, by ∇xh we mean the first n columns of the Jacobian of h, which is k-by-(n+m). Henceforth we will use Dxf(x,y) to denote ∇xf − GxyG−1yy∇yf(x,y). The general-sum version of Follow-the-Ridge is given by\n[ xt+1; yt+1 ] ← [ xt; yt ] − [ I 0; −G−1yyGyx I ] [ ηxDxf(xt,yt); ηy∇yg(xt,yt) ]. (8)\nJust as the zero-sum version of FR converges exactly to local minimax, we can show that the general-sum version of FR converges exactly to local Stackelberg equilibria. As in the zero-sum setting, we use stability of fixed points as a proxy for local convergence.\nTheorem 3. The Jacobian of (8) has only real eigenvalues at fixed points. With a suitable learning rate, all strictly stable fixed points of (8) are local Stackelberg equilibria, and all local Stackelberg equilibria are stable fixed points of (8).\nProof. This theorem only analyzes the Jacobian of (8) at fixed points; thus we will only need to focus on the fixed points of (8) in the proof.\nLet c := ηy/ηx. Note that [ I 0; −G−1yyGyx I ] is always invertible. Therefore, the fixed points of (8) are exactly points (x,y) that satisfy Dxf(x,y) = 0 and ∇yg(x,y) = 0, i.e. the first-order necessary condition for local Stackelberg equilibria.\nIn particular, consider a fixed point z∗ := (x,y). The Jacobian of (8) at (x,y) is given by\nJ = I − ηx [ I 0; −G−1yyGyx I ] [ Hxx − ∇x(GxyG−1yy∇yf) Hxy − ∇y(GxyG−1yy∇yf); cGyx cGyy ].\nObserve that\n[ I 0; G−1yyGyx I ] J [ I 0; −G−1yyGyx I ] = I − ηx [ Hxx − ∇x(⋄) Hxy − ∇y(⋄); cGyx cGyy ] [ I 0; −G−1yyGyx I ] = I − ηx [ Hxx − HxyG−1yyGyx − ∇x(⋄) + ∇y(⋄)G−1yyGyx Hxy − ∇y(⋄); 0 cGyy ],\nwhere ⋄ is a shorthand for GxyG−1yy∇yf. Let\nH̃xx := Hxx − HxyG−1yyGyx − ∇x(⋄) + ∇y(⋄)G−1yyGyx.\nWe can now see that the eigenvalues of J are exactly those of I − ηxH̃xx and those of I − ηyGyy. 
It follows that all eigenvalues of J are real.6 Suppose that\nηx < 2\nmax{ρ(H̃xx), cρ (Gyy)} .\nIn that case, if (x,y) is a local Stackelberg equilibrium, then from the second-order necessary condition, both H̃xx and Gyy are positive semidefinite. As a result, all eigenvalues of J would be in (−1, 1]. By Definition 2, this suggests that (x,y) is a stable fixed point. On the other hand, if (x,y) is a strictly stable fixed point, then all eigenvalues of J fall in (−1, 1), which suggests that H̃xx 0 and Gyy 0. By the sufficient condition, (x,y) is a local Stackelberg equilibrium.\nRemark 2. From the proof above, it is can be seen that “strict local Stackelberg equilibria”, i.e. points that satisfy the sufficient conditions, must be strictly stable fixed points of (8). Then by Proposition 4, FR locally converges to such points. Thus, the set of points that FR locally converges to is the same as local Stackelberg equilibria, up to degenerate cases where the Jacobian spectral radius is exactly 1." }, { "heading": "D EXPERIMENTAL DETAILS", "text": "" }, { "heading": "D.1 LOW DIMENSIONAL PROBLEMS", "text": "The algorithms we compared with are[ xt+1 yt+1 ] ← [ xt yt ] − η [ ∇xf(xt,yt) −∇yf(xt,yt) ] , (GDA)[\nxt+1 yt+1 ] ← [ xt yt ] − 2η [ ∇xf(xt,yt) −∇yf(xt,yt) ] + η [ ∇xf(xt−1,yt−1) −∇yf(xt−1,yt−1) ] , (OGDA)[\nxt+1 yt+1 ] ← [ xt yt ] − η [ ∇xf(xt − η∇xf(xt,yt),yt + η∇yf(xt,yt)) −∇yf(xt − η∇xf(xt,yt),yt + η∇yf(xt,yt)) ] , (EG)[\nxt+1 yt+1 ] ← [ xt yt ] − η [ I −λHxy λHyx I ] [ ∇xf(xt,yt) −∇yf(xt,yt) ] , (SGA)[\nxt+1 yt+1 ] ← [ xt yt ] − η [ ∇xf(xt,yt) −∇yf(xt,yt) ] − γη∇‖∇f(xt,yt)‖2 . (CO)\nWe used a learning rate of η = 0.05 for all algorithms, λ = 1.0 for SGA and γ = 0.1 for CO. We did not find SGA with alignment (Balduzzi et al., 2018) to be qualitatively different from SGA in our experiments.\n6H̃xx is always symmetric." }, { "heading": "D.2 MIXTURE OF GAUSSIAN EXPERIMENT", "text": "Dataset. The mixture of Gaussian dataset is composed of 5,000 points sampled independently from the following distribution pD(x) = 13N (−4, 0.01) + 1 3N (0, 0.01) + 1 3N (4, 0.01) where N (µ, σ 2) is the probability density function of a 1D-Gaussian distribution with mean µ and variance σ2. The latent variables z ∈ R16 are sampled from a standard Normal distribution N (0, I). Because we want to use full-batch methods, we sample 5,000 points that we re-use for each iteration during training. For the two-dimensional case, we generate the data from 9 Gaussians with µx ∈ {−3, 0, 3} and µy ∈ {−3, 0, 3}. The covariance matrix is 0.01I. Neural Networks Architecture. Both the generator and discriminator are 2 hidden layer neural networks with 64 hidden units and Tanh activations.\nOther Hyperparameters. For FR, we use conjugate gradient (CG) in the inner-loop to approximately invert the Hessian. In practice, we use 10 CG iterations (5 iterations also works well). Since the loss surface is highly non-convex (let alone quadratic), we add damping term to stabilize the training. Specifically, we follow Levenberg-Marquardt style heuristic adopted in Martens (2010). For both generator and discriminator, we use learning rate 0.0002. For consensus optimization (CO), we tune the gradient penalty coefficient using grid search over {0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0}." }, { "heading": "D.3 MNIST EXPERIMENT", "text": "Dataset. The dataset we used in our experiment only includes class 0 and 1. For each class, we take 4,800 training examples. Overall, we have 9,800 examples. 
The latent variables z ∈ R64 are sampled from a standard Normal distribution N (0, I). Neural Networks Architecture. Both the generator and discriminator are 2 hidden layer neural networks with 512 hidden units and Tanh activations. For each fully-connected layer, we use spectral normalization to stabilize training.\nOther Hyperparameters. For FR, we use conjugate gradient (CG) in the inner-loop to approximately invert the Hessian. In practice, we use 5 CG iterations for computational consideration. We also use the same damping scheme as MOG experiment. For both generator and discriminator, we use learning rate 0.0001. We use batch size 2,000 in our experiments." }, { "heading": "D.4 COMPUTING CORRECTION TERM ON MOG AND MNIST EXPERIMENTS", "text": "The main innovation of this work is the introduction of the correction term which encourages both players (leader and follower) to stay close to the ridge. In the main paragraph, we focused on local convergence of our algorithm and essentially the Hessian matrix is constant. However, we know that the curvature (Hessian) of the loss surface might change rapidly in practice especially when both players are parameterized by deep neural networks, making the computation of Hessian inverse highly non-trivial. Here, we summarize detailed steps in computing ηxH−1yyHyx∇xf :\n1. Computing ηxHyx∇xf by finite difference b := ∇yf(x,y)−∇yf(x− ηx∇xf,y); 2. Assigning x← x− ηx∇xf such that the Hessian H below are evaluated at the updated x; 3. Solving linear system ( H2yy + λI ) ∆y = Hyyb using conjugate gradient to get an approx-\nimation of ∆y = ( H2yy + λI )−1 Hyyb ≈ ηxH−1yyHyx∇xf ;\n4. Adapting the damping coefficient λ by computing reduction ratio\nρ = ‖b‖22 − ‖∇yf(x,y)−∇yf(x− ηx∇xf,y + ∆y)‖22\n‖b‖22 − ‖Hyy∆y − b‖22 ,\nwhich measures whether the loss surface is locally quadratic or not. We note that ρ should be exactly 1 in the quadratic case if λ = 0. We then adjust the damping with LevenbergMarquardt style heuristic (Martens, 2010) as follows:\nλ←− 1.1λ if 0 < ρ ≤ 0.5 0.9λ if ρ > 0.95 2.0λ if ρ ≤ 0 λ otherwise\n5. Setting ∆y← 0 if ρ ≤ 0 (essentially we don’t believe the approximation if ρ is negative).\nWhen momentum or preconditioning is applied, we modify∇xf above with the momentum or preconditioned version. To be specific, we apply momentum and preconditioning before the computation of the correction term, Here, we give an example of FR with momentum:[\nxt+1 yt+1 ] ← [ xt yt ] − [ I −H−1yyHyx I ] [ ηx∇xf + γmx,t −ηy∇yf + γmy,t ] [ mx,t+1 my,t+1 ] ← [ γmx,t + ηx∇xf γmy,t − ηy∇yf\n] (9) which is equivalent to Eqn.(3) in the quadratic case since it is a linear dynamical system. Nevertheless, we argue that it is more effective to use Eqn.(9) when the loss surface is highly non-quadratic." }, { "heading": "E ADDITIONAL RESULTS", "text": "" }, { "heading": "E.1 THE ROLE OF PRECONDITIONING", "text": "Following the same setting as Fig. 4, we investigate the effect of preconditioning for our algorithm. As we shown in section 4.1, FR is compatible with preconditioning with same theoretical convergence guarantee. In Fig. 4, we use diagonal preconditioning for accelerating the training. Here, we report the results of FR without preconditioning in Fig. 9. For fair comparison, we also tune the learning rate for vanilla FR and the optimal learning rate is 0.05. Our first observation is that vanilla FR does converge with 500,000 iterations which is consistent with our theoretical results. 
Particularly, the discriminator is being fooled at the end of training and the gradient vanishes. Our second observation is that it takes much longer to converge, which can be seen from the comparison between the second column (preconditioned version) and the third column. With the same time budget (50,000 iterations), preconditioned FR already converges as seen from the gradient norm curves while the vanilla FR is far from converged." }, { "heading": "E.2 THE ROLE OF MOMENTUM", "text": "In this subsection, we discuss the effect of momentum in our algorithm. We first consider the following quadratic problem:\nf(x,y) = −0.45x21 − 0.5x22 − 0.5y21 − 0.05y22 + x1y2 + x2y2. In this problem, (0,0) is a local (and global) minimax. We run FR with learning rate η = 0.2 and momentum values γ ∈ {0.0, 0.5, 0.8}, and observe how fast the iterates approach the origin. We also compare FR with gradient descent-ascent in this problem. Note that when learning rate ratio (ratio of the follower’s learning rate to the leader’s learning rate) is 1, GDA diverges. We use\na grid search for the follower’s learning rate ηy ∈ {0.1, 0.2, 0.4, 0.8, 1.6} and learning rate ratio c ∈ {5, 10, 20, 40, 80}. We experiment with momentum γ ∈ {0.0,±0.1,±0.2,±0.4,±0.8}. The best result for GDA without momentum is achieved by ηy = 0.8, c = 20; the best result for GDA with momentum is achieved by ηy = 1.6, c = 40 and γ = 0.2. The results are plotted in Fig. 10.\nWe can see that momentum speeds up FR dramatically. In contrast, GDA with momentum does not improve much over GDA without momentum. Moreover, under large momentum values (i.e. γ = 0.8), GDA diverges even when using very large learning rate ratios.\nWe further test the acceleration effect of momentum on the mixture of Gaussian benchmark. Keeping all other hyperparameters the same as those used in Fig. 4, we conduct experiments with momentum coefficient γ ∈ {0.8, 0.9}. As shown in the gradient norm plots of Fig. 11, FR with large positive momentum coefficient converges faster than the one with zero momentum (the second column). Particularly, FR is able to converge within 10,000 iterations with γ = 0.9, yielding roughly a factor of 3 improvement in terms of convergence." }, { "heading": "E.3 SPECTRUM FOR GAN MODEL", "text": "As we claimed in Section 6.2.1, all eigenvalues of Hxx − HxyH−1yyHyx are nonnegative while all eigenvalues of Hyy are non-positive. Here we plot all eigenvalues of them in log scale. To be noted, we plot the eigenvalues for −Hyy for convenience. As expected, the Hessian matrix for the\ndiscriminator is negative semi-definite while the Schur compliment is positive semi-definite." } ]
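For concreteness, the correction-term recipe of Appendix D.4 (steps 1–3) can be sketched in NumPy as follows. This is a minimal illustration, not the authors' code: `grad_x` and `grad_y` are assumed user-supplied oracles returning the partial gradients of f, Hessian-vector products are approximated by finite differences, and the adaptive damping of steps 4–5 is omitted (λ is kept fixed).

```python
# A minimal NumPy sketch of steps 1-3 of the correction-term recipe above.
# `grad_x` and `grad_y` are assumed user-supplied oracles for the partial
# gradients of f; the adaptive damping (steps 4-5) is omitted, lam is fixed.
import numpy as np

def hvp_yy(grad_y, x, y, v, eps=1e-5):
    # Central finite-difference approximation of the product H_yy v.
    return (grad_y(x, y + eps * v) - grad_y(x, y - eps * v)) / (2.0 * eps)

def conjugate_gradient(matvec, b, iters=10, tol=1e-10):
    # Plain conjugate gradient for A z = b with A given implicitly by `matvec`.
    z = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap + 1e-16)
        z += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return z

def fr_correction(grad_x, grad_y, x, y, eta_x, lam=1.0, cg_iters=10):
    gx = grad_x(x, y)
    # Step 1: b ~= eta_x * H_yx grad_x f, via a finite difference of grad_y.
    b = grad_y(x, y) - grad_y(x - eta_x * gx, y)
    # Step 2: evaluate subsequent Hessian products at the updated leader point.
    x_new = x - eta_x * gx
    Hyy = lambda v: hvp_yy(grad_y, x_new, y, v)
    # Step 3: solve (H_yy^2 + lam I) dy = H_yy b by CG, a damped surrogate for
    # dy = H_yy^{-1} b ~= eta_x * H_yy^{-1} H_yx grad_x f.
    dy = conjugate_gradient(lambda v: Hyy(Hyy(v)) + lam * v, Hyy(b), cg_iters)
    # The caller combines dy with the follower's own gradient step as in FR.
    return x_new, dy
```

The damped system (H²yy + λI)∆y = Hyy b is used instead of a direct inversion of Hyy because it stays well-posed when Hyy is near-singular, mirroring the Levenberg-Marquardt style heuristic described above.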
2020
ON SOLVING MINIMAX OPTIMIZATION LOCALLY: A FOLLOW-THE-RIDGE APPROACH
SP:72f151a2ffa8c63b2d7740ba2d2074ca6125c3ba
[ "Authors analyse curvature corrected optimization methods in the context of deep learning. They build their analysis on Saxe et.al.s work. They show that curvature corrected methods preserve properties of SGD. They also show the disadvantages of layer restricted approximations. They show the importance of time scales in optimization. The paper looks to deep learning from a dynamical systems perspective and hence their experiments are fitting to this framework.", "In this manuscript, the authors analyze the dynamics of training deep linear neural networks under a generalized family of natural gradient methods that apply curvature corrections. They first show that the learning trajectory (direction of singular mode dynamics) in natural gradient descent follows the same path as gradient descent, while only accelerating the temporal dynamics along the path. Moreover, the authors show that the learning trajectory in layer-restricted approximations of natural gradient descent can significantly differ from the true natural gradient. Also, the authors proposed a fractional natural gradient that applies partial curvature correction which in addition to faster convergence, neutralizes vanishing/exploding gradient problems. " ]
Deep neural networks exhibit complex learning dynamics due to their highly non-convex loss landscapes. Second order approaches, such as natural gradient descent, mitigate such problems by neutralizing the effect of potentially ill-conditioned curvature, yet it is largely unknown how the current theory of deep learning generalizes beyond gradient descent to these higher order learning rules. To answer this question, we derive exact solutions to the learning dynamics of deep linear networks under a spectrum of curvature-corrected learning rules. Our analysis reveals that curvature-corrected learning preserves a core feature of gradient descent, a conservation law, such that the learning trajectory follows precisely the same path in the underlying manifold as gradient descent, only accelerating the temporal dynamics along the path. We also show that layer-restricted approximations of natural gradient, which are widely used in most second order methods (e.g. K-FAC), can significantly distort the learning trajectory into highly diverging dynamics that differ from the true natural gradient, which may lead to undesirable network properties. We also introduce fractional natural gradient, which applies partial curvature correction, and show that it provides most of the benefit of full curvature correction in terms of convergence speed, with the additional benefits of superior numerical stability and of neutralizing vanishing/exploding gradient problems, which hold true also in layer-restricted approximations.
[]
[ { "authors": [ "Madhu S Advani", "Andrew M Saxe" ], "title": "High-dimensional dynamics of generalization error in neural networks", "venue": "arXiv preprint arXiv:1710.03667,", "year": 2017 }, { "authors": [ "Shun-Ichi Amari" ], "title": "Natural gradient works efficiently in learning", "venue": "Neural computation,", "year": 1998 }, { "authors": [ "Sanjeev Arora", "Nadav Cohen", "Noah Golowich", "Wei Hu" ], "title": "A convergence analysis of gradient descent for deep linear neural networks", "venue": "arXiv preprint arXiv:1810.02281,", "year": 2018 }, { "authors": [ "Sanjeev Arora", "Nadav Cohen", "Elad Hazan" ], "title": "On the optimization of deep networks: Implicit acceleration by overparameterization", "venue": "arXiv preprint arXiv:1802.06509,", "year": 2018 }, { "authors": [ "Sanjeev Arora", "Nadav Cohen", "Wei Hu", "Yuping Luo" ], "title": "Implicit regularization in deep matrix factorization", "venue": "arXiv preprint arXiv:1905.13655,", "year": 2019 }, { "authors": [ "Jimmy Ba", "Roger Grosse", "James Martens" ], "title": "Distributed second-order optimization using kronecker-factored approximations", "venue": null, "year": 2016 }, { "authors": [ "Peter L Bartlett", "David P Helmbold", "Philip M Long" ], "title": "Gradient descent with identity initialization efficiently learns positive-definite linear transformations by deep residual networks", "venue": "Neural computation,", "year": 2019 }, { "authors": [ "Alberto Bernacchia", "Mate Lengyel", "Guillaume Hennequin" ], "title": "Exact natural gradient in deep linear networks and its application to the nonlinear case", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Aleksandar Botev", "Hippolyt Ritter", "David Barber" ], "title": "Practical gauss-newton optimisation for deep learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Yann N Dauphin", "Razvan Pascanu", "Caglar Gulcehre", "Kyunghyun Cho", "Surya Ganguli", "Yoshua Bengio" ], "title": "Identifying and attacking the saddle point problem in high-dimensional non-convex optimization", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Simon S Du", "Wei Hu" ], "title": "Width provably matters in optimization for deep linear neural networks", "venue": "arXiv preprint arXiv:1901.08572,", "year": 2019 }, { "authors": [ "Simon S Du", "Wei Hu", "Jason D Lee" ], "title": "Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Ian Goodfellow", "Yoshua Bengio", "Aaron Courville" ], "title": "Deep Learning", "venue": null, "year": 2016 }, { "authors": [ "Roger Grosse", "James Martens" ], "title": "A kronecker-factored approximate fisher matrix for convolution layers", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Suriya Gunasekar", "Blake E Woodworth", "Srinadh Bhojanapalli", "Behnam Neyshabur", "Nati Srebro" ], "title": "Implicit regularization in matrix factorization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { 
"authors": [ "Tom Heskes" ], "title": "On “natural” learning and pruning in multilayered perceptrons", "venue": "Neural Computation,", "year": 2000 }, { "authors": [ "Andrew K Lampinen", "Surya Ganguli" ], "title": "An analytic theory of generalization dynamics and transfer learning in deep linear networks", "venue": "arXiv preprint arXiv:1809.10374,", "year": 2018 }, { "authors": [ "Jason D Lee", "Max Simchowitz", "Michael I Jordan", "Benjamin Recht" ], "title": "Gradient descent only converges to minimizers", "venue": "In Conference on learning theory,", "year": 2016 }, { "authors": [ "James Martens" ], "title": "Deep learning via hessian-free optimization", "venue": "In ICML,", "year": 2010 }, { "authors": [ "James Martens" ], "title": "New insights and perspectives on the natural gradient method", "venue": "arXiv preprint arXiv:1412.1193,", "year": 2014 }, { "authors": [ "James Martens", "Roger Grosse" ], "title": "Optimizing neural networks with kronecker-factored approximate curvature", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "James Martens", "Jimmy Ba", "Matt Johnson" ], "title": "Kronecker-factored curvature approximations for recurrent neural networks. 2018", "venue": null, "year": 2018 }, { "authors": [ "Kazuki Osawa", "Yohei Tsuji", "Yuichiro Ueno", "Akira Naruse", "Rio Yokota", "Satoshi Matsuoka" ], "title": "Large-scale distributed second-order optimization using kronecker-factored approximate curvature for deep convolutional neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Razvan Pascanu", "Yoshua Bengio" ], "title": "Revisiting natural gradient for deep networks", "venue": "arXiv preprint arXiv:1301.3584,", "year": 2013 }, { "authors": [ "Nicolas L Roux", "Pierre-Antoine Manzagol", "Yoshua Bengio" ], "title": "Topmoumoute online natural gradient algorithm. In Advances in neural information processing", "venue": null, "year": 2008 }, { "authors": [ "Andrew M Saxe", "James L McClelland", "Surya Ganguli" ], "title": "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks", "venue": "arXiv preprint arXiv:1312.6120,", "year": 2013 }, { "authors": [ "Ohad Shamir" ], "title": "Are resnets provably better than linear predictors", "venue": "In Advances in neural information processing systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Difficulty in training deep neural networks arises from the fact that the network’s input-output map fθ(·) is nonlinearly related to its parameters θ. This causes non-convex loss landscape with proliferation of saddle-points and poorly-conditioned curvature where gradient-based first order optimization methods perform poorly (Martens, 2010; Dauphin et al., 2014). Second order methods, such as natural gradient descent (Amari, 1998), compensate for the effect of curvature by using the distance metric intrinsic to the space of input-output maps to define the update steps (Pascanu & Bengio, 2013; Martens, 2014; Bernacchia et al., 2018), rather than the parameter space. Recent advancements led to approximate implementations of these methods that prove efficient for practical scale applications (Ba et al., 2016; Grosse & Martens, 2016; Martens et al., 2018; Osawa et al., 2019).\nDespite their practical effectiveness, however, the exact nature of such curvature-corrected learning process remains largely unknown. Do curvature-corrected learning methods simply accelerate convergences towards the same minimum solutions as gradient descent, or do they impose implicit bias toward qualitatively different solutions?\nAs a first step toward establishing theoretical understanding of these questions, we analyze the exact learning dynamics of deep linear networks under a spectrum of curvature-corrected update rules. Deep linear networks provide an excellent mathematical framework for developing insightful theoretical understanding of the complex inner workings of deep nonlinear networks (Goodfellow et al., 2016). Despite their simplicity, deep linear networks capture the essential nonlinear relationship between network’s input-output maps and their parameters, and exhibit comparable learning behavior to their nonlinear counterparts that can be exactly solved for rigorous analysis. Indeed, many recent works analyzed the learning trajectories of deep linear networks under gradient descent to compute the convergence rate under various initial conditions (Arora et al., 2018a;b; Bartlett et al., 2019; Du & Hu, 2019), revealed decoupled modes of convergence dynamics to explain the origin of multiple\nstage-like loss profiles (Saxe et al., 2013), and showed the implicit bias for regularization (Du et al., 2018; Arora et al., 2019) and resistance to overfitting (Advani & Saxe, 2017; Lampinen & Ganguli, 2018; Poggio et al., 2018). Yet, it is uncertain whether these convergence properties generally apply for update rules beyond gradient descent.\nOur contribution The main results are summarized as follows.\n1. We derive a generalized conservation law that describes the optimization paths of network parameters under gradient descent as well as curvature-corrected update rules. Consequently, curvature correction only affects the speed of convergence without affecting other qualitative properties of parameter update process.\n2. There is a trade-off between map dynamics and parameter dynamics. The full curvature correction effect of natural gradient descent (NGD) completely linearizes the map learning dynamics of deep networks, equivalent to that of shallow networks. Such complete linearization, however, sacrifices stability of parameter update dynamics to explode when gradient vanishes and vice versa.\n3. 
We introduce a regularized version of NGD that partially corrects for the effect of curvature, called √ NGD, which facilitates the parameter update dynamics by eliminating the\nvanishing/exploding update problems. This makes the map dynamics slightly nonlinear, but no more so than that of single hidden layer networks under gradient descent.\n4. NGD makes the learning process prone to overfitting by simultaneously learning both the signal and the noise dimensions of data, whereas √ NGD partially retains gradient descent’s\nresistance to overfitting by separating the time-scales between the signal and the noise dimensions.\n5. The widely-used block-diagonal approximation of NGD breaches the aforementioned conservation law, resulting in highly divergent parameter update dynamics, which breaks the weight balance across layers. In contrast, block-diagonalization of √ NGD preserves sta-\nbility of parameter update dynamics, yielding efficient and stable learning algorithms." }, { "heading": "2 SETUP AND NOTATIONS", "text": "Consider a depth d network that consists of an input layer, d− 1 hidden layers, an output layer, and weight matrices w ≡ {wi}di=1 that connect the adjacent layers. The network’s input-output map is w̄ ≡ ∏d i=1 wi = wd · · ·w1, such that fw(x) = w̄x. The network learns the input-output statistics of a dataset D = {xµ, yµ}Pµ=1 by minimizing the squared-error loss:\nL(w) = 1\n2 ED[‖w̄x− y‖2] = Tr\n[ 1\n2 (w̄ − w̄∗)Σx(w̄ − w̄∗)ᵀ\n] + const,\nwhere ED is the expectation over the dataset D, Σx ≡ ED[xxᵀ] is the input correlations, and w̄∗ ≡ ED[yxᵀ] Σ−1x . Neglecting the constant term, the loss function is expressed as\nL(w) = Tr [ 1\n2 ∆Σx∆\nᵀ ] . (∆ ≡ w̄ − w̄∗) (1)\nwhere ∆ denotes the displacement between w̄ and w̄∗.\nShallow networks (d = 1, w̄ = w1) exhibit linear learning dynamics under gradient descent, whose convergence rates scale with eigenvalues of Σx. In this case, curvature correction has the well-understood effect of normalizing the convergence rates, which is also achievable by simple pre-whitening of input correlation. Instead, we are interested in the less-understood effect of how curvature correction facilitates the complex nonlinear dynamics of deep networks (d ≥ 2). Therefore, we consider pre-whitened input distribution Σx = I to isolate the nonlinear effect of curvature correction, but this condition is not critical for the analysis.\nGradient and Hessian+ We use bold symbols to collectively represent network parameters and derivatives in array form. For example, ẇ ≡ [ ẇ1 ẇ2 ] and g ≡ [ ∂L ∂w1 ∂L ∂w2 ] = [ wᵀ2 ∆ ∆wᵀ1 ] represent the continuous-time weight update and the gradient of a depth d = 2 network. Hessian is fully characterized by its operation on weight update, which, by definition, produces gradient update:\nHẇ = ġ =\n[ wᵀ2 ∆̇ + ẇ ᵀ 2 ∆\n∆̇wᵀ1 + ∆ẇ ᵀ 1\n] . (∆̇ = ˙̄w = w2ẇ1 + ẇ2w1) (2)\nHowever, true Hessian-based methods (e.g. Newton-Raphson method) can converge to any extrema types. To guarantee convergence to (local) minimum solutions, natural gradient methods use positive semi-definite (PSD) approximations of Hessian (e.g. Fisher matrix (Amari, 1998; Heskes, 2000; Martens & Grosse, 2015; Bernacchia et al., 2018), Generalized-Gauss-Newton matrix (Martens, 2014; Botev et al., 2017; Roux et al., 2008)1), which correspond to\nH+ẇ =\n[ wᵀ2 ∆̇\n∆̇wᵀ1\n] . (3)\nThis operation is indeed PSD, since ẇ ·H+ẇ = Tr[ẇᵀ1w ᵀ 2 ∆̇ + ẇ ᵀ 2 ∆̇w ᵀ 1 ] = Tr[∆̇∆̇ ᵀ] ≥ 0, where the dot-product denotes a · b ≡ ∑d i=1 Tr[aib ᵀ i ]. 
We refer to this operation as Hessian+.\nNull-space and Conservation laws Deep linear networks exhibit inherent symmetries that their input-output map w̄ is invariant under transformations that multiply arbitrary matrix m to one layer and its inverse to the next layer, i.e. [ w1 w2 ] → [ mw1 w2m −1 ] , ∀m. ẇnull ≡ [ mw1 −w2m ] are the equivalent\ncontinuous-time transformations that yield the invariance ∆̇ = ˙̄w = w2mw1 − w2mw1 = 0, ∀m.\nThese transformations form the null-space of H+, since ẇnull ·H+ẇnull = Tr[∆̇∆̇ᵀ] = 0, which is orthogonal to gradient, since ẇnull · g = Tr[∆∆̇ᵀ] = 0. Also orthogonal to the null-space is natural gradient, since ẇnull ·H†+g = g ·H†+ẇnull = 0, where H†+ denotes Moore-Penrose pseudo-inverse. These continuous symmetries imply the following, self-explanatory theorem (Noether’s theorem):\nTheorem 1 All update rules ẇ that are orthogonal to the null-space, i.e.\nẇ · ẇnull = d∑ i=1 Tr[(wiẇ ᵀ i − ẇ ᵀ i+1wi+1)mi] = 0, ∀mi\nexhibit the following conservation law\nd/dt (wiw ᵀ i − w ᵀ i+1wi+1) = 0, ∀i (4)\nThis result was previously only known for gradient descent dynamics (Arora et al., 2018b; Du et al., 2018), which is generalized here." }, { "heading": "3 LEARNING DYNAMICS", "text": "In this section, we analyze the learning dynamics of the network parameters w (Section 3.1) and the update dynamics of the input-output map w̄ (Section 3.2) under a spectrum of curvature-corrected update rules. We then analyze how block-diagonal approximation modifies the curvature-corrected dynamics (Section 3.3)." }, { "heading": "3.1 PARAMETER DYNAMICS", "text": "We follow the singular value decomposition (SVD)-based analysis of Saxe et al. (2013); Advani & Saxe (2017); Lampinen & Ganguli (2018), by considering network weights that are initialized to\n1Fisher matrix and Generalized-Gauss-Newton matrix are equivalent in many cases, including the least squares problem considered here (Pascanu & Bengio, 2013; Martens, 2014).\nhave their map’s singular vectors aligned with those of w̄∗.2Under such initialization, the update dynamics of weight matrices simplifies to their singular value dynamics, with their singular vectors remain unchanging. This simplified case admits exact analytic solutions, which provide good approximation to general learning dynamics. Moreover, this aligned singular vector condition is automatically satisfied for networks initialized with small random weights Saxe et al. (2013); Advani & Saxe (2017).\nSteepest gradient descent (SGD) Under SGD update, deep networks’ weight parameters exhibit coupled nonlinear dynamics: (d = 2 example, η: learning rate)\nẇ + ηg =\n[ ẇ1 + η w ᵀ 2 ∆\nẇ2 + η∆w ᵀ 1\n] = 0. (5)\nThe SVD analysis decomposes eq (5) to individual singular mode dynamics. The dynamics of one singular mode is described by3(See S.I.)\nσ̇i + η σ∆ ji = 0 (σ∆ = σ̄ − σ̄∗, σ̄ = d∏ i=1 σi) (6)\nwhere σi, σ̄∗, σ̄, σ∆ are the singular values of wi, w̄∗, w̄, ∆, and ji ≡ ∂σ̄/∂σi = σ̄/σi denotes the coupling between the input-output map and parameters, i.e. Jacobian. Note that this singular mode dynamics follows the hyperbolic paths\nσ2i − σ2k = constant, ∀i, k (7) which is the direct consequence of the conservation law (4). 
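As a quick numerical sanity check of Theorem 1 and the hyperbolic paths (7), the following minimal NumPy sketch (ours, not from the paper) trains a depth-2 linear network with discrete-time gradient descent and monitors the conserved quantity w1w1ᵀ − w2ᵀw2; in discrete time the quantity is conserved only up to a small discretization error of order η² per step.

```python
# A small NumPy check (ours, not the paper's code) of the conservation law (4)
# and the hyperbolic paths (7) for a depth-2 linear network under gradient
# descent with whitened inputs (Sigma_x = I).
import numpy as np

rng = np.random.default_rng(0)
n = 4
W_star = rng.normal(size=(n, n))        # target map w_bar*
w1 = 0.1 * rng.normal(size=(n, n))      # small random initialization
w2 = 0.1 * rng.normal(size=(n, n))
eta = 0.01

Q0 = w1 @ w1.T - w2.T @ w2              # conserved quantity w1 w1^T - w2^T w2
for _ in range(2000):
    delta = w2 @ w1 - W_star            # displacement of the input-output map
    g1 = w2.T @ delta                   # gradient of L = ||Delta||_F^2 / 2
    g2 = delta @ w1.T
    w1 = w1 - eta * g1
    w2 = w2 - eta * g2

drift = np.linalg.norm(w1 @ w1.T - w2.T @ w2 - Q0)
print(f"conservation-law drift: {drift:.2e}")  # stays near 0 (O(eta^2)/step)
```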
The update speed ‖σ̇‖ is proportional to the displacement |σ_∆| and the coupling strength ‖j‖:

$$\|\dot\sigma\| \propto |\sigma_\Delta|\,\|j\|, \qquad \Big(\|\dot\sigma\|^2 \equiv \textstyle\sum_{i=1}^d \dot\sigma_i^2,\;\; \|j\|^2 \equiv \textstyle\sum_{i=1}^d j_i^2\Big) \qquad (8)$$

which vanishes for networks with small coupling strength and explodes for large coupling strength.

Natural gradient descent (NGD) NGD finds the minimum-norm update solution (min ẇ · ẇ) subject to the constraint (i.e. the Moore-Penrose pseudo-inverse solution)

$$\mathbf{H}_+\dot{\mathbf{w}} + \eta\,\mathbf{g} = \begin{bmatrix} w_2^\intercal(\dot\Delta + \eta\Delta) \\ (\dot\Delta + \eta\Delta)\,w_1^\intercal \end{bmatrix} = 0, \qquad (9)$$

which can be solved using Lagrange multipliers to yield (See S.I.)

$$\begin{bmatrix} \dot w_1 + \eta\, w_2^\intercal \Lambda \\ \dot w_2 + \eta\, \Lambda\, w_1^\intercal \end{bmatrix} = 0, \qquad (10)$$

where Λ satisfies

$$w_2^\intercal S(\Lambda) = S(\Lambda)\, w_1^\intercal = 0, \qquad \big(S(\Lambda) \equiv (w_2 w_2^\intercal)\Lambda + \Lambda(w_1^\intercal w_1) - \Delta\big) \qquad (11)$$

Remarkably, the only change from the SGD update (5) is replacing ∆ with Λ as the main drive of the dynamics eq (10), which preserves orthogonality to the null-space and hence the conservation law (4).⁴ The singular mode dynamics of the NGD update eq (10) is⁵

$$\dot\sigma_i + \eta\,\sigma_\Delta\,\frac{j_i}{\|j\|^2} = 0, \qquad (12)$$

where the σ_∆ of the SGD dynamics eq (6) is replaced by σ_Λ = σ_∆/‖j‖², the singular value of Λ (See S.I.). The NGD dynamics eq (12) follows the same hyperbolic paths as SGD eq (7), but with the modified update speed

$$\|\dot\sigma\| \propto \frac{|\sigma_\Delta|}{\|j\|}, \qquad (13)$$

which scales inversely with ‖j‖. Therefore, NGD's update speed explodes for small coupling strength, reciprocal to SGD's vanishing-speed phenomenon.

Footnote 2: Given the SVD of the weight matrices w_i = L_i D_i R_i^⊺ and w̄_* = L_* D_* R_*^⊺, where the D are the diagonal singular value matrices and L/R are the left/right singular vector matrices, the aligned singular vector condition assumes R_1 = R_*, L_d = L_* and R_{i+1} = L_i for all layers 1 ≤ i ≤ d − 1.

Footnote 3: The dynamics eq (6), (12) apply to all active singular modes. Inactive modes that have σ̄ = 0 stay frozen. The number of active modes is determined by the bottleneck size, i.e. the narrowest width of the network.

Footnote 4: The Moore-Penrose pseudo-inverse solution is guaranteed to be orthogonal to the null-space, since nonzero null-space components only increase the solution's norm without affecting the constraint eq (9).

Fractional Natural Gradient Descent (q√NGD) The above results can be generalized to a spectrum of update rules that apply partial curvature corrections, described by

$$\mathbf{H}_+^{1/q}\dot{\mathbf{w}} + \eta\,\mathbf{g} = 0,$$

where H_+^{1/q} is a fractional power of Hessian+ (q ≥ 1). The singular mode dynamics of q√NGD is

$$\dot\sigma_i + \eta\,\sigma_\Delta\,\frac{j_i}{\|j\|^{2/q}} = 0, \qquad (14)$$

which interpolates between NGD (q = 1) and SGD (q → ∞). Eq (14) follows the same hyperbolic paths as SGD eq (7), but with the modified update speed

$$\|\dot\sigma\| \propto |\sigma_\Delta|\,\|j\|^{1-2/q}. \qquad (15)$$

Note that for q = 2, termed √NGD, the update speed becomes independent of the coupling strength,

$$\|\dot\sigma\| = \eta\,|\sigma_\Delta|, \qquad (16)$$

thus eliminating the vanishing/exploding update speed problems of SGD/NGD (See Fig 1C).

Relation to Regularized NGD An alternative interpolation solves (H₊ẇ + ηg) + ε(ẇ + ηg) = 0 (ε ≥ 0), which yields the regularized (or damped) inverse

$$\dot{\mathbf{w}} = -\eta\,(\epsilon + 1)\,(\epsilon I + \mathbf{H}_+)^{-1}\,\mathbf{g},^6 \qquad (17)$$

similar to Levenberg-Marquardt damping (less the (ε + 1) term), whose singular mode dynamics is

$$\dot\sigma_i + \eta\,\sigma_\Delta\,\frac{j_i}{\|j\|}\left(\frac{a\|j\| + 1}{a + \|j\|}\right) = 0, \qquad (a \equiv \epsilon/\|j\|) \qquad (18)$$

where the ratio a ≡ ε/‖j‖ describes the effective degree of interpolation between NGD (a → 0) and SGD (a → ∞). Note that a should be large enough to provide sufficient damping, but not too large to nullify the effect of curvature correction, which is difficult to simultaneously satisfy across all singular modes with fixed ε.

Footnote 6: This expression reduces to SGD in the limit ε → ∞, which differs from the usual regularized inverse ẇ = −η(εI + H₊)⁻¹g, which reduces to 0.
√ NGD can be considered as providing ideally and adaptively tuned regularization (a = 1) for all singular modes, where the regularization is most effective." }, { "heading": "3.2 MAP DYNAMICS", "text": "The parameter update ultimately drives the learning dynamics of the input-output map, via Jacobian\n˙̄σ = d∑ i=1 σ̇i ji, (19)\nwhich yields the following map learning dynamics under q √ NGD update (14)\n˙̄σ = −η(σ̄ − σ̄∗)‖j‖2(1−1/q). (20) In general, eq (20) does not admit closed-form solutions due to the coupling strength term, with the exception of NGD (q = 1). As shown by the vector field in Figure 1, however, the coupling strength changes in a streotypical manner along the learning trajectories. Therefore, the general characteristics of map dynamics can be appreciated from the representative case of balanced weights: σi = σ̄ 1/d ∀i, or in terms of the conserved quantities, wiwᵀi − w ᵀ i+1wi+1 = 0. Note that this balanced weight condition is automatically approximately satisfied if the networks are initialized with small random weights.\nUnder the balanced weight condition, eq (20) simplifies to\n˙̄σ = −η̄ (σ̄ − σ̄∗) σ̄p ( p ≡ 2(d− 1)(q − 1)\nd q ) (21)\nwhere η̄ ≡ η d1−1/q is the depth-calibrated learning rate, and p represents the combined effect of depth and curvature correction that determines the stiffness, or degree of nonlinearity, of map\ndynamics. Figure 2 shows the following notable closed-form solutions, as well as p = 2 case: σ̄(t) = σ̄∗(1− e−η̄t) ( p = 0) σ̄(t) = σ̄∗ tanh 2(η̄ √ σ̄∗t/2) ( p = 0.5)\nσ̄(t) = σ̄∗\n1 + (σ̄∗/σ̄(0) − 1)e−η̄σ̄∗t ( p = 1 )\nwhere zero initial condition σ̄(0) = 0 is assumed for p < 1 cases.\nNGD update (q = 1, p = 0) Under NGD update, the map dynamics exhibits fully linearized convergence dynamics with a constant time-scale η−1 for all depth d and data mode-strength σ̄∗. Its learning curves exhibit finite growth rate near zero σ̄(t) ≈ η̄ σ̄∗t, which entails exploding parameter update speed as the coupling strength approaches zero. Therefore, the full curvature correction of NGD sacrifices stability of parameter dynamics in order to perfectly cancel out all nonlinearities of map dynamics.\n√ NGD update (q = 2, p = 1 − 1/d) For √ NGD update, the stiffness ranges from p = 0.5 for single hidden layer networks to p→ 1 in infinite depth limit. Its learning curves exhibit polynomial growth near zero, σ̄(t) ∝ t1/(1−p), which takes finite time to escape from zero initial condition. even though the initial growth rate vanishes with the coupling strength. The overall time-scale of learning decreases with mode strength as σ̄−p∗ , such that stronger singular modes (large σ̄∗) learn faster than weaker modes.\nSGD update (q → ∞, p = 2 − 2/d) Under SGD update, the stiffness ranges from p = 1 for single hidden layer networks to p → 2 in infinite depth limit. Its learning curves exhibit sigmoidal shape that take infinite time to escape from the saddle point at zero initial condition: the escape time diverges as O(− log σ̄(0)) for p = 1 and O(σ̄1−p(0) ) for p > 1. 
Also, the increased p causes greater separation of time-scales (η̄σ̄p∗)−1 across singular modes, which results in stage-like transitions over the course of training, with each singular mode making sudden transition from slow learning to rapid convergence (Saxe et al., 2013).\nEffective Depth Network depth d and curvature correction q interact in a symmetric manner, which can be intuitively understood by representing stiffness in terms of the corresponding network depth under SGD update, called the effective depth:\ndeff = dq\nd+ q − 1 , (22)\nwhich approaches the actual depth deff → d in the SGD limit (q → ∞), and similarly, approaches deff → q in the limit of infinite depth (d → ∞). Therefore, q √ NGD reduces the network’s effective\ndepth to be strictly less than q. For √ NGD, this upper-limit is 2, i.e. single hidden layer network.\nTo summarize, curvature correction lowers the nonlinearity/stiffness of map dynamics of deep networks by reducing their effective depth. The full curvature correction effect of NGD perfectly cancels out all nonlinearities of map dynamics to exhibit linear convergence, equivalent to shallow network learning, but it sacrifices stability of parameter dynamics to explode at the saddle point. In contrast, partial curvature correction of √ NGD directly facilitates the parameter update dynamics, which eliminates the vanishing/exploding update problem, and it makes the map dynamics only slightly nonlinear, but no more so than that of single hidden layer networks under gradient descent." }, { "heading": "3.3 EFFECT OF LAYER-RESTRICTED APPROXIMATION", "text": "Block-diagonal NGD (NGD-d) In most practical deep learning applications, numerically estimating and inverting Hessian+ becomes prohibitively expensive. Instead, most second-order methods approximate NGD by applying layer-restricted curvature corrections, ignoring the off-blockdiagonal Hessian+ terms across different layers (Martens & Grosse, 2015; Ba et al., 2016; Grosse & Martens, 2016; Martens et al., 2018; Bernacchia et al., 2018): (d = 2 example)[\nH1ẇ1 + η1g1 H2ẇ2 + η2g2\n] = [ wᵀ2w2ẇ1 + η1w ᵀ 2 ∆\nẇ2w1w ᵀ 1 + η2∆w ᵀ 1\n] = 0, (23)\nwhich nevertheless satisfies the NGD constraint (9) if ∑d i=1 ηi = η. Hi denotes the block-diagonal Hessian+ term of layer i. Singular mode dynamics of eq (23) is (with ηi = η/d)\nσ̇i + η σ∆ d ji = 0, (24)\nwhere the layer-restricted factor j2i substitutes the full curvature correction factor ‖j‖2 of NGD (12). This block-diagonalization significantly modifies the parameter update dynamics by adding nonzero null space components. Instead of the hyperbolic paths (7), eq (24) follows radially diverging paths that conserve σi/σk as constants of motion. Consequently, NGD-d update exhibits larger parameter update speed than NGD7, and converges to less efficient, large norm solutions that are highly sensitive to initial conditions and perturbations (Fig 1D, red line). Despite the vastly different parameter dynamics, however, NGD-d exhibits identical map learning dynamics as NGD ˙̄σ = −η (σ̄ − σ̄∗) (Fig 1BD, red dots), because the input-output map is invariant under null-space transformations.\nBlock-diagonal √ NGD ( √\nNGD-d) More generally, block-diagonalized fractional NGD H\n1/q i ẇi + η gi/d 1/q = 0 yields\nσ̇i + η σ∆ d1/q j 1−2/q i = 0, (25)\nwhich conserves σ2(1−1/q)i − σ 2(1−1/q) k as constants of motion. 
For q = 2, called √NGD-d, the singular mode dynamics

$$\dot\sigma_i + \eta\,\frac{\sigma_\Delta}{\sqrt{d}}\,\mathrm{sign}(j_i) = 0, \qquad (26)$$

follows non-diverging, parallel paths that conserve |σ_i| − |σ_k|, with a parameter update speed identical to √NGD's ‖σ̇‖ = η|σ_∆| (Fig 1E). Therefore, √NGD-d yields neutrally-stable update dynamics that neutralize the vanishing/exploding update speed problems." }, { "heading": "4 IMPLICIT BIAS FOR REGULARIZATION", "text": "Recent works have shown that the learning dynamics of deep networks under the SGD update exhibit an implicit bias towards keeping the network well regularized. Here, we consider two such properties and analyze how they generalize under curvature-corrected update rules.

Weight balance Deep neural networks often exhibit redundant parameterizations, such that many configurations of parameters implement the same input-output map. One such redundancy, or symmetry, that concerns both deep linear networks and deep ReLU networks is homogeneity: multiplying a layer by a positive scalar c and dividing another layer by c does not change the input-output map. The problem is that c can be arbitrarily large or small, yielding potentially unbounded, yet valid solutions. Such unboundedness poses a major theoretical difficulty for the convergence analysis of gradient-based local optimization methods (Lee et al., 2016; Shamir, 2018).

Fortunately, the SGD update exhibits an implicit bias toward automatically balancing the norms of different layers' weight matrices. The proof directly follows from the conserved quantity of the scalar multiplication symmetry, ‖w_i‖²_Frob − ‖w_{i+1}‖²_Frob, which is a relaxed version of the aforementioned conservation law eq (4). Thus, if the weights are initially small, this difference between squared norms will remain small throughout the learning process, thereby establishing balancedness across layers (Du et al., 2018). As shown in Section 2, curvature-corrected updates (e.g. NGD and √NGD) retain orthogonality to the null-space of the symmetry and thus comply with the same conservation laws as SGD. We show numerical confirmation of this prediction in the S.I. The conservation of the squared difference of norms for homogeneous ReLU networks still requires similar numerical confirmation.

In contrast, block-diagonalized methods do not follow the same conservation law. NGD-d conserves the ratio between singular values across layers, σ_i/σ_k, which does not guarantee balancedness even with small initialization. √NGD-d, however, conserves the absolute difference of singular values across layers, |σ_i| − |σ_k|, which guarantees balancedness, at least under the condition of aligned singular vectors: that is, the ratio between the singular values approaches 1 if they grow from small initial values, while maintaining the small absolute difference. Although this does not constitute a formal proof for the general case, √NGD-d is confirmed to maintain balancedness across layers in numerical simulations (See S.I.).

Footnote 7: $\|\dot\sigma\|^2_{\text{NGD-d}} \geq \|\dot\sigma\|^2_{\text{NGD}}$ can be shown using Jensen's inequality: $\frac{1}{d}\sum_{i=1}^d \frac{1}{j_i^2} \geq \frac{d}{\sum_{i=1}^d j_i^2}$.

Low rank approximation / Generalization dynamics The learning dynamics of the input-output map under the SGD update separates the time-scales of learning across singular modes, (η̄ σ̄_*^p)⁻¹, such that singular modes with stronger data correlation are preferentially learned faster (Saxe et al., 2013).
This property yields an implicit regularization that lets deep networks efficiently extract the low-rank structure of the dataset, such as finding matrix factorizations with minimum nuclear norm (Gunasekar et al., 2017; Arora et al., 2019). It also allows deep networks to avoid overfitting via early stopping, by first learning the signal dimensions of a noisy dataset before the overfitting of the noise dimensions occurs, as long as the signal-to-noise ratio is sufficiently large (Advani & Saxe, 2017; Lampinen & Ganguli, 2018), which yields good generalization performance on unseen data. However, this approach requires the network to be trained from a small random weight initialization, where SGD suffers from the vanishing gradient problem.

In curvature-corrected cases, the learning speed of the map dynamics eq (21) scales as σ̄_*^{−p}. Under NGD, the map dynamics is perfectly linearized (p = 0), which also removes its ability to separate out the time-scales. This makes the NGD update prone to large generalization error, due to learning the noise dimensions simultaneously with the signal. In contrast, √NGD partially retains the time-scale separation in the learning dynamics, while also accelerating the parameter update dynamics near zero weights.

We test the generalization property of curvature-corrected learning rules with the student-teacher task from Lampinen & Ganguli (2018), in which the training and test datasets are generated by a teacher network y^µ = w̄_* x^µ + z^µ, where x^µ ∈ R^N is the input data, y^µ ∈ R^N is the output, w̄_* x^µ is the signal and z^µ ∈ R^N is the noise. The teacher's input-output map w̄_* ∈ R^{N×N} is assumed to have a low-rank structure (rank 3), and the student is a depth d = 4 network of constant width N = 16, whose weight matrices are initialized to have a maximum singular value of 0.05. The number of training examples {x^µ, y^µ}_{µ=1}^P is set to be equal to the effective number of parameters, P = N, which makes the learning process most susceptible to overfitting.

For the numerical calculation of NGD and √NGD, the Hessian+ blocks between layers i and k are computed as described in Bernacchia et al. (2018) (eq 42), and are then concatenated into the full Hessian+ matrix and numerically inverted (or sqrt-inverted) via eigen-decomposition. Levenberg-Marquardt damping of ε = 10⁻⁵ and update clipping are used for the numerical stability of NGD. √NGD does not require such clipping or damping terms.

Figure 3 shows the result of training. SGD exhibits stage-like transitions: it first learns the three signal modes, well separated from the onset of overfitting of the noise modes, which allows an effective early-stopping scheme. However, it suffers from long plateaus due to the vanishing gradient problem.

NGD (and NGD-d) updates learn all singular modes simultaneously, including the noise modes (see Fig 3D), which leads to high generalization error. Note that NGD's loss profile deviates from exponential decay due to the clipping. In contrast, √NGD (and √NGD-d) allows fast learning while separating the signal dimensions from the noise dimensions, achieving a test loss comparable to the SGD update, but with a fast early-stopping time comparable to the NGD update. Note that all three update rules achieve the same test loss after overfitting is complete. This is due to the shared learning path of each singular mode across the methods.
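To make the time-scale separation argument concrete, the following illustrative sketch (ours; the mode strengths, step size, and threshold are arbitrary choices, not the paper's settings) integrates the balanced-weight map dynamics of eq (21) for a strong "signal" mode and a weak "noise" mode, comparing NGD (p = 0) with √NGD at depth d = 4 (p = 1 − 1/d = 0.75).

```python
# An illustrative integration (ours; mode strengths and step size are
# arbitrary) of the balanced-weight map dynamics (21):
#   d(sigma)/dt = -eta * (sigma - s_star) * sigma**p,
# comparing NGD (p = 0) with sqrt-NGD at depth d = 4 (p = 1 - 1/d = 0.75).
import numpy as np

def integrate(s_star, p, eta=1.0, dt=1e-3, steps=20000, s0=1e-3):
    s = s0
    traj = np.empty(steps)
    for t in range(steps):
        s += -eta * (s - s_star) * s**p * dt  # forward-Euler step of eq (21)
        traj[t] = s
    return traj

signal, noise = 3.0, 0.3  # a strong "signal" mode and a weak "noise" mode
for p, name in [(0.0, "NGD"), (0.75, "sqrt-NGD, d=4")]:
    t_sig = np.argmax(integrate(signal, p) > 0.9 * signal)
    t_noise = np.argmax(integrate(noise, p) > 0.9 * noise)
    print(f"{name}: signal learned ~step {t_sig}, noise ~step {t_noise}")
# NGD converges on a single time-scale for both modes, whereas sqrt-NGD learns
# the strong mode well before the weak one, leaving a window for early
# stopping.
```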
}, { "heading": "5 CONCLUSION", "text": "To summarize our contribution, we derived a generalized conservation law that describes the optimization paths of network parameters under gradient descent as well as curvature-corrected update rules. Consequently, curvature correction only affects the speed of convergence without affecting other qualitative properties of parameter update process.\nWe revealed a trade-off between map dynamics and parameter dynamics: The full curvature correction effect of natural gradient descent (NGD) completely linearizes the map learning dynamics of deep networks, equivalent to that of shallow networks. Such complete linearization, however, sacrifices stability of parameter update dynamics to explode when gradient vanishes and vice versa. Moreover, we introduced √ NGD that partially corrects for the effect of curvature, which facilitates the parameter update dynamics by eliminating the vanishing/exploding update problems. This makes the map dynamics slightly nonlinear, but no more so than that of single hidden layer networks under gradient descent. Moreover, NGD makes the learning process prone to overfitting by simultaneously learning both the signal and the noise dimensions of data, whereas √ NGD partially retains gradient descent’s resistance to overfitting by separating the time-scales between the signal and the noise dimensions. We also showed that the widely-used block-diagonal approximation of NGD breaches the aforementioned conservation law, resulting in highly divergent parameter update dynamics, which breaks the weight balance across layers. In contrast, block-diagonalization of √\nNGD preserves stability of parameter update dynamics, yielding efficient and stable learning algorithms." }, { "heading": "4 NUMERICAL CONFIRMATION OF CONSERVATION LAW", "text": "See Figure S2. Figure S2 plots the learning trajectory of a 3 layer network, and shows the elements of the weight matrices evolving over time (w1, w2, w3). It also shows the conserved quantities w1w ᵀ 1 − w ᵀ 2w2, w2w ᵀ 2 − w ᵀ 3w3, which indeed remain constant for SGD, NGD and √ NGD, while\nit blows up for NGD-d. √\nNGD also violates the conservation law, but the weights remain balanced over time." } ]
" } ]
2019
null
SP:3d4981a5b80d3f1f2b1249fa1a310988dfc81a91
[ "This paper introduces a method to incorporate both sequence information and graph information to learn the protein representations. The idea is very straightforward. Basically, it used the embedding from OhmNet [Marinka et al, 2017] for the graph information and used the sequence information from UniRep [Ethan et al, 2019] or SeqVec [Michael et al, 2019]. It uses one experiment to show the performance of the combination of the two pieces of information.", "In this study, the authors develop a method to predict the function of proteins from their structure as well as the network of proteins with which they interact in a given tissue. The method consists in training a linear classifier on the output of two existing embedding methods, UniRep/SeqVec and OhmNet, respectively embedding the amino acid sequences and the tissue-specific protein-protein interaction networks. This method improves prediction of protein function by 19% compared to OhmNet alone." ]
Computational methods that infer the function of proteins are key to understanding life at the molecular level. In recent years, representation learning has emerged as a powerful paradigm for discovering new patterns among entities as varied as images, words, speech, and molecules. In typical representation learning, there is only one source of data or one level of abstraction at which the learned representation occurs. However, proteins can be described by their primary, secondary, tertiary, and quaternary structure, or even as nodes in protein-protein interaction networks. Given that protein function is an emergent property of all these levels of interaction, in this work we learn joint representations from both amino acid sequences and multilayer networks representing tissue-specific protein-protein interactions. We show that simple machine learning models trained on these hybrid representations outperform existing network-based methods on the task of tissue-specific protein function prediction in 13 out of 13 tissues. Furthermore, these representations outperform existing ones by 14% on average.
[]
[ { "authors": [ "Ethan C Alley", "Grigory Khimulya", "Surojit Biswas", "Mohammed AlQuraishi", "George M Church" ], "title": "Unified rational protein engineering with sequence-only deep representation learning", "venue": "bioRxiv, pp", "year": 2019 }, { "authors": [ "Alex Bateman", "Lachlan Coin", "Richard Durbin", "Robert D Finn", "Volker Hollich", "Sam Griffiths-Jones", "Ajay Khanna", "Mhairi Marshall", "Simon Moxon", "Erik LL Sonnhammer" ], "title": "The pfam protein families database", "venue": "Nucleic acids research, 32(suppl 1):D138–D141,", "year": 2004 }, { "authors": [ "Florence Corpet" ], "title": "Multiple sequence alignment with hierarchical clustering", "venue": "Nucleic acids research,", "year": 1988 }, { "authors": [ "Thomas E Creighton" ], "title": "Proteins: structures and molecular properties", "venue": null, "year": 1993 }, { "authors": [ "Robert C Edgar" ], "title": "Muscle: multiple sequence alignment with high accuracy and high throughput", "venue": "Nucleic acids research,", "year": 2004 }, { "authors": [ "Da-Fei Feng", "Russell F Doolittle" ], "title": "Progressive sequence alignment as a prerequisitetto correct phylogenetic trees", "venue": "Journal of molecular evolution,", "year": 1987 }, { "authors": [ "Aditya Grover", "Jure Leskovec" ], "title": "node2vec: Scalable feature learning for networks", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Michael Heinzinger", "Ahmed Elnaggar", "Yu Wang", "Christian Dallago", "Dmitrii Nachaev", "Florian Matthes", "Burkhard Rost" ], "title": "Modeling the language of life-deep learning protein", "venue": "sequences. bioRxiv,", "year": 2019 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Ben Krause", "Liang Lu", "Iain Murray", "Steve Renals" ], "title": "Multiplicative lstm for sequence modelling", "venue": "arXiv preprint arXiv:1609.07959,", "year": 2016 }, { "authors": [ "Stanley Letovsky", "Simon Kasif" ], "title": "Predicting protein function from protein/protein interaction data: a probabilistic approach", "venue": "Bioinformatics, 19(suppl 1):i197–i204,", "year": 2003 }, { "authors": [ "Sara Mostafavi", "Debajyoti Ray", "David Warde-Farley", "Chris Grouios", "Quaid Morris" ], "title": "Genemania: a real-time multiple association network integration algorithm for predicting gene function", "venue": "Genome biology,", "year": 2008 }, { "authors": [ "Matthew E Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "arXiv preprint arXiv:1802.05365,", "year": 2018 }, { "authors": [ "Baris E Suzek", "Hongzhan Huang", "Peter McGarvey", "Raja Mazumder", "Cathy H Wu" ], "title": "Uniref: comprehensive and non-redundant uniprot reference", "venue": "clusters. 
Bioinformatics,", "year": 2007 }, { "authors": [ "Lei Tang", "Xufei Wang", "Huan Liu" ], "title": "Scalable learning of collective behavior", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2011 }, { "authors": [ "Alexei Vazquez", "Alessandro Flammini", "Amos Maritan", "Alessandro Vespignani" ], "title": "Global protein function prediction from protein-protein interaction networks", "venue": "Nature biotechnology,", "year": 2003 }, { "authors": [ "Marinka Zitnik", "Jure Leskovec" ], "title": "Predicting multicellular function through multi-layer tissue", "venue": "networks. Bioinformatics,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Proteins can be described by their primary, secondary, tertiary, and quaternary structure or even as nodes in protein-protein interaction networks (Creighton, 1993). Some proteins with similar sequences play similar roles; others with high levels of sequence similarity can play different roles. To add further nuance, the same protein can play different roles depending on the tissue it is in and the state of that tissue. Understanding the relationship between these different levels of structure and the role that a protein plays is one of the grand challenges of biology. Recent availability of highthroughput experimental data and machine-learning based computational methods can be useful for unveiling and understanding such patterns.\nWe frame the problem of understanding the relationship between these complementary data sources and tissue-specific protein function as one of developing protein embeddings on top of which simple machine learning models can be trained to map a given protein to its tissue-specific function.\nIn this work we constructed new protein representations combining different levels of abstraction. More specifically, we constructed a 128-dimensional vector for each protein where the first 64 dimensions are derived from the amino acid sequence and the remaining 64 dimensions are obtained from embedding the protein into a tissue-specific protein-protein interaction networks. Such representations are then used to train a simple linear classifier to predict tissue-specific protein function. This approach outperforms network-based approaches which usually only use information from the protein-protein interaction network.\nThe main contribution of this paper include:\n• Approaching the problem of tissue-specific protein function prediction from the angle of representation learning using information ranging from amino acid sequence to multilayer networks including tissue-specific protein-protein interaction\n• Experimentally showing that such representations outperform network-based methods on 13 out of 13 tissues for which we perform the experiments. The best method outperforms current ones by 14% on average.\n• An ablation analysis that demonstrated that our state-of-the-art results are a result of the joint embeddings" }, { "heading": "2 RELATED WORK", "text": "Computational methods to predict the function of proteins fall into several categories. An important step of the pipeline is developing representations for proteins. Most existing methods focus on one level of biological abstraction and develop a representation specific to this level. For example, when looking at the primary structure, the first attempt to computationally predict the role of a protein is through sequence homology. That is, using a database of protein whose sequence and function is known, methods using string similarity will find the closest proteins and use heuristics to make a prediction based on such similarity. These methods use dynamic programming and hierarchical clustering to align multiple sequence to perform homology and find the distance of a given protein to multiple proteins stored in a database. (Feng & Doolittle, 1987) (Corpet, 1988) (Corpet, 1988) (Edgar, 2004)\nBeyond sequence homology, local polypeptide chains are grouped under patterns called protein domains (Bateman et al., 2004). Protein domains evolve independently of the rest of the protein chain. 
They are often thought of as evolutionary advantageous building blocks which are conserved across species and proteins. The presence of such building blocks in protein is used as a proxy to infer function and protein family. Pfam is a database of protein families that includes their annotations and multiple sequence alignments generated using hidden Markov models and has 17,929 families used to characterize unknown on the basis of motif presence.\nRecently, inspired by the methods used in natural language processing, researchers have developed character-level language models by training algorithms such as long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) networks to predict the next amino acid given the previous amino acids. Many recent works have gone into training and investigating the properties learned by such language models and found that they encode many biochemical properties and can be used to recover protein families. More specifically UniRep (Alley et al., 2019) uses a multiplicative LSTM (Krause et al., 2016) trained to perform next amino acid prediction on 24 million UniRef50 (Suzek et al., 2007) amino acid sequences. The trained model is used to generate a single fixed-length vector representation of the input sequence by globally averaging intermediate mLSTM numerical summaries. SeqVec (Heinzinger et al., 2019) works by training bi-directional language model ELMo (Peters et al., 2018) on UniRef50. While such models are useful descriptors and encoders of biochemical properties, they lack the local context needed to infer protein function.\nWhile all previously-cited methods develop representations of proteins with the basic molecular components, other methods treat proteins like social networks. Proteins rarely accomplish a function in isolation and need to bind with other proteins, in a specific tissue in a given state to accomplish a function. Using this insight, many methods describe proteins using such signals. That is, using a “guilt by association principle,” they take the perspective that the role of a protein can be inferred from understanding which other proteins it interacts with (Letovsky & Kasif, 2003) (Vazquez et al., 2003) (Mostafavi et al., 2008). Representation learning methods formalizing such principles usually take as input a protein-protein interaction network represented as a graph and use methods such as matrix decomposition (Tang et al., 2011) and node embeddings (Grover & Leskovec, 2016) to develop a vector representation grouping neighboring nodes into a similar position. However, these methods do not take into account the rich information that can be learned by examining a protein’s primary sequence. We aim to synthesize the previous approaches, and also take more contextual information about the tissues in which proteins interact. We use OhmNet (Zitnik & Leskovec, 2017) to include the tissue hierarchy and develop tissue-specific node embeddings taking into account local neighborhoods among proteins as well as local neighborhoods among tissues." }, { "heading": "3 METHODS", "text": "The main idea we present is to integrate information at different levels of the biological hierarchy into the learned representation of each protein. We used information from two sources: the amino acid sequence and the tissue-specific protein-protein interaction network. We combined these representations by concatenating them into a 128 dimensional vector and trained a linear classifier to\npredict tissue-specific protein functions in a one vs all fashion. 
That is, each classifier is a binary classifier predicting whether a given protein plays a given role in a specific tissue. We measure the area under the ROC curve for each classifier and average them to obtain a tissue-specific AUROC." }, { "heading": "3.1 AMINO ACID SEQUENCE REPRESENTATION", "text": "To represent the amino acid sequence, we used recent works such as UniRep and SeqVec, which treat the amino acids as an alphabet and the amino acid sequence as a string over that discrete alphabet. They learn representations by leveraging the millions of protein sequences available to train a machine learning model to predict the next amino acid given the previously seen amino acids. As described above, UniRep uses a multiplicative LSTM trained to perform next-amino-acid prediction on 24 million UniRef50 amino acid sequences and generates a single fixed-length vector by globally averaging intermediate mLSTM numerical summaries, while SeqVec works by training the bi-directional language model ELMo on UniRef50." }, { "heading": "3.2 TISSUE-SPECIFIC PROTEIN NETWORK EMBEDDING", "text": "For the second source of representation, we used two different methods: OhmNet and Node2Vec. Node2Vec learns a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes.

OhmNet encourages sharing of similar features among proteins with similar network neighborhoods and among proteins activated in similar tissues.

Given that the task of tissue-specific protein function prediction was introduced in OhmNet using 128-dimensional vectors for comparison with other methods, all of our representations are also constructed as 128-dimensional vectors." }, { "heading": "3.3 DUMMY VECTORS", "text": "To perform controlled experiments that ablate various sources of information, we constructed dummy vectors that we concatenated with either the amino acid sequence representation or the tissue-specific protein network embedding. These vectors are: Random64, a 64-dimensional random vector whose entries are sampled from a uniform distribution on the [-1, 1] interval; Random128, the corresponding 128-dimensional random vector; and 0-pad, which simply pads the remaining dimensions with 0s." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "The goal of each experiment is to solve a multi-label binary classification problem. Each label is binary and represents a specific function (more precisely, a cellular function from the Gene Ontology) in a specific tissue. On each tissue, we aim to match every active protein with zero, one or more tissue-specific functions. Using a multi-output linear model, we then fit, for each tissue, a separate linear classifier per function to predict each protein's functional activation.

We evaluate and compare the protein representations from the original OhmNet versus the augmented versions introduced in this paper. In this experiment, we run a 10-fold cross-validation with each method over 13 complex tissues (those mapped with more than one function in the Gene Ontology). Prior to that, random oversampling is run on the training data to make up for the class imbalance present in almost all tissues. In each fold, the protein embeddings are split between a training set (90%) and a validation set (10%) in a randomly stratified fashion. This split ratio reproduces the OhmNet setting; a sketch of this evaluation loop is given after this paragraph. 
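Below is a minimal sketch of this evaluation loop under the stated stratified folds, assuming each function has enough positive proteins for stratification; the oversampling and classifier choices are simple stand-ins for the setup described above, and all names are illustrative:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def oversample(X, y, rng):
    # Naive random oversampling of the minority class, applied to the
    # training split only, to make up for the class imbalance.
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
    idx = np.concatenate([np.arange(len(y)), extra])
    return X[idx], y[idx]

def tissue_auroc(X, Y, n_splits=10, seed=0):
    # X: (n_proteins, 128) joint embeddings; Y: (n_proteins, n_functions)
    # binary labels for one tissue. One binary linear classifier per function;
    # AUROC scores are averaged over functions and folds.
    rng, scores = np.random.default_rng(seed), []
    for j in range(Y.shape[1]):
        y = Y[:, j]
        cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
        for tr, te in cv.split(X, y):
            Xtr, ytr = oversample(X[tr], y[tr], rng)
            clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
            scores.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
    return float(np.mean(scores))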
The task at hand is to predict the unseen validation set after fitting the training set. The name of each representation indicates the data sources used to generate the 128-dimensional vectors. More details, including scores for specific tissues, are available in the appendix.

Across the 13 tissues we tried, some highlight results include:

• Node2Vec-SeqVec outperforms Node2Vec 13/13 times

Looking at how OhmNet-SeqVec and Node2Vec-SeqVec perform (a similar trend is observed for UniRep) shows that both UniRep and SeqVec add significant new information that is not captured by the tissue hierarchy or protein-protein interactions alone.

The average AUROC score for Random is a bit higher than what could be expected from such representations, owing to spikes in two tissues (placenta, epidermis). These spikes might result from the huge functional class imbalance within those two tissues, which, given the uniformity of the data, places them more often than not on the right side of the hyperplane. Another explanation might be the low amount of data (respectively 35 and 72 active proteins) available for those two tissues." }, { "heading": "5 CONCLUSION", "text": "In this work, we have looked at how conceptually different representations of proteins could interact and complement each other for the task of predicting function. We have shown that by merging information from two task-independent representations of proteins, we make consistently better tissue-specific function predictions in 13 complex tissues. Our ablation analysis demonstrates that the improved results are a consequence of integrating information from different levels of the biological hierarchy." }, { "heading": "6 DISCUSSION/FUTURE WORK", "text": "This work explores various ways of learning representations of proteins to understand protein function in its given biological context. One key takeaway is that combining representations from different levels of biological abstraction leads to improved representations, as judged by their ability to predict tissue-specific protein function. Recent work on developing representations from the amino acid sequence enables us to take advantage of the vast amount of unlabeled sequences and work directly with proteins whether or not they have been aligned with existing sequences or annotated using known families.

In the current experimental setting, we only focused on 13 tissues which had more than 2 functions and between 90 and 1400 active proteins. Further work can be done by looking at a more comprehensive set of tissues and functions. Additionally, we trained relatively simple classifiers in a one-vs-all manner; more powerful approaches using complex models should naturally be explored.

Recent work has also developed embeddings encoding 3D protein structure. These embeddings are currently missing from this work and could be integrated in subsequent work to help understand the relative importance of sequence, structure and the protein interaction network for predicting tissue-specific function.

We hope that our work spurs more research in representations that integrate information from multiple levels of the biological hierarchy and provide insight into the function of proteins and cells." } ]
2019
AUGMENTING PROTEIN NETWORK EMBEDDINGS
SP:979cb5eda94e85ac70c6652abb1580295f39c46b
[ "On the basis of existing topic modelling approaches, the authors apply a transfer learning approach to incorporate additional knowledge to topic models, using both word embeddings and topic models. The underlying idea is that topic models contain a global view that differs on a thematic level, while word embeddings contain a local, immediate contextual view. The combination of both local and global view transfer to enhance a topic model is the main contribution of this paper, especially when using multiple sources (therefore the title: multi-source multi-view transfer).", "The paper proposes a multi-source and multi-view transfer learning for neural topic modelling with the pre-trained topic and word embedding. The method is based on NEURAL AUTOREGRESSIVE TOPIC MODELs --- DocNADE (Larochelle&Lauly,2012). DocNADE learns topics using language modelling framework. DocNADEe (Gupta et al., 2019) extended DocNADE by incorporating word embeddings, the approach the authors described as a single source extension of the existing method." ]
Though word embeddings and topics are complementary representations, several past works have only used pretrained word embeddings in (neural) topic modeling to address the data sparsity problem in short texts or small collections of documents. However, no prior work has employed (pretrained latent) topics in a transfer learning paradigm. In this paper, we propose a framework to perform transfer learning in neural topic modeling using (1) pretrained (latent) topics obtained from a large source corpus, and (2) pretrained word and topic embeddings jointly (i.e., multiview) in order to improve topic quality and better deal with polysemy and data sparsity issues in a target corpus. In doing so, we first accumulate topics and word representations from one or many source corpora to build respective pools of pretrained topic (i.e., TopicPool) and word embeddings (i.e., WordPool). Then, we identify one or multiple relevant source domain(s) and take advantage of the corresponding topics and word features via the respective pools to guide meaningful learning in the sparse target domain. We quantify the quality of topic and document representations via generalization (perplexity), interpretability (topic coherence) and information retrieval (IR) using short-text, long-text, small and large document collections from the news and medical domains. We demonstrate state-of-the-art results on topic modeling with the proposed transfer learning approaches.
[ { "affiliations": [], "name": "WORD EMBEDDINGS" } ]
[ { "authors": [ "Yoshua Bengio", "Réjean Ducharme", "Pascal Vincent", "Christian Janvin" ], "title": "A neural probabilistic language model", "venue": "Journal of Machine Learning Research,", "year": 2003 }, { "authors": [ "David M. Blei", "Andrew Y. Ng", "Michael I. Jordan" ], "title": "Latent dirichlet allocation", "venue": "Journal of Machine Learning Research,", "year": 2003 }, { "authors": [ "Piotr Bojanowski", "Edouard Grave", "Armand Joulin", "Tomas Mikolov" ], "title": "Enriching word vectors with subword information", "venue": "TACL,", "year": 2017 }, { "authors": [ "Bin Cao", "Sinno Jialin Pan", "Yu Zhang", "Dit-Yan Yeung", "Qiang Yang" ], "title": "Adaptive transfer learning", "venue": "In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2010 }, { "authors": [ "Jonathan Chang", "Jordan L. Boyd-Graber", "Sean Gerrish", "Chong Wang", "David M. Blei" ], "title": "Reading tea leaves: How humans interpret topic models", "venue": "In Advances in Neural Information Processing Systems 22: 23rd Annual Conference on Neural Information Processing Systems", "year": 2009 }, { "authors": [ "Rajarshi Das", "Manzil Zaheer", "Chris Dyer" ], "title": "Gaussian lda for topic models with word embeddings. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 795–804", "venue": "Association for Computational Linguistics,", "year": 2015 }, { "authors": [ "Pankaj Gupta", "Yatin Chaudhary", "Florian Buettner", "Hinrich Schütze" ], "title": "Document informed neural autoregressive topic models with distributional prior", "venue": "In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Hugo Larochelle", "Stanislas Lauly" ], "title": "A neural autoregressive topic model", "venue": "Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Hugo Larochelle", "Iain Murray" ], "title": "The neural autoregressive distribution estimator", "venue": "Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, AISTATS, volume 15 of JMLR Proceedings,", "year": 2011 }, { "authors": [ "Stanislas Lauly", "Yin Zheng", "Alexandre Allauzen", "Hugo Larochelle" ], "title": "Document neural autoregressive distribution estimation", "venue": "Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Quoc V. Le", "Tomas Mikolov" ], "title": "Distributed representations of sentences and documents", "venue": "In Proceedings of the 31th International Conference on Machine Learning, ICML,", "year": 2014 }, { "authors": [ "Yishu Miao", "Lei Yu", "Phil Blunsom" ], "title": "Neural variational inference for text processing", "venue": "In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016,", "year": 2016 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Gregory S. 
Corrado", "Jeffrey Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "SPFGH Moen" ], "title": "Distributional semantics resources for biomedical text processing", "venue": "Proceedings of LBM, pp", "year": 2013 }, { "authors": [ "Dat Quoc Nguyen", "Richard Billingsley", "Lan Du", "Mark Johnson" ], "title": "Improving topic models with latent feature word", "venue": "representations. TACL,", "year": 2015 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Matthew Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 2227–2237", "venue": "Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Michael Röder", "Andreas Both", "Alexander Hinneburg" ], "title": "Exploring the space of topic coherence measures", "venue": "In Proceedings of the Eighth ACM International Conference on Web Search and Data", "year": 2015 }, { "authors": [ "Akash Srivastava", "Charles Sutton" ], "title": "Autoencoding variational inference for topic models", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Probabilistic topic models, such as LDA (Blei et al., 2003), Replicated Softmax (RSM) (Salakhutdinov & Hinton, 2009) and Document Neural Autoregressive Distribution Estimator (DocNADE) (Larochelle & Lauly, 2012) are often used to extract topics from text collections and learn latent document representations to perform natural language processing tasks, such as information retrieval (IR). Though they have been shown to be powerful in modeling large text corpora, the topic modeling (TM) still remains challenging especially in the sparse-data setting, especially for the cases where word co-occurrence data is insufficient e.g., on short text or a corpus of few documents. To this end, several works (Das et al., 2015; Nguyen et al., 2015; Gupta et al., 2019) have introduced external knowledge in traditional topic models via word embeddings Pennington et al. (2014). However, no prior work in topic modeling has employed topical embeddings (obtained from large document collection(s)), complementary to word embeddings.\nLocal vs Global Views: Though word embeddings (Pennington et al., 2014) and topics are complementary in how they represent the meaning, they are distinctive in how they learn from word occurrences observed in text corpora. Word embeddings have local context (view) in the sense that they are learned based on local collocation pattern in a text corpus, where the representation of each word either depends on a local context window (Mikolov et al., 2013) or is a function of its sentence(s) (Peters et al., 2018). Consequently, the word occurrences are modeled in a fine-granularity. On other hand, a topic (Blei et al., 2003; Gupta et al., 2019) has a global word context (view): TM infers topic distributions across documents in the corpus and assigns a topic to each word occurrence, where the assignment is equally dependent on all other words appearing in the same document. Therefore, it learns from word occurrences across documents and encodes a coarse-granularity description. Unlike topics, the word embeddings do not capture thematic structures (topical semantics) underlying in the document collection.\nConsider the following topics (Z1-Z4), where (Z1-Z3) are respectively obtained from different (high-resource) source (S1-S3) domains whereas Z4 from the (low-resource) target domain T in the data-sparsity setting:\nZ1 (S1): profit, growth, stocks, apple, fall, consumer, buy, billion, shares→ Trading Z2(S2): smartphone, ipad, apple, app, iphone, devices, phone, tablet→ Product Line Z3 (S3): microsoft, mac, linux, ibm, ios, apple, xp, windows→ Operating System/Company Z4 (T ): apple, talk, computers, shares, disease, driver, electronics, profit, ios→ ?\nUsually, the top words associated with topics learned on a large corpus are semantically coherent and represent meaningful semantics, e.g., Trading, Product Line, etc. However in sparse-data setting, topics (e.g., Z4) are incoherent (noisy) and therefore, it is difficult to infer meaningful semantics. Additionally, notice that the word apple is topically/thematically contextualized (topic-word association) by different semantics in S1-S3 and referring to a Company. Unlike the topics, word embeddings encode syntactic and semantic relatedness in fine-granularity and therefore, do not capture thematic structures. 
For instance, the top-5 nearest neighbors (NN) of apple (below) in the embedding space (Mikolov et al., 2013) suggest that it refers to a fruit; however, they do not express anything about its thematic context, e.g., Health.

apple → (top-5 NN) apples, pear, fruit, berry, pears, strawberry

Motivation (1) Knowledge transfer using pretrained word and topic embeddings: Essentially, the application of TM aims to discover hidden thematic structures (i.e., topics) in a text collection; however, this is challenging in data-sparsity settings, e.g., in a short-text and/or small collection. This leads to suboptimal text representations and incoherent topics (e.g., topic Z4).

To alleviate the data sparsity issues, recent works (Das et al., 2015; Nguyen et al., 2015; Gupta et al., 2019) have shown that TM can be improved by introducing external knowledge, where they leverage pretrained word embeddings (i.e., the local view) only. However, word embeddings ignore the thematically contextualized structures (i.e., document-level semantics) and cannot deal with ambiguity. Given that word and topic representations encode complementary information, no prior work has explored transfer learning in TM using pretrained topics obtained from a large corpus.

Motivation (2) Knowledge transfer from multiple sources of word and topic embeddings: Knowledge transfer via word embeddings is vulnerable to negative transfer (Cao et al., 2010) on the target domain when domains are shifted and not handled properly. For instance, consider a short-text document v: [apple gained its US market shares] in the target domain T. Here, the word apple refers to a company, and hence the word vector of apple (about a fruit) is an irrelevant source of prior knowledge for both v and the topic Z4. In contrast, one can better model v and amend the noisy Z4 for coherence, given meaningful word and topic embeddings.

Often, there are several topic-word associations in different domains, e.g., in topics Z1-Z3. Given a noisy topic Z4 in T and meaningful topics Z1-Z3 of S1-S3, we identify multiple relevant (source) domains and advantageously transfer their word and topic embeddings in order to facilitate meaningful and positive transfer learning in the sparse corpus T.

Contribution (1) To our knowledge, this is the first work within an unsupervised topic modeling framework that introduces (external) knowledge transfer using (a) Global-view Transfer: pretrained topic embeddings instead of word embeddings exclusively, and (b) Multi-view Transfer: pretrained word and topic embeddings jointly obtained from a large source corpus in order to deal with polysemy and alleviate data sparsity issues in a small target corpus.

Contribution (2) Multi-source Transfer: Moreover, we first learn word and topic representations on multiple source domains to build WordPool and TopicPool, respectively, and then perform multi-view and multi-source transfer learning within neural topic modeling by jointly using the complementary representations. In doing so, we guide the (unsupervised) generative process of learning the hidden topics of the target domain by embeddings in WordPool and TopicPool such that the hidden topics become more meaningful and representative in explaining the target corpus.

We evaluate the effectiveness of our transfer learning approaches in neural topic modeling using 7 (5 low-resource and 2 high-resource) target and 5 (high-resource) source corpora from news and medical domains, consisting of short-text, long-text, small and large document collections. 
Particularly, we quantify the quality of text representations via generalization (perplexity), interpretability (topic coherence) and text retrieval. The code is provided with the supplementary material." }, { "heading": "2 KNOWLEDGE TRANSFER IN NEURAL TOPIC MODELING", "text": "Given a sparse target domain T and a set of |S| source domains S, we first prepare two knowledge bases (KBs) of representations (or embeddings) from each of the sources: (1) WordPool: pretrained word embedding matrices $\{\mathbf{E}^1, ..., \mathbf{E}^{|S|}\}$, where $\mathbf{E}^k \in \mathbb{R}^{E \times K}$, and (2) TopicPool: pretrained latent topic embeddings $\{\mathbf{Z}^1, ..., \mathbf{Z}^{|S|}\}$, where $\mathbf{Z}^k \in \mathbb{R}^{H \times K}$ encodes a distribution over a vocabulary of K words. E and H are the word embedding and latent topic dimensions, respectively. While topic modeling on T, we introduce two types of knowledge transfer from one or many sources: Local (LVT) and Global (GVT) View Transfer, using the two KBs of pretrained word (i.e., WordPool) and topic (i.e., TopicPool) embeddings, respectively. Specifically, we employ a neural autoregressive topic model (i.e., DocNADE (Larochelle & Lauly, 2012)) to build the WordPool and TopicPool.

Notice that a superscript indicates a source. See Table 1 for the notations used in this work." }, { "heading": "2.1 NEURAL AUTOREGRESSIVE TOPIC MODELS", "text": "DocNADE (Larochelle & Lauly, 2012) is an unsupervised neural-network based topic model that is inspired by the benefits of the NADE (Larochelle & Murray, 2011) and RSM (Salakhutdinov & Hinton, 2009) architectures. RSM has difficulties due to intractability, leading to approximate gradients of the negative log-likelihood, while NADE does not require such approximations. On the other hand, RSM is a generative model of word counts, while NADE is limited to binary data. Specifically, DocNADE factorizes the joint probability distribution of words in a document as a product of conditional distributions and efficiently models each conditional via a feed-forward neural network.

Algorithm 1 Computation of log p(v) and Loss L(v)
Input: A target training document v, |S| source domains
Input: WordPool: KB of pretrained word embedding matrices $\{\mathbf{E}^1, ..., \mathbf{E}^{|S|}\}$
Input: TopicPool: KB of pretrained latent topics $\{\mathbf{Z}^1, ..., \mathbf{Z}^{|S|}\}$
Parameters: $\Theta = \{\mathbf{b}, \mathbf{c}, \mathbf{W}, \mathbf{U}, \mathbf{A}^1, ..., \mathbf{A}^{|S|}\}$
Hyper-parameters: $\theta = \{\lambda^1, ..., \lambda^{|S|}, \gamma^1, ..., \gamma^{|S|}, H\}$
Initialize: $\mathbf{a} \leftarrow \mathbf{c}$ and $p(\mathbf{v}) \leftarrow 1$
for i from 1 to D do
    $\mathbf{h}_i(\mathbf{v}_{<i}) \leftarrow g(\mathbf{a})$, where $g \in \{\text{sigmoid}, \tanh\}$
    $p(v_i = w \mid \mathbf{v}_{<i}) \leftarrow \frac{\exp(b_w + \mathbf{U}_{w,:}\mathbf{h}_i(\mathbf{v}_{<i}))}{\sum_{w'} \exp(b_{w'} + \mathbf{U}_{w',:}\mathbf{h}_i(\mathbf{v}_{<i}))}$
    $p(\mathbf{v}) \leftarrow p(\mathbf{v})\, p(v_i \mid \mathbf{v}_{<i})$
    compute pre-activation at step i: $\mathbf{a} \leftarrow \mathbf{a} + \mathbf{W}_{:,v_i}$
    if LVT then
        get word embedding for $v_i$ from source domain(s): $\mathbf{a} \leftarrow \mathbf{a} + \sum_{k=1}^{|S|} \lambda^k \mathbf{E}^k_{:,v_i}$
$\mathcal{L}(\mathbf{v}) \leftarrow -\log p(\mathbf{v})$
if GVT then
    $\mathcal{L}(\mathbf{v}) \leftarrow \mathcal{L}(\mathbf{v}) + \sum_{k=1}^{|S|} \gamma^k \sum_{j=1}^{H} \|\mathbf{A}^k_{j,:}\mathbf{W} - \mathbf{Z}^k_{j,:}\|_2^2$

DocNADE Formulation: For a document $\mathbf{v} = (v_1, ..., v_D)$ of size D, each word index $v_i$ takes a value in $\{1, ..., K\}$ for vocabulary size K. DocNADE learns topics in a language modeling fashion (Bengio et al., 2003) and decomposes the joint distribution $p(\mathbf{v}) = \prod_{i=1}^{D} p(v_i \mid \mathbf{v}_{<i})$ such that each autoregressive conditional $p(v_i \mid \mathbf{v}_{<i})$ is modeled by a feed-forward neural network using the preceding words $\mathbf{v}_{<i}$ in the sequence:

$\mathbf{h}_i(\mathbf{v}_{<i}) = g\big(\mathbf{c} + \sum_{q<i} \mathbf{W}_{:,v_q}\big) \quad \text{and} \quad p(v_i = w \mid \mathbf{v}_{<i}) = \frac{\exp(b_w + \mathbf{U}_{w,:}\mathbf{h}_i(\mathbf{v}_{<i}))}{\sum_{w'} \exp(b_{w'} + \mathbf{U}_{w',:}\mathbf{h}_i(\mathbf{v}_{<i}))}$

for $i \in \{1, ..., D\}$, where $\mathbf{v}_{<i}$ is the subvector consisting of all $v_q$ such that $q < i$, i.e., $\mathbf{v}_{<i} \in \{v_1, ..., v_{i-1}\}$, $g(\cdot)$ is a non-linear activation function, $\mathbf{W} \in \mathbb{R}^{H \times K}$ and $\mathbf{U} \in \mathbb{R}^{K \times H}$ are weight matrices, $\mathbf{c} \in \mathbb{R}^H$ and $\mathbf{b} \in \mathbb{R}^K$ are bias parameter vectors, and H is the number of hidden units (topics). 
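A minimal NumPy sketch of Algorithm 1 for a single document follows, assuming the sigmoid activation for g; the function signature and the toy initialization are illustrative, not the released implementation:

import numpy as np

def docnade_loss(v, W, U, b, c, E_list=(), Z_list=(), A_list=(),
                 lambdas=(), gammas=()):
    # v: word indices of one document; W: (H, K); U: (K, H); b: (K,); c: (H,).
    # E_list / (A_list, Z_list) hold pretrained word / topic matrices of the
    # source domains, enabling the LVT / GVT branches of Algorithm 1.
    a, log_p = c.astype(float).copy(), 0.0
    for vi in v:
        h = 1.0 / (1.0 + np.exp(-a))                 # h_i(v_<i) with g = sigmoid
        logits = b + U @ h
        logits -= logits.max()                       # numerical stability
        log_p += logits[vi] - np.log(np.exp(logits).sum())
        a += W[:, vi]                                # pre-activation update
        for lam, E in zip(lambdas, E_list):          # LVT: add source word vectors
            a += lam * E[:, vi]
    loss = -log_p                                    # negative log-likelihood
    for gam, A, Z in zip(gammas, A_list, Z_list):    # GVT: topic-imitation penalty
        loss += gam * np.sum((A @ W - Z) ** 2)
    return loss

H, K = 4, 10
rng = np.random.default_rng(0)
print(docnade_loss([3, 1, 7], 0.1 * rng.normal(size=(H, K)),
                   0.1 * rng.normal(size=(K, H)), np.zeros(K), np.zeros(H)))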
Figure 1 (left) (without WordPool) provides an illustration of the ith autoregressive step of the DocNADE architecture, where the parameter W is shared across the feed-forward networks and $\mathbf{h}_i$ encodes the topic-proportion embedding. Importantly, the topic-word matrix W has the property that the column vector $\mathbf{W}_{:,v_i}$ corresponds to the embedding of the word $v_i$, whereas the row vector $\mathbf{W}_{j,:}$ encodes the jth topic. We leverage this property to introduce external knowledge via word and topic embeddings.

Additionally, DocNADE has been shown to outperform traditional models such as LDA (Blei et al., 2003) and RSM (Salakhutdinov & Hinton, 2009) in terms of both the log-probability on unseen documents and retrieval accuracy. Recently, Gupta et al. (2019) improved topic modeling on short texts by introducing word embeddings (Pennington et al., 2014) into the DocNADE architecture. Thus, we adopt DocNADE to perform transfer learning within the neural topic modeling framework.

Algorithm 1 (for DocNADE, set LVT and GVT to False) demonstrates the computation of log p(v) and the negative log-likelihood L(v) that is minimized using gradient descent. Moreover, computing $\mathbf{h}_i$ is efficient (linear complexity) due to the NADE architecture, which leverages the pre-activation $\mathbf{a}_{i-1}$ of the (i-1)th step in computing the pre-activation $\mathbf{a}_i$. See Larochelle & Lauly (2012) for further details." }, { "heading": "2.2 MULTI-VIEW (MVT) AND MULTI-SOURCE TRANSFERS (MST) IN TOPIC MODELING", "text": "Here, we describe a transfer learning framework in topic modeling that jointly exploits the complementary knowledge in WordPool and TopicPool, the KBs of pretrained word and (latent) topic embeddings, respectively, obtained from large document collections (DCs) from several sources. In doing so, we first apply DocNADE to generate a topic-word matrix for each of the DCs, where its column vectors and row vectors generate $\mathbf{E}^k$ and $\mathbf{Z}^k$, respectively, for the kth source.

Table 3: Domain overlap in source-target corpora. I: Identical, R: Related and D: Distant domains.
      T1  T2  T3  T4  T5  T6
S1    I   I   R   D   D   D
S2    D   D   D   I   D   D
S3    R   R   I   D   D   D
S4    R   R   R   D   D   D
S5    D   D   D   D   -   -

LVT+MST Formulation for Multi-source Word Embedding Transfer: As illustrated in Figure 1 (left) and Algorithm 1 with LVT=True, we perform transfer learning on a target T using the WordPool of pretrained word embeddings $\{\mathbf{E}^1, ..., \mathbf{E}^{|S|}\}$ from several sources S (i.e., multi-source):

$\mathbf{h}_i(\mathbf{v}_{<i}) = g\big(\mathbf{c} + \sum_{q<i} \mathbf{W}_{:,v_q} + \sum_{q<i} \sum_{k=1}^{|S|} \lambda^k \mathbf{E}^k_{:,v_q}\big)$

Here, k refers to the kth source and $\lambda^k$ is a weight for $\mathbf{E}^k$ that controls the amount of knowledge transferred into T, based on the domain overlap between target and source(s). Recently, DocNADEe (Gupta et al., 2019) incorporated word embeddings (Pennington et al., 2014) in extending DocNADE; however, it is based on a single source.

GVT+MST Formulation for Multi-source Topic Embedding Transfer: Next, we perform knowledge transfer exclusively using the TopicPool of pretrained topic embeddings (e.g., $\mathbf{Z}^k$) from one or several sources S. In doing so, we add a regularization term to the loss function L(v) and require DocNADE to minimize the overall loss in a way that the (latent) topic features in W simultaneously inherit relevant topical features from each of the source domains S and generate meaningful representations for the target T. 
The overall loss L(v) due to GVT+MST in DocNADE is given by:

$\mathcal{L}(\mathbf{v}) = -\log p(\mathbf{v}) + \sum_{k=1}^{|S|} \gamma^k \sum_{j=1}^{H} \|\mathbf{A}^k_{j,:}\mathbf{W} - \mathbf{Z}^k_{j,:}\|_2^2$

Here, $\mathbf{A}^k \in \mathbb{R}^{H \times H}$ aligns latent topics in the target T and the kth source, and $\gamma^k$ governs the degree of imitation of the topic features $\mathbf{Z}^k$ by W in T. Consequently, the generative process of learning meaningful topics in W of T is guided by relevant features in $\{\mathbf{Z}^k\}_{k=1}^{|S|}$ to address data sparsity. Algorithm 1 describes the computation of the loss when GVT = True and LVT = False.

Moreover, Figure 1 (right) illustrates the need for topic alignment between target and source(s). Here, j indicates the topic (i.e., row) index in a topic matrix, e.g., $\mathbf{Z}^k$. Observe that the first topic (gray curve), i.e., $\mathbf{Z}^1_{j=1} \in \mathbf{Z}^1$ of the first source, aligns with the first row vector (i.e., topic) of W (of the target). However, the other two topics $\mathbf{Z}^1_{j=2}, \mathbf{Z}^1_{j=3} \in \mathbf{Z}^1$ need alignment with the target topics.

MVT+MST Formulation for Multi-source Word and Topic Embeddings Transfer: When LVT and GVT are both True (Algorithm 1) for many sources, the two complementary representations are jointly used in transfer learning via WordPool and TopicPool; hence the name multi-view and multi-source transfer." }, { "heading": "3 EVALUATION AND ANALYSIS", "text": "Datasets: Table 2 describes the datasets used in the high-resource source and low- and high-resource target domains for our experiments. The target domain T consists of four short-text corpora (20NSshort, TMNtitle, R21578title and Ohsumedtitle), one small corpus (20NSsmall) and two large corpora (TMN and Ohsumed). However, in the source S, we use five large corpora (20NS, R21578, TMN, AGnews and PubMed) in different label spaces (i.e., domains). Here, the corpora T5, T6 and S5 belong to the medical domain and the others to news. Additionally, Table 3 suggests the domain overlap (in terms of label match) in the target and source corpora, where we define three types of overlap: I (identical) if all labels match, R (related) if some labels match, and D (distant) if very few or no labels match. Note that our modeling approaches are completely unsupervised and do not use the data labels. See the data labels in the appendices.

Reproducibility: For the evaluations in the following sections, we follow an experimental setup similar to DocNADE (Larochelle & Lauly, 2012) and DocNADEe (Gupta et al., 2019), where the number of topics (H) is set to 200. While DocNADEe requires the dimension (i.e., E) of the word embeddings to be the same as the latent topic dimension (i.e., H), we first apply a projection on the concatenation of the pretrained word embeddings obtained from several sources and then introduce the prior knowledge in each autoregressive step following DocNADEe. We apply it in configurations where Glove and/or FastText (E=300) (Bojanowski et al., 2017) are employed. See the appendices for the experimental setup, hyperparameters¹ and optimal values of $\lambda^k \in [0.1, 0.5, 1.0]$ and $\gamma^k \in [0.1, 0.01, 0.001]$ (determined using the development set) in different source-target configurations. (code provided)

Baselines: As summarized in Table 4, we consider several baselines including (1) LDA-based and neural network-based topic models that use the target data, (2) topic models using pretrained word embeddings (i.e., LVT) from Pennington et al. 
(2014) (Glove), (3) unsupervised document representation, where we employ doc2vec (Le & Mikolov, 2014) and EmbSum (representing a document by summing the embedding vectors of its words using Glove) in order to quantify the quality of document representations, (4) zero-shot topic modeling, where we use all source corpora and no target corpus, and (5) data-augmentation, where we use all source corpora along with a target corpus for TM on T. Using DocNADE, we first prepare the two KBs, WordPool and TopicPool, from each of the source corpora and then use them in knowledge transfer to T. Tables 5 and 6 show the comparison of our proposed transfer learning approaches (i.e., LVT using WordPool, GVT using TopicPool, MVT and MST) with the baseline TMs that (1) do not, and (2) do employ pretrained word embeddings (e.g., DocNADE and DocNADEe, respectively)." }, { "heading": "3.1 GENERALIZATION: PERPLEXITY (PPL)", "text": "To evaluate the generative performance of a TM, we estimate the log-probabilities of the test documents and compute the average held-out perplexity per word as

$\mathrm{PPL} = \exp\big(-\frac{1}{N} \sum_{t=1}^{N} \frac{1}{|\mathbf{v}^t|} \log p(\mathbf{v}^t)\big)$,

where N and $|\mathbf{v}^t|$ are the number of documents and the number of words in document $\mathbf{v}^t$, respectively.

¹Selected with grid search; learning λ and γ with backpropagation gave suboptimal results (see appendices).

Tables 5 and 6 quantitatively show PPL scores on the five target corpora (four short-text and one long-text) for the baselines and the proposed transfer learning approaches (i.e., GVT, MVT and MST) using one or four sources. In Table 5, using TMN (as a single source) for LVT, GVT and MVT on TMNtitle, we see improved (reduced) PPL scores: (655 vs 706), (689 vs 706) and (663 vs 706), respectively, in comparison to DocNADE. We also observe gains due to the MST+LVT, MST+GVT and MST+MVT configurations on TMNtitle. Similarly, in MST+LVT for R21578title we observe a gain of 5.2% (182 vs 192), suggesting that transfer learning using pretrained word and topic embeddings (jointly) from one or many sources helps due to positive knowledge transfer, and it also verifies domain relatedness (e.g., in TMN-TMNtitle and AGnews-TMN). Similarly, Table 6 shows gains in PPL (e.g., on TMNtitle, R21578title, etc.) compared to DocNADEe.

Target 20NSshort, source 20NS | DNE: shipping, sale, prices, expensive, price | -GVT: sale, price, monitor, site, setup | +GVT: shipping, sale, price, expensive, subscribe
Target 20NSshort, source AGnews | DNE: microsoft, software, ibm, linux, computer | -GVT: apple, modem, side, baud, perform | +GVT: microsoft, software, desktop, computer, apple
Target TMNtitle, source AGnews | DNE: miners, earthquake, explosion, stormed, quake
Target TMNtitle, source TMN | DNE: tsunami, quake, japan, earthquake, radiation | -GVT: strike, jackson, kill, earthquake, injures | +GVT: earthquake, radiation, explosion, wildfire
Table 8: Source S and target T topics (top-5 words) before (-) and after (+) topic transfer(s) (GVT) from one or more sources. 
DNE: DocNADE.

Table 9: Five nearest neighbors of the word chip in the source and target semantic spaces before (-) and after (+) knowledge transfer (MST+GVT).
Source 20NS: key, encrypted, encryption, clipper, keys
Source R21578: chips, semiconductor, miti, makers, semiconductors
Source AGnews: chips, chipmaker, processors, semiconductor, intel
Target 20NSshort (-GVT): virus, intel, gosh, crash, chips
Target 20NSshort (+GVT): chips, technology, intel, encryption, clipper

In Table 7, we show PPL scores on two medical target corpora, Ohsumedtitle and Ohsumed, using two sources, AGnews (news corpus) and PubMed (medical abstracts), to perform cross-domain and in-domain knowledge transfers. We see that using PubMed for LVT on both target corpora improves generalization. Overall, we report a gain of 17.3% (1268 vs 1534) on Ohsumedtitle and 8.55% (1497 vs 1637) on Ohsumed, compared to DocNADEe. Additionally, MST+GVT and MST+MVT boost generalization performance compared to DocNADE(e)." }, { "heading": "3.2 INTERPRETABILITY: TOPIC COHERENCE (COH)", "text": "While PPL is used for model selection, adjusting parameters (e.g., H) and quantitative comparisons, Chang et al. (2009) showed that in some cases humans preferred TMs (based on the semantic quality of topics) with higher (worse) PPLs. Thus, beyond perplexity, we compute topic coherence to estimate the meaningfulness of the words in each captured topic. In doing so, we choose the coherence measure proposed by Röder et al. (2015), which identifies context features for each topic word using a sliding window over the reference corpus. We follow Gupta et al. (2019) and compute COH with the top 10 words in each topic. Essentially, higher scores imply more coherent topics.

Tables 5 and 6 (under the COH column) demonstrate that our proposed transfer learning approaches (GVT, MVT and MST) show noticeable gains in COH and thus improve topic quality. For instance in Table 5, when AGnews is used as a single source for the 20NSsmall dataset, we observe a gain in COH due to GVT (.563 vs .462) and MVT (.566 vs .462). Additionally, noticeable gains are reported due to MST+LVT (.542 vs .462), MST+GVT (.585 vs .462) and MST+MVT (.637 vs .462), compared to DocNADE. Importantly, we find a trend MVT>GVT>LVT in COH scores for both the single-source and multi-source transfers. Similarly, Table 6 shows noticeable gains (e.g., 39.3%, 9.95%, 7.08%, etc.) in COH due to MST and MVT with Glove and FastText word embeddings. Moreover, Table 7 shows gains in COH due to GVT on Ohsumedtitle and Ohsumed, using pretrained knowledge from PubMed. Overall, GVT, MVT and MST boost COH for all five target corpora compared to the baseline TMs (i.e., DocNADE and DocNADEe). This suggests that the two complementary (pretrained word and topic) representations and multi-source transfer learning are needed to guide meaningful topic learning in T. The results on both low- and high-resource targets across domains suggest that the proposed modeling scales." }, { "heading": "3.3 APPLICABILITY: INFORMATION RETRIEVAL (IR)", "text": "For a greater impact of TMs, we further evaluate the quality of the document representations by performing a document retrieval task on the target datasets, using their label information only to compute precision. We follow an experimental setup similar to Lauly et al. (2017), where all test documents are treated as queries to retrieve a fraction of the closest documents in the original training set using cosine similarity between their document vectors; a sketch of this retrieval evaluation is given below. 
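A minimal sketch of this retrieval evaluation, assuming the document vectors (e.g., the last hidden state h_D of DocNADE) and their labels are given as NumPy arrays; the function and variable names are illustrative:

import numpy as np

def retrieval_precision(train_vecs, train_labels, query_vecs, query_labels,
                        fraction=0.02):
    # Each test (query) document retrieves the closest fraction of training
    # documents by cosine similarity; precision is the share of retrieved
    # documents carrying the query's label, averaged over all queries.
    tn = train_vecs / np.linalg.norm(train_vecs, axis=1, keepdims=True)
    qn = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    k = max(1, int(fraction * len(train_vecs)))
    top = np.argsort(-(qn @ tn.T), axis=1)[:, :k]    # top-k indices per query
    hits = train_labels[top] == query_labels[:, None]
    return float(hits.mean())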
To compute the retrieval precision at each fraction (e.g., 0.02), we average, over queries, the fraction of retrieved training documents having the same label as the query.

Tables 5 and 6 depict precision scores at retrieval fraction 0.02 (similar to Gupta et al. (2019)), where the configuration MST+MVT outperforms both DocNADE and DocNADEe in retrieval performance on the four (short-text) target datasets. A gain in IR performance is more noticeable for highly overlapping domains, e.g., TMN-TMNtitle (.555 vs .521 in Table 5 and .576 vs .540 in Table 6), than for related ones, e.g., AGnews-TMNtitle (.534 vs .521 in Table 5 and .565 vs .540 in Table 6). We observe large gains in precision at retrieval fraction 0.02: (a) Table 5: 20.7% (.326 vs .270) on 20NSsmall, 9.21% (.569 vs .521) on TMNtitle and 8.28% (.314 vs .290) on 20NSshort, (b) Table 6: 8.84% (.320 vs .294) on 20NSshort and 9.21% (.578 vs .540) on TMNtitle, and (c) Table 7: 4.91% (.192 vs .183) on Ohsumed and 4.0% (.182 vs .175) on Ohsumedtitle.

Additionally, Figures 2a, 2b, 2c and 2d illustrate precision on 20NSshort, 20NSsmall, TMNtitle and R21578title, respectively, where our approaches (MST+GVT and MST+MVT) consistently outperform the baselines at all fractions. Moreover, we split the training data of TMNtitle into several sets (20%, 40%, 60%, 80% of the training set) and retrain DocNADE, DocNADEe and DocNADE+MST+MVT, demonstrating the impact of transfer learning in sparse-data settings using WordPool and TopicPool jointly on the IR task. Figure 2e plots precision at retrieval (recall) fraction 0.02 and demonstrates that the proposed modeling consistently outperforms DocNADE(e)." }, { "heading": "3.4 ZERO-SHOT AND DATA-AUGMENTATION EVALUATIONS", "text": "Figures 2a, 2b, 2c and 2d show precision in the zero-shot (source-only training) and data-augmentation (source+target training) configurations. Observe that the latter helps in learning meaningful representations and performs better than zero-shot; however, it is outperformed by MST+MVT, suggesting that a naive (data-space) augmentation does not add sufficient prior or relevant information to the sparse target. Thus, we find that it is beneficial to augment training data in feature space (e.g., LVT, GVT and MVT), especially for unsupervised TMs, using WordPool and TopicPool. Beyond IR, we further investigate topic coherence (COH) for the zero-shot and data-augmentation baselines, where the COH scores (Figure 2f) suggest that MST+MVT outperforms DocNADEe, zero-shot and data-augmentation." }, { "heading": "3.5 QUALITATIVE ANALYSIS: TOPICS AND NEAREST NEIGHBORS (NN)", "text": "For topic-level inspection, we first extract topics using the rows of W for the source and target corpora. Table 8 shows the topics (top-5 words) from the source and target domains. Observe that the target topics become more coherent after transfer learning (i.e., +GVT) from one or more sources. The blue color signifies that a target topic has imitated certain topic words from the source. We also show topics from the source domain(s) that align with the topics from the target.

For word-level inspection, we extract word representations using the columns of W. Table 9 shows the nearest neighbors (NNs) of the word chip in the 20NSshort (target) corpus, before and after GVT using three knowledge sources. Observe that the NNs in the target become more meaningful; a sketch of this NN extraction follows. 
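A minimal sketch of this word-level inspection, assuming the trained topic-word matrix W and its vocabulary list are available; the names and the toy example are illustrative:

import numpy as np

def nearest_neighbors(W, vocab, word, topn=5):
    # Columns of the topic-word matrix W (H x K) act as word vectors; rank
    # all other words by cosine similarity to the query word's column.
    idx = vocab.index(word)
    V = W / np.linalg.norm(W, axis=0, keepdims=True)
    sims = V.T @ V[:, idx]
    order = [j for j in np.argsort(-sims) if j != idx]
    return [vocab[j] for j in order[:topn]]

W = np.abs(np.random.default_rng(0).normal(size=(3, 5)))  # toy H=3, K=5 matrix
print(nearest_neighbors(W, ["chip", "intel", "virus", "apple", "keys"], "chip"))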
}, { "heading": "4 CONCLUSION", "text": "Within neural topic modeling, we have introduced transfer learning approaches using complementary representations: pretrained word (local semantics) and topic (global semantics) embeddings exclusively or jointly from one or many sources (i.e., multi-view and multi-source). We have shown that the proposed approaches better deal with data-sparsity issues, especially in a short-text and/or small document collection. We have demonstrated learning meaningful topics and quality document representations on 7 (low- and high-resource) target corpora from news and medical domains." }, { "heading": "A DATA DESCRIPTION", "text": "In order to evaluate knowledge transfer within unsupervised neural topic modeling, we use the following seven datasets in the target domain T following the similar experimental setup as in DocNADEe:\n1. 20NSshort: We take documents from 20NewsGroups data, with document size less (in terms of number of words) than 20.\n2. 20NSsmall: We sample 20 document (each having more than 200 words) for training from each class of the 20NS dataset. For validation and test, 10 document for each class. Therefore, it is a corpus of few (long) documents.\n3. TMNtitle: Titles of the Tag My News (TMN) news dataset. 4. R21578title: Reuters corpus, a collection of new stories from nltk.corpus. We\ntake titles of the documents. 5. Ohsumedtitle: Titles of Ohsumed abstracts. Source: disi.unitn.it/\nmoschitti/corpora.htm. 6. Ohsumed: Ohsumed dataset, collection of medical abstracts. Source: disi.unitn.\nit/moschitti/corpora.htm. 7. TMN: The Tag My News (TMN) news dataset.\nTo prepare knowledge base of word embedings (local semantics) and latent topics (global semantics) features, we use the following six datasets in the source S:\n1. 20NS: 20NewsGroups corpus, a collection of news stories from nltk.corpus. 2. TMN: The Tag My News (TMN) news dataset. 3. R21578: Reuters corpus, a collection of new stories from nltk.corpus. 4. AGnews: AGnews data sellection. 5. PubMed: Medical abstracts of randomized controlled trials. Source: https://\ngithub.com/Franck-Dernoncourt/pubmed-rct." }, { "heading": "B GETTING WORD AND LATENT TOPIC REPRESENTATIONS FROM SOURCE(S)", "text": "Since in DocNADE, the column of W:,vi gives a word vector of the word vi, therefore the dimension of word embeddings in each of the Ek is same (i.e., H = 200). Thus, we prepare the knowledge base of word representations Ek from kth source using DocNADE, where each word vector is of H = 200 dimension.\nSince the row vector of Wj,: in DocNADE encodes jth topic feature, therefore each latent topic (i.e., row) in feature matrix W is a vector of K dimension, corresponding the definition of topics that it is a distribution over vocabulary. H is the number of latent topics and K is the vocabulary size, where K varies across corpora. Thus, we train DocNADE to learn a feature matrix specific to each of the source corpora, e.g. Wk ∈ RH×K of kth source.\nFor a target corpus of vocabulary size K ′ , the DocNADE learns a feature matrix WT ∈ RH×K′ . Similarly, Wk ∈ RH×K for kth source of vocabulary sizeK. Since in the sparse-data setting for the\ntarget, K ′ << K due to additional word in the source. In order to perform GVT, we need the same topic feature dimensions in the target and source, i.e., K ′ of the target. Therefore, we remove those column vectors from Wk ∈ RH×K of the kth source for which there is no corresponding word in the vocabulary of the target domain. 
As a result, we obtain $\mathbf{Z}^k$ as a latent topic feature matrix to be used in knowledge transfer to the target domain. Following the same steps, we prepare a KB of Z matrices such that each latent topic feature matrix from a source domain gets the same topic feature dimension as the target." }, { "heading": "C EXPERIMENTAL SETUP", "text": "For DocNADE and DocNADEe in the different knowledge transfer configurations, we follow the same experimental setup as in DocNADE and DocNADEe. We rerun DocNADE and DocNADEe using the code released for DocNADEe.

C.1 EXPERIMENTAL SETUP FOR GENERALIZATION

We set the maximum number of training passes to 100, the number of topics to 200 and the learning rate to 0.001, with sigmoid hidden activation. Since the baselines DocNADE and DocNADEe reported better PPL scores for H = 200 topics than for 50, we use H = 200 in our experiments.

See Table 11 for the hyperparameters used in the generalization task, i.e., computing PPL.

See section C.4 to reproduce the scores of Table 1.

C.2 EXPERIMENTAL SETUP FOR IR TASK

We set the maximum number of training passes to 100, the number of topics to 200 and the learning rate to 0.001, with tanh hidden activation. Since the baselines DocNADE and DocNADEe reported better precision scores in the retrieval task for H = 200 topics than for 50, we use H = 200 in our experiments. We follow an experimental setup similar to DocNADEe. For model selection, we used the validation set as the query set and used the average precision at 0.02 retrieved documents as the performance measure. Note that the labels are not used during training. The class labels are only used to check whether the retrieved documents have the same class label as the query document. To perform document retrieval, we use the same (Table 2) train/development/test split of documents for all datasets during learning.

Given DocNADE, the representation of a document of size D can be computed by taking the last hidden vector $\mathbf{h}_D$ at the autoregressive step D. Since RSM and DocNADE strictly outperformed LDA, we only compare DocNADE and its recent extension DocNADEe. We use the same number of topic dimensions (H = 200) across all source domains and the target in training with DocNADE.

See Table 12 for the hyperparameters of the document retrieval task, where $\lambda^k$ and $\gamma^k$ are weights for the kth source. We use the same grid search for all source domains. We set $\gamma^k$ smaller than $\lambda^k$ to control the degree of imitation of the source domain(s) by the target domain. We use the development set of the target corpus to find the optimal setting in the different configurations of knowledge transfer from several sources.

See section C.4 to reproduce the scores of Table 5.

C.3 {λ, γ} AS PARAMETERS VS HYPERPARAMETERS

Here, we treat λ and γ as parameters of the model instead of hyperparameters and learn them with backpropagation (see the sketch below). We initialize each $\lambda^k = 0.5$ and $\gamma^k = 0.01$ for each of the sources. We perform experiments on the short-text datasets in the MST+LVT, MST+GVT and MST+MVT configurations. We evaluate topic modeling using PPL, topic coherence and retrieval accuracy.

Table 13 reports the scores when λ and γ are (1) learned with backpropagation, and (2) treated as hyperparameters. The experimental results suggest that the second configuration performs better than the former. 
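A minimal PyTorch-style sketch contrasting the two configurations follows; the module and the variable names are illustrative and not the released code:

import torch

class TransferWeights(torch.nn.Module):
    # Configuration (1): lambda^k and gamma^k registered as model parameters,
    # initialized to 0.5 and 0.01 per source and updated by backpropagation.
    def __init__(self, n_sources):
        super().__init__()
        self.lam = torch.nn.Parameter(torch.full((n_sources,), 0.5))
        self.gam = torch.nn.Parameter(torch.full((n_sources,), 0.01))

# Configuration (2), used in this work: fixed hyperparameters chosen by grid
# search on the development set, e.g., lambda in {1.0, 0.5, 0.1} and gamma in
# {0.1, 0.01, 0.001} per source.
weights = TransferWeights(n_sources=4)
print([tuple(p.shape) for p in weights.parameters()])  # [(4,), (4,)]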
Therefore, in this work we report scores treating {λ, γ} as hyperparameters.

C.4 REPRODUCIBILITY: OPTIMAL CONFIGURATIONS OF λ AND γ

As mentioned in Tables 11 and 12, the hyper-parameter $\lambda^k$ takes values in [1.0, 0.5, 0.1] for each word embedding matrix $\mathbf{E}^k$, and $\gamma^k$ takes values in [0.1, 0.01, 0.001] for each latent topic feature matrix $\mathbf{Z}^k$, for the kth source domain. To determine an optimal configuration, we perform a grid search over these values and use the scores on the development set to select the best setting. We use a common model for the PPL and COH scores due to the generalization criterion.

To reproduce the scores (best in Table 5), we list the best settings of ($\lambda^k$, $\gamma^k$) in the MST+MVT configuration for each target-source combination:

1. Generalization (PPL and COH) in MST+MVT when the target is 20NSshort: (λ20NS = 1.0, γ20NS = 0.001, λTMN = 0.1, γTMN = 0.001, λR21578 = 0.5, γR21578 = 0.001, λAGnews = 0.1, γAGnews = 0.001)
2. Generalization (PPL and COH) in MST+MVT when the target is TMNtitle: (λ20NS = 0.1, γ20NS = 0.001, λTMN = 1.0, γTMN = 0.001, λR21578 = 0.5, γR21578 = 0.001, λAGnews = 1.0, γAGnews = 0.001)
3. Generalization (PPL and COH) in MST+MVT when the target is R21578title: (λ20NS = 0.1, γ20NS = 0.001, λTMN = 0.5, γTMN = 0.001, λR21578 = 1.0, γR21578 = 0.001, λAGnews = 1.0, γAGnews = 0.001)
4. Generalization (PPL and COH) in MST+MVT when the target is 20NSsmall: (λ20NS = 0.5, γ20NS = 0.001, λTMN = 0.1, γTMN = 0.001, λR21578 = 0.1, γR21578 = 0.001, λAGnews = 0.1, γAGnews = 0.001)
5. Generalization (PPL and COH) in MST+MVT when the target is Ohsumedtitle: (λAGnews = 0.1, γAGnews = 0.001, λPubMed = 1.0, γPubMed = 0.001)
6. Generalization (PPL and COH) in MST+MVT when the target is Ohsumed: (λAGnews = 0.1, γAGnews = 0.001, λPubMed = 1.0, γPubMed = 0.001)
7. IR in MST+MVT when the target is 20NSshort: (λ20NS = 1.0, γ20NS = 0.1, λTMN = 0.5, γTMN = 0.01, λR21578 = 0.1, γR21578 = 0.001, λAGnews = 1.0, γAGnews = 0.01)
8. IR in MST+MVT when the target is TMNtitle: (λ20NS = 0.1, γ20NS = 0.01, λTMN = 1.0, γTMN = 0.01, λR21578 = 0.1, γR21578 = 0.01, λAGnews = 0.5, γAGnews = 0.001)
9. IR in MST+MVT when the target is R21578title: (λ20NS = 0.1, γ20NS = 0.01, λTMN = 1.0, γTMN = 0.01, λR21578 = 1.0, γR21578 = 0.01, λAGnews = 0.5, γAGnews = 0.001)
10. IR in MST+GVT when the target is 20NSsmall: (γ20NS = 0.01, γTMN = 0.01, γR21578 = 0.1, γAGnews = 0.01)
11. IR in MST+MVT when the target is Ohsumedtitle: (λAGnews = 0.1, γAGnews = 0.001, λPubMed = 1.0, γPubMed = 0.1)
12. IR in MST+MVT when the target is Ohsumed: (λAGnews = 0.1, γAGnews = 0.001, λPubMed = 0.5, γPubMed = 0.1)

The hyper-parameters mentioned above also apply to a single-source transfer configuration.

Additionally, we have provided the code.

While DocNADEe requires the dimension (i.e., E) of the word embeddings to be the same as the latent topic dimension (i.e., H), we first apply a projection on the concatenation of the pretrained word embeddings obtained from several sources and then introduce the prior knowledge in each autoregressive step following DocNADEe. We apply it in configurations where Glove and/or FastText (E=300) are employed; a sketch of this projection follows. 
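A minimal sketch of this projection for a single word, where W_proj is an illustrative projection matrix (learned with the model) mapping the 600-d concatenation down to H = 200; all names are assumptions for the example:

import numpy as np

def project_pretrained(e_glove, e_fasttext, W_proj):
    # Concatenate the two 300-d pretrained vectors of a word and project them
    # to the H = 200 latent topic dimensions expected in each autoregressive
    # step of DocNADEe.
    e = np.concatenate([e_glove, e_fasttext])        # (600,)
    return e @ W_proj                                # (200,)

rng = np.random.default_rng(0)
out = project_pretrained(rng.normal(size=300), rng.normal(size=300),
                         0.01 * rng.normal(size=(600, 200)))
print(out.shape)  # (200,)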
In these settings, we use a single mixture weight λ ∈ [1.0, 0.5, 0.1] over the projected vector, which is then introduced in TM following DocNADEe.

C.5 EXPERIMENTAL SETUP FOR NVDM AND PRODLDA

For NVDM, we run the code available at github.com/ysmiao/nvdm and train for 200 topics.

For ProdLDA, we run the code available at github.com/akashgit/autoencoding_vi_for_topic_models and train for 200 topics.

C.6 EXPERIMENTAL SETUP FOR GLOVE-DMM

We used LFTM (https://github.com/datquocnguyen/LFTM) to train the glove-DMM model. It is trained for 200 iterations with 2000 initial iterations, using 200 topics. For short texts, we set the hyperparameter beta to 0.1; for long texts, to 0.01. The mixture parameter lambda was set to 0.6 for all datasets. The IR task was performed using relative topic proportions as input: we inferred the topic distributions of the training and test documents and computed document similarities based on the inferred relative topic distributions.

C.7 EXPERIMENTAL SETUP FOR DOC2VEC

We used gensim (https://github.com/RaRe-Technologies/gensim) to train the Doc2Vec models. Models were trained with distributed bag of words for 1000 iterations, using a window size of 5 and a vector size of 500." } ]
2019
null
SP:a289020322570c222c7bfdd2c6da0bd2cac95381
[ "This paper proposes a method that saves memory and computation in the task of video prediction by low-rank tensor representations via tensor decomposition. The method is able to outperform standard convolutional lstm and other methods by using less parameters when testing it in the Moving MNIST and KTH datasets. The authors also present a proof to validate their method.", "This paper proposed a convolutional tensor-train (CTT) format based high-order and convolutional LSTM approach for long-term video prediction. This paper is well-motivated. Video data usually have high dimensional input, and the proposed method aims to explicitly take into account more than one hidden representation of previous frames - both lead to a huge number of parameters. Therefore, some sort of parameter reduction is needed. This paper considers two different types of operations - convolution and tensor-train (TT) decomposition - in an interleaved way. The basic model considered in this paper is a high-order variant of convolutional LSTM (convLSTM)." ]
Long-term video prediction is highly challenging since it entails simultaneously capturing spatial and temporal information across a long range of image frames. Standard recurrent models are ineffective since they are prone to error propagation and cannot effectively capture higher-order correlations. A potential solution is to extend to higher-order recurrent models. However, such a model requires a large number of parameters and operations, making it intractable to learn and prone to overfitting in practice. In this work, we propose the Convolutional Tensor-Train LSTM (Conv-TT-LSTM), which learns a higher-order Convolutional Long Short-Term Memory (ConvLSTM) efficiently using Convolutional Tensor-Train Decomposition (CTTD). Our proposed model naturally incorporates higher-order spatio-temporal information with low memory and computational requirements via efficient low-rank tensor-train representations. We evaluate our model on the Moving MNIST and KTH datasets and show improvements over standard ConvLSTM and other ConvLSTM-based approaches, with far fewer parameters.
[]
[ { "authors": [ "Alexandre Alahi", "Kratarth Goel", "Vignesh Ramanathan", "Alexandre Robicquet", "Li Fei-Fei", "Silvio Savarese" ], "title": "Social lstm: Human trajectory prediction in crowded spaces", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Animashree Anandkumar", "Rong Ge", "Daniel Hsu", "Sham M Kakade", "Matus Telgarsky" ], "title": "Tensor decompositions for learning latent variable models", "venue": "The Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Mohammad Babaeizadeh", "Chelsea Finn", "Dumitru Erhan", "Roy H Campbell", "Sergey Levine" ], "title": "Stochastic variational video prediction", "venue": "arXiv preprint arXiv:1710.11252,", "year": 2017 }, { "authors": [ "Samy Bengio", "Oriol Vinyals", "Navdeep Jaitly", "Noam Shazeer" ], "title": "Scheduled sampling for sequence prediction with recurrent neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Wonmin Byeon", "Qin Wang", "Rupesh Kumar Srivastava", "Petros Koumoutsakos" ], "title": "Contextvp: Fully context-aware video prediction", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Emily Denton", "Robert Fergus" ], "title": "Stochastic video generation with a learned prior", "venue": "In 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Emily L Denton" ], "title": "Unsupervised learning of disentangled representations from video", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Chelsea Finn", "Sergey Levine" ], "title": "Deep visual foresight for planning robot motion", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2017 }, { "authors": [ "Chelsea Finn", "Ian Goodfellow", "Sergey Levine" ], "title": "Unsupervised learning for physical interaction through video prediction", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Jun-Ting Hsieh", "Bingbin Liu", "De-An Huang", "Li F Fei-Fei", "Juan Carlos Niebles" ], "title": "Learning to decompose and disentangle representations for video prediction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Nal Kalchbrenner", "Aäron van den Oord", "Karen Simonyan", "Ivo Danihelka", "Oriol Vinyals", "Alex Graves", "Koray Kavukcuoglu" ], "title": "Video pixel networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Yong-Deok Kim", "Eunhyeok Park", "Sungjoo Yoo", "Taelim Choi", "Lu Yang", "Dongjun Shin" ], "title": "Compression of deep convolutional neural networks for fast and low power mobile applications", "venue": "arXiv preprint arXiv:1511.06530,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Arinbjörn Kolbeinsson", "Jean Kossaifi", "Yannis Panagakis", "Anima Anandkumar", "Ioanna Tzoulaki", "Paul Matthews" ], "title": "Stochastically rank-regularized tensor 
regression networks", "venue": null, "year": 1902 }, { "authors": [ "Tamara G Kolda", "Brett W Bader" ], "title": "Tensor decompositions and applications", "venue": "SIAM review,", "year": 2009 }, { "authors": [ "Jean Kossaifi", "Zachary Lipton", "Aran Khanna", "Tommaso Furlanello", "Anima Anandkumar" ], "title": "Tensor regression networks", "venue": null, "year": 2017 }, { "authors": [ "Jean Kossaifi", "Adrian Bulat", "Yannis Panagakis", "Maja Pantic" ], "title": "Efficient n-dimensional convolutions via higher-order factorization", "venue": "arXiv preprint arXiv:1906.06196,", "year": 2019 }, { "authors": [ "Ivan Laptev", "Barbara Caputo" ], "title": "Recognizing human actions: a local svm approach", "venue": "In null,", "year": 2004 }, { "authors": [ "Vadim Lebedev", "Yaroslav Ganin", "Maksim Rakhuba", "Ivan Oseledets", "Victor Lempitsky" ], "title": "Speeding-up convolutional neural networks using fine-tuned cp-decomposition", "venue": "arXiv preprint arXiv:1412.6553,", "year": 2014 }, { "authors": [ "Alex X Lee", "Richard Zhang", "Frederik Ebert", "Pieter Abbeel", "Chelsea Finn", "Sergey Levine" ], "title": "Stochastic adversarial video prediction", "venue": "arXiv preprint arXiv:1804.01523,", "year": 2018 }, { "authors": [ "William Lotter", "Gabriel Kreiman", "David Cox" ], "title": "Deep predictive coding networks for video prediction and unsupervised learning", "venue": "arXiv preprint arXiv:1605.08104,", "year": 2016 }, { "authors": [ "Xindian Ma", "Peng Zhang", "Shuai Zhang", "Nan Duan", "Yuexian Hou", "Dawei Song", "Ming Zhou" ], "title": "A tensorized transformer for language modeling", "venue": null, "year": 1906 }, { "authors": [ "Michael Mathieu", "Camille Couprie", "Yann LeCun" ], "title": "Deep multi-scale video prediction beyond mean square error", "venue": "arXiv preprint arXiv:1511.05440,", "year": 2015 }, { "authors": [ "Alexander Novikov", "Dmitrii Podoprikhin", "Anton Osokin", "Dmitry P Vetrov" ], "title": "Tensorizing neural networks", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Ivan V Oseledets" ], "title": "Tensor-train decomposition", "venue": "SIAM Journal on Scientific Computing,", "year": 2011 }, { "authors": [ "Rohollah Soltani", "Hui Jiang" ], "title": "Higher order recurrent neural networks", "venue": "arXiv preprint arXiv:1605.00064,", "year": 2016 }, { "authors": [ "Nitish Srivastava", "Elman Mansimov", "Ruslan Salakhudinov" ], "title": "Unsupervised learning of video representations using lstms", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Jiahao Su", "Jingling Li", "Bobby Bhattacharjee", "Furong Huang" ], "title": "Tensorized spectrum preserving compression for neural networks", "venue": "arXiv preprint arXiv:1805.10352,", "year": 2018 }, { "authors": [ "Andros Tjandra", "Sakriani Sakti", "Satoshi Nakamura" ], "title": "Compressing recurrent neural network with tensor train", "venue": "In 2017 International Joint Conference on Neural Networks (IJCNN),", "year": 2017 }, { "authors": [ "Ruben Villegas", "Jimei Yang", "Seunghoon Hong", "Xunyu Lin", "Honglak Lee" ], "title": "Decomposing motion and content for natural video sequence prediction", "venue": "arXiv preprint arXiv:1706.08033,", "year": 2017 }, { "authors": [ "Yunbo Wang", "Mingsheng Long", "Jianmin Wang", "Zhifeng Gao", "S Yu Philip" ], "title": "Predrnn: Recurrent neural networks for predictive learning using spatiotemporal lstms", "venue": "In Advances in Neural Information Processing 
Systems,", "year": 2017 }, { "authors": [ "Yunbo Wang", "Zhifeng Gao", "Mingsheng Long", "Jianmin Wang", "Philip S Yu" ], "title": "Predrnn++: Towards a resolution of the deep-in-time dilemma in spatiotemporal predictive learning", "venue": "arXiv preprint arXiv:1804.06300,", "year": 2018 }, { "authors": [ "Yunbo Wang", "Lu Jiang", "Ming-Hsuan Yang", "Li-Jia Li", "Mingsheng Long", "Li Fei-Fei" ], "title": "Eidetic 3d lstm: A model for video prediction and beyond", "venue": null, "year": 2018 }, { "authors": [ "Zhou Wang", "Alan C Bovik", "Hamid R Sheikh", "Eero P Simoncelli" ], "title": "Image quality assessment: from error visibility to structural similarity", "venue": "IEEE transactions on image processing,", "year": 2004 }, { "authors": [ "SHI Xingjian", "Zhourong Chen", "Hao Wang", "Dit-Yan Yeung", "Wai-Kin Wong", "Wang-chun Woo" ], "title": "Convolutional lstm network: A machine learning approach for precipitation nowcasting", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Stephan Zheng", "Anima Anandkumar", "Yisong Yue" ], "title": "Long-term forecasting using tensor", "venue": "Volume", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Understanding dynamics of videos and performing long-term predictions of the future is a highly challenging problem. It entails learning complex representation of real-world environment without external supervision. This arises in a wide range of applications, including autonomous driving, robot control (Finn & Levine, 2017), or other visual perception tasks like action recognition or object tracking (Alahi et al., 2016). However, long-term video prediction remains an open problem due to high complexity of the video contents. Therefore, prior works mostly focus on next or first few frames prediction (Lotter et al., 2016; Finn et al., 2016; Byeon et al., 2018).\nMany recent video models use Convolutional LSTM (ConvLSTM) as a basic block (Xingjian et al., 2015), where spatio-temporal information is encoded as a tensor explicitly in each cell. In ConvLSTM networks, each cell is a first-order recurrent model, where the hidden state is updated based on its immediate previous step. Therefore, they cannot easily capture higher-order temporal correlations needed for long-term prediction. Moreover, they are highly prone to error propagation.\nVarious approaches have been proposed to augment ConvLSTM, either by modifying networks to explicitly modeling motion (Finn et al., 2016), or by integrating spatio-temporal interaction in ConvLSTM cells (Wang et al., 2017; 2018a). These approaches are often incapable of capturing longterm dependencies and produce blurry prediction.\nAnother direction to augment ConvLSTM is to incorporate a higher-order RNNs (Soltani & Jiang, 2016) inside each LSTM cell, where its hidden state is updated using multiple past steps. However, a higher-order model for high-dimensional data (e.g. video) requires a huge number of model parameters, and the computation grows exponentially with the order of the RNNs. A principled approach to address the curse of dimensionality is tensor decomposition, where a higher-order tensor is compressed into smaller core tensors (Anandkumar et al., 2014). Tensor representations are powerful since they retain rich expressivity even with a small number of parameters. In this work, we propose a novel convolutional tensor decomposition, which allows for compact higher-order ConvLSTM.\nContributions. We propose Convolutional Tensor-Train LSTM (Conv-TT-LSTM), a modification of ConvLSTM, to build a higher-order spatio-temporal model. (1) We introduce Convolutional Tensor-Train Decomposition (CTTD) that factorizes a large convolutional kernel into a chain of\nsmaller tensors. (2) We integrate CTTD into ConvLSTM and propose Conv-TT-LSTM, which learns long-term dynamics in video sequence with a small number of model parameters. (3) We propose two versions of Conv-TT-LSTM: Fixed Window (FW) and Sliding Window (SW) (See Figures 1b and 1c), and we found that the SW version performs better than the FW one. (4) We found that training higher-order tensor models is not straightforward due to gradient instability. We present several approaches to overcome this such as good learning schedules and gradient clipping. (5) In the experiments, we show our proposed Conv-TT-LSTM consistently produces sharp prediction over a long period of time for both Moving-MNIST-2 and KTH action datasets. Conv-TT-LSTM outperforms the state-of-the-art PredRNN++ (Wang et al., 2018a) in LPIPS (Zhang et al., 2018) by 0.050 on the Moving-MNIST-2 and 0.071 on the KTH action dataset, with 5.6 times fewer parameters. 
Thus, we obtain best of both worlds: better long-term prediction and model compression." }, { "heading": "2 RELATED WORK", "text": "Tensor Decomposition In machine learning, tensor decompositions, including CP decomposition (Anandkumar et al., 2014), Tucker decomposition (Kolda & Bader, 2009), and tensor-train decomposition (Oseledets, 2011), are widely used for dimensionality reduction (Cichocki et al., 2016) and learning probabilistic models (Anandkumar et al., 2014). In deep learning, prior works focused on their application in model compression, where the parameters tensors are factorized into smaller tensors. This technique has been used in compressing convolutional networks (Lebedev et al., 2014; Kim et al., 2015; Novikov et al., 2015; Su et al., 2018; Kossaifi et al., 2017; Kolbeinsson et al., 2019; Kossaifi et al., 2019), recurrent networks (Tjandra et al., 2017; Yang et al., 2017) and transformers (Ma et al., 2019). Specifically, Yang et al. (2017) demonstrates that the accuracy of video classification increases if the parameters in recurrent networks are compressed by tensor-train decomposition (Oseledets, 2011). Yu et al. (2017) used tensor-train decomposition to constrain the complexity of higher-order LSTM, where each next step is computed based on the outer product of previous steps. While this work only considers vector input at each step, we extend their approach to higher-order ConvLSTM, where each step also encodes spatial information.\nVideo Prediction Prior works on video prediction have focused on several directions: predicting short-term video (Lotter et al., 2016; Byeon et al., 2018), decomposing motion and contents (Finn et al., 2016; Villegas et al., 2017; Denton et al., 2017; Hsieh et al., 2018), improving the objective function Mathieu et al. (2015), and handling diversity of the future (Denton & Fergus, 2018;\nBabaeizadeh et al., 2017; Lee et al., 2018). Many of these works use Convolutional LSTM (ConvLSTM) (Xingjian et al., 2015) as a base module, which deploys 2D convolutional operations in LSTM to efficiently exploit spatio-temporal information. Finn et al. (2016) used ConvLSTM to model pixel motion. Some works modified the standard ConvLSTM to better capture spatio-temporal correlations (Wang et al., 2017; 2018a). Wang et al. (2018b) integrated 3D convolutions into ConvLSTM. In addition, current cell states are combined with its historical records using self-attention to efficiently recall the history information. Byeon et al. (2018) applied ConvLSTM in all possible directions to capture full contexts in video and also demonstrated strong performance using a deep ConvLSTM network as a baseline. This baseline is adapted to obtain the base architecture in the present paper." }, { "heading": "3 TENSOR-TRAIN DECOMPOSITION AND SEQUENCE MODELING", "text": "The goal of tensor decomposition is to represent a higher-order tensor as a set of smaller and lowerorder core tensors, with fewer parameters while preserve essential information. In Yu et al. (2017), tensor-train decomposition (Oseledets, 2011) is used to reduce both parameters and computations in higher-order recurrent models, which we review in the first part of this section.\nHowever, the approach in Yu et al. (2017) only considers recurrent models with vector inputs and cannot cope with image inputs directly. In the second part, we extend the standard tensor-train decomposition to convolutional tensor-train decomposition (CTTD). 
With CTTD, a large convolutional kernel is factorized into a chain of smaller kernels. We show that such decomposition can reduce both parameters and operations of higher-order spatio-temporal recurrent models.\nStandard Tensor-train decomposition Given an m-order tensor T ∈ RI1×···×Im , where Il is the dimension of its l-th order, a standard tensor-train decomposition (TTD) (Oseledets, 2011) factorizes the tensor T into a set of m core tensors {T (l)}ml=1 with T (l) ∈ RIl×Rl×Rl+1 such that\nTi1,··· ,im , R1∑\nr1=1\n· · · Rm−1∑\nrm−1=1\nT (1)i1,1,r1 T (2) i2,r1,r2 · · · T (m)im,rm−1,1 (1)\nwhere tensor-train ranks {Rl}ml=0 (with R0 = Rm = 1) control the number of parameters in the tensor-train format Eq.(1). With TTD, the original tensor T of size ( ∏m l=1 Il) is compressed to\n( ∑m\nl=1 IlRl−1Rl) entries, which grows linearly with the order m (assuming Rl’s are constant). Therefore, TTD is commonly used to approximate higher-order tensors with fewer parameters.\nThe sequential structure in tensor-train decomposition makes it particularly suitable for sequence modeling (Yu et al., 2017). Consider a higher-order recurrent model that predicts a scalar output v ∈ R based on the outer product of a sequence of input vectors {u(l) ∈ RIl}ml=1 according to:\nv = 〈 T , ( u(1) ⊗ · · · ⊗ u(m) )〉 = I1∑ i1=1 · · · Im∑ im=1 Ti1,··· ,im u (1) i1 · · · u(m)im (2)\nThis model is intractable in practice since the number of parameters in T ∈ RI1×···Im (and therefore computational complexity of Eq. (2)) grows exponentially with the order m. Now suppose T takes a tensor-train format as in Eq. (1), we prove in Appendix A that (2) can be efficiently computed as\nv(l)rl = Il∑ il=1 Rl∑ rl−1=1 T (l)il,rl−1,rl v (l−1) rl−1 u (l) il , ∀l ∈ [m] (3)\nwhere the vectors {v(l) ∈ RRl}ml=1 are the intermediate steps, with v(0) ∈ R initialized as v(0) = 1, and final output v = v(m). Notice that the higher-order tensor T is never reconstructed in the sequential process in Eq. (3), therefore both space and computational complexities grow linearly (not exponentially compared to Eq. (2))with the order m assuming all tensor-train ranks are constants.\nConvolutional Tensor-Train Decomposition A convolutional layer in neural network is typically parameterized by a 4-th order tensor T ∈ RK×K×Rm×R0 , where K is the kernel size, Rm and R0 are the number of input and output channels respectively. Suppose the kernel size K takes the form K = m(k − 1) + 1 (e.g. K = 7 and m = 3, k = 3), a convolutional tensor-train decomposition\n(CTTD) factorizes T into a set of m core tensors {T (l)}ml=1 with T (l) ∈ Rk×k×Rl×Rl−1 such that\nT:,:,rm,r0 , R1∑\nr1=1\n· · · Rm−1∑\nrm−1=1\nT (1):,:,r1,r0 ∗ T (2) :,:,r2,r1 ∗ · · · ∗ T (m) :,:,rm,rm−1 (4)\nwhere ∗ denotes convolution between 2D-filters, and {Rl}ml=1 are the convolutional tensor-train ranks that control the complexity of the convolutional tensor-train format in Eq. (4). With CTTD, the number of parameters in the decomposed format reduces from K2R0Rm to (∑m l=1 k 2Rl−1Rl ) .\nSimilar to standard TTD, its convolutional counterpart can also be used to compress higher-order spatio-temporal recurrent models with convolutional operations. 
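To make the sequential evaluation concrete, here is a minimal NumPy sketch of Eq. (3) (our own illustration, not code from the paper; all names are ours), checked against the explicit reconstruction of Eqs. (1)-(2):

```python
import numpy as np

def tt_contract(cores, inputs):
    """Eq. (3): evaluate v without ever reconstructing the full tensor T.

    cores:  list of m arrays; cores[l] has shape (I_l, R_{l-1}, R_l), R_0 = R_m = 1.
    inputs: list of m vectors; inputs[l] has shape (I_l,).
    """
    v = np.ones(1)                                   # v^(0) = 1 (R_0 = 1)
    for core, u in zip(cores, inputs):
        v = np.einsum('irs,r,i->s', core, v, u)      # one step of Eq. (3)
    return v.item()                                  # v^(m) is a scalar (R_m = 1)

# Sanity check against Eqs. (1)-(2) on a small 3rd-order example.
rng = np.random.default_rng(0)
I, R = [3, 4, 5], [1, 2, 2, 1]                       # mode sizes I_l and TT-ranks R_l
cores = [rng.standard_normal((I[l], R[l], R[l + 1])) for l in range(3)]
inputs = [rng.standard_normal(n) for n in I]

T = np.einsum('iar,jrs,ksb->ijk', *cores)            # reconstruct T as in Eq. (1)
brute = np.einsum('ijk,i,j,k->', T, *inputs)         # outer-product model of Eq. (2)
assert np.isclose(tt_contract(cores, inputs), brute)
```

Each step of the loop costs O(I_l R_{l-1} R_l) operations, so the total work grows linearly in m, as claimed.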
Consider a model that predicts a 3-rd order feature V ∈ RH×W×R0 based on a sequence of 3-rd features {U (l) ∈ RH×W×Rl}ml=1 (where H , W are height/width of the features and Rl is the number of channels in U (l)) such that\nV:,:,r0 = m∑ l=1 W(l):,:,rl,r0 ∗ U (l) :,:,rl , withW(l) = CTTD ( {T (l)}ml=k ) ,∀l ∈ [m] (5)\nwhereW(l) ∈ R[l(k−1)+1]×[l(k−1)+1]×Rl×R0 is the corresponding weights tensor for U (l). Suppose each W(l) takes a convolutional tensor-train format in Eq. (4), we prove in Appendix A that the model in Eq. (5) can be computed sequentially similarly without reconstructing the originalW(l)’s:\nV(l−1):,:,rl−1 = Rl∑\nrl=1\nT (l):,:,rl,rl−1 ∗ ( V(l):,:,rl + U (l) :,:,rl ) , ∀l ∈ [m] (6)\nwhere {V(l) ∈ RH×W×Rl}ml=1 are intermediate results of the sequential process, where V(m) ∈ RH×W×Rm is initialized as all zeros and final prediction V = V(0). The operations in Eq. (5) is illustrated in Figure 1a. In this paper, we denote the Eq.(5) simply as V = CTT({T (l)}ml=1, {U (l)}ml=1)." }, { "heading": "4 CONVOLUTIONAL TENSOR-TRAIN LSTM NETWORKS", "text": "Convolutional LSTM is a basic block for most recent video forecasting models (Xingjian et al., 2015), where the spatial information is encoded explicitly as tensors in the LSTM cells. In a ConvLSTM network, each cell is a first-order Markov model, i.e. the hidden state is updated based on its previous step. In this section, we propose convolutional tensor-train LSTM, where convolutional tensor-train is incorporated to model multi-steps spatio-temporal correlation explicitly.\nNotations. In this section, the symbol ∗ is overloaded to denote convolution between higher-order tensors. For instance, given a 4-th order weights tensor W ∈ RK×K×S×C and a 3-rd order input tensor X ∈ RH×W×S , Y = W ∗ X computes a 3-rd output tensor Y ∈ RH×W×T as Y:,:,c =∑\ns=1W:,:,s,c ∗ X:,:,s. The symbol ◦ is used to denote element-wise product between two tensors, and σ represents a function that performs element-wise (nonlinear) transformation on a tensor.\nConvolutional LSTM Xingjian et al. (2015) extended fully-connected LSTM (FC-LSTM) to Convolutional LSTM (ConvLSTM) to model spatio-temporal structures within each recurrent unit, where all features are encoded as 3-rd order tensors with dimensions (height × width × channels) and matrix multiplications are replaced by convolutions between tensors. In a ConvLSTM cell, the parameters are characterized by two 4-th order tensorsW ∈ RK×K×S×4C and T ∈ RK×K×C×4C , where K is the kernel size of all convolutions and S and C are the numbers of channels of the input X (t) ∈ RH×W×S and hidden states H(t) ∈ RH×W×C respectively. At each time step t, a ConvLSTM cell updates its hidden states H(t) ∈ RH×W×C based on the previous step H(t−1) and the current input X (t), where H and W are the height/width that are the same for X (t) andH(t).[\nI(t);F (t); C̃(t);O(t) ] = σ ( W ∗ X (t) + T ∗ H(t−1) ) (7)\nC(t) = C̃(t) ◦ I(t); H(t) = O(t) ◦ C(t) (8)\nwhere σ(·) applies sigmoid on the input gate I(t), forget gate F (t), output gateO(t), and hyperbolic tangent on memory cell C̃(t). Note that all tensors C(t), I(t), F (t), O(t) ∈ RH×W×C are 3-rd order.\nConvolutional Tensor-Train LSTM In Conv-TT-LSTM, we introduce a higher-order recurrent unit to capture multi-steps spatio-temporal correlations in LSTM, where the hidden state H(t) is updated based on its n previous steps {H(t−l)}nl=1 with anm-order convolutional tensor-train (CTT) as in Eq. (5). 
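The backward recursion of Eq. (6) maps directly onto a loop of small 2D convolutions. Below is a minimal PyTorch sketch (our illustration; the channels-first layout, 'same' zero padding, and the use of conv2d, which is a cross-correlation, are simplifications of ours, not details from the paper):

```python
import torch
import torch.nn.functional as F

def conv_tt(cores, feats):
    """Sequential CTT evaluation of Eq. (6), running from l = m down to l = 1.

    cores: list of m kernels; in 0-indexed form, cores[l] has shape
           (R_l, R_{l+1}, k, k), i.e. it maps R_{l+1} channels down to R_l.
    feats: list of m feature maps U^(l); feats[l] has shape (1, R_{l+1}, H, W).
    Returns V = V^(0) of shape (1, R_0, H, W) without forming any W^(l).
    """
    v = torch.zeros_like(feats[-1])                     # V^(m) initialized to zeros
    for core, u in zip(reversed(cores), reversed(feats)):
        pad = core.shape[-1] // 2                       # keep H x W at every step
        v = F.conv2d(v + u, core, padding=pad)          # V^(l-1) = T^(l) * (V^(l) + U^(l))
    return v

m, k, H, W = 3, 3, 8, 8
R = [4, 2, 2, 2]                                        # [R_0, R_1, R_2, R_3]
cores = [torch.randn(R[l], R[l + 1], k, k) for l in range(m)]
feats = [torch.randn(1, R[l + 1], H, W) for l in range(m)]
out = conv_tt(cores, feats)                             # shape (1, 4, 8, 8)
```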
Concretely, suppose the parameters in CTT are characterized bym tensors of 4-th order {T (o)}mo=1, Conv-TT-LSTM replaces Eq. (7) in ConvLSTM by two equations:\nH̃(t,o) = f ( K(o), {H(t−l)}nl=1 ) ,∀o ∈ [m] (9)[\nI(t);F (t); C̃(t);O(t) ] = σ ( W ∗ X (t) + CTT ( {T (o)}mo=1, {H̃(t,o)}mo=1 )) (10)\n(1) Since CCT({T (l)}ml=1, ·) takes a sequence ofm tensors as inputs, the first step in Eq. (9) maps the n inputs {H(t−l)}nl=1 to m intermediate tensors {H(t,o)}mo=1 with a function f . (2) These m tensors {H̃(t,o)}mo=1 are then fed into CCT({T (l)}ml=1, ·) and compute the gates according to Eq. (10).\nWe propose two realizations of Eq. (9), where the first realization uses a fixed window of {H(t−l)}nl=1 to compute each H̃(t,o), while the second one adopts a sliding window strategy. At each step, the Conv-TT-LSTM model computesH(t) by replacing Eq. (9) by either Eq. (11a) or (11b).\nConv-TT-LSTM-FW: H̃(t,o) = K(o) ∗ Ĥ(t,o) = K(o) ∗ [ H(t−n); · · · ;H(t−1) ] (11a)\nConv-TT-LSTM-SW: H̃(t,o) = K(o) ∗ Ĥ(t,o) = K(o) ∗ [ H(t−n+m−l); · · · ;H(t−l) ] (11b)\nIn the fixed window version, the previous steps {H(l)}nl=1 are concatenated into a 3-rd order tensor Ĥ(t,o) ∈ RH×W×nC , which is then mapped to a tensor H̃(t,o) ∈ RH×W×R by 2D-convolution with a kernel K(l) ∈ Rk×k×nC×R. And in the sliding window version, {H(l)}nl=1 are concatenated into a 4-th order tensor Ĥ(t,o) ∈ RH×W×D×C (with D = n − m + 1), which is mapped to H̃(t,o) ∈ RH×W×R by 3D-convolution with a kernel K(l) ∈ Rk×k×D×R. For later reference, we name the model with Eqs.(11a) and (10) as Conv-TT-LSTM-FW and the one with Eqs.(11b) and (10) as Conv-TT-LSTM-SW. Figure 1b and Figure 1c visualize the difference between these two variants." }, { "heading": "5 EXPERIMENTS", "text": "We first evaluate our approach extensively on the synthetic Moving-MNIST-2 dataset (Srivastava et al., 2015). In addition, we use KTH human action dataset (Laptev et al., 2004) to test the performance of our models in more realistic scenario.\nModel Architecture All experiments use a stack of 12-layers of ConvLSTM or Conv-TT-LSTM with 32 channels for the first and last 3 layers, and 48 channels for the 6 layers in the middle. A convolutional layer is applied on top of all LSTM layers to compute the predicted frames. Following Byeon et al. (2018), two skip connections performing concatenation over channels are added between (3, 9) and (6, 12) layers. Illustration of the network architecture is included in the appendix. All parameters are initialized by Xavier’s normalized initializer (Glorot & Bengio, 2010) and initial states in ConvLSTM or Conv-TT-LSTM are initialized as zeros.\nEvaluation Metrics We use two traditional metrics MSE (or PSNR) and SSIM (Wang et al., 2004), and a recently proposed deep-learning based metric LPIPS (Zhang et al., 2018), which measures the similarity between deep features. Since MSE (or PSNR) is based on pixel-wise difference, it favors vague and blurry predictions, which is not a proper measurement of perceptual similarity. While SSIM was originally proposed to address the problem, Zhang et al. (2018) shows that their proposed LPIPS metric aligns better to human perception.\nLearning Strategy All models are trained with ADAM optimizer (Kingma & Ba, 2014) with L1 + L2 loss. Learning rate decay and scheduled sampling (Bengio et al., 2015) are used to ease training. 
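The scheduled sampling mentioned above (Bengio et al., 2015) feeds the model a mix of ground-truth frames and its own predictions during training; the exact schedule used here is spelled out in the next paragraph. A minimal sketch of such a rollout, assuming a hypothetical one-step interface model(frame, state) -> (prediction, state):

```python
import torch

def rollout(model, frames, sample_ratio):
    """Scheduled-sampling rollout (a sketch, not the authors' training code).

    frames:       ground-truth clip of shape (T, B, C, H, W).
    sample_ratio: probability of feeding the ground-truth frame; decayed toward
                  zero so the model gradually consumes its own predictions.
    """
    state, pred, preds = None, frames[0], []
    for t in range(frames.shape[0] - 1):
        feed_truth = torch.rand(()).item() < sample_ratio
        inp = frames[t] if feed_truth else pred
        pred, state = model(inp, state)     # hypothetical one-step predictor
        preds.append(pred)
    return torch.stack(preds)               # predictions aligned with frames[1:]
```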
Scheduled sampling is started once the model does not improve in 20 epochs (in term of validation loss), and the sampling ratio is decreased linearly from 1 until it reaches zero (by 2 × 10−4 each epoch for Moving-MNIST-2 and 5× 10−4 for KTH). Learning rate decay is further activated if the loss does not drop in 20 epochs, and the rate is decreased exponentially by 0.98 every 5 epochs.\nHyper-parameters Selection We perform a wide range of hyper-parameters search for Conv-TTLSTM to identify the best model, and Table 1 summarizes our search values. The initial learning rate\nof 10−3 is found for the models of kernel size 3 and 10−4 for the models of kernel size 5. We found that Conv-TT-LSTM models suffer from exploding gradients when learning rate is high (e.g. 10−3 in our experiments), therefore we also explore various gradient clipping values and select 1 for all Conv-TT-LSTM models. All hyper-parameters are selected using the best validation performance." }, { "heading": "5.1 MOVING-MNIST-2 DATASET", "text": "The Moving-MNIST-2 dataset is generated by moving two digits of size 28× 28 in MNIST dataset within a 64 × 64 black canvas. These digits are placed at a random initial location, and move with constant velocity in the canvas and bounce when they reach the boundary. Following Wang et al. (2018a), we generate 10,000 videos for training, 3,000 for validation, and 5,000 for test with default parameters in the generator1. All our models are trained to predict 10 frames given 10 input frames.\nMulti-Steps Prediction Table 2 reports the average statistics for 10 and 30 frames prediction, and Figure 2 shows comparison of per-frame statistics for PredRNN++ model, ConvLSTM baseline and our proposed Conv-TT-LSTM models. (1) Our Conv-TT-LSTM models consistently outperform the\n1 https://github.com/jthsieh/DDPAE-video-prediction/blob/master/data/moving_mnist.py 2The results are cited from the original paper, where the miscalculation of MSE is corrected in the table. 3The results are reproduced from https://github.com/Yunbo426/predrnn-pp with the same datasets in this paper. The original implementation crops each frame into patches as the input to the model. We find out such pre-processing is unnecessary and the performance is better than the original paper.\n12-layer ConvLSTM baseline for both 10 and 30 frames prediction with fewer parameters; (2) The Conv-TT-LSTMs outperform previous approaches in terms of SSIM and LPIPS (especially on 30 frames prediction), with less than one fifth of the model parameters.\nWe reproduce the PredRNN++ model (Wang et al., 2018a) from their source code2, and we find that (1) The PredRNN++ model tends to output vague and blurry results in long-term prediction (especially after 20 steps). (2) and our Conv-TT-LSTMs are able to produce sharp and realistic digits over all steps. An example of comparison for different models is shown in Figure 3. The visualization is consistent with the results in Table 2 and Figure 2.\nAblation Study To understand whether our proposed Conv-TT-LSTM universally improves upon ConvLSTM (i.e. not tied to specific architecture, loss function and learning schedule), we perform three ablation studies: (1) Reduce the number of layers from 12 layers to 4 layers (same as Xingjian et al. (2015) and Wang et al. (2018a)); (2) Change the loss function from L1 + L2 to L1 only; (3) Disable the scheduled sampling and use teacher forcing during training process. 
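As a concrete reference for the Moving-MNIST-2 setup described in Section 5.1, the sketch below renders digits that start at random locations, move with constant velocity, and bounce at the canvas boundary. It follows only the parameters stated above; the velocity range is our assumption, and the official generator linked in the footnote differs in details. The evaluation of the three ablation settings continues right after the sketch.

```python
import numpy as np

def moving_mnist(digits, T=20, size=64, seed=0):
    """Render T frames of 28x28 digits bouncing inside a size x size canvas.

    digits: list of 28x28 float arrays (e.g. MNIST images scaled to [0, 1]).
    """
    rng = np.random.default_rng(seed)
    video = np.zeros((T, size, size), dtype=np.float32)
    pos = rng.uniform(0, size - 28, (len(digits), 2))   # random initial location
    vel = rng.uniform(-3, 3, (len(digits), 2))          # constant velocity (assumed range)
    for t in range(T):
        for d, digit in enumerate(digits):
            for a in range(2):                          # bounce off the boundary
                if not 0 <= pos[d, a] + vel[d, a] <= size - 28:
                    vel[d, a] = -vel[d, a]
            pos[d] += vel[d]
            y, x = pos[d].astype(int)
            region = video[t, y:y + 28, x:x + 28]
            np.maximum(region, digit, out=region)       # overlapping digits: per-pixel max
    return video                                        # e.g. 10 input + 10 target frames
```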
We evaluate the ConvLSTM baseline and our proposed Conv-TT-LSTM in these three settings, and summarize their comparisons in Table 3. The results show that our proposed Conv-TT-LSTM outperforms ConvLSTM consistently for all settings, i.e. the Conv-TT-LSTM model improves upon ConvLSTM in a board range of setups, which is not limited to the certain setting used in our paper. These ablation studies further show that our setup is optimal for predictive learning in Moving-MNIST-2." }, { "heading": "5.2 KTH ACTION DATASET", "text": "KTH action dataset (Laptev et al., 2004) contains videos of 25 individuals performing 6 types of actions on a simple background. Our experimental setup follows Wang et al. (2018a), which uses\npersons 1-16 for training and 17-25 for testing, and each frame is resized to 128 × 128 pixels. All our models are trained to predict 10 frames given 10 input frames. During training, we randomly select 20 contiguous frames from the training videos as a sample and group every 10,000 samples into one epoch to apply the learning strategy as explained at the beginning of this section.\nResults In Table 4, we report the evaluation on both 20 and 40 frames prediction. (1) Our models are consistently better than the ConvLSTM baseline for both 20 and 40 frames prediction. (2) While our proposed Conv-TT-LSTMs achieve lower SSIM value compared to the state-of-the-art models in 20 frames prediction, they outperform all previous models in LPIPS for both 20 and 40 frames prediction. An example of the predictions by the baseline and Conv-TT-LSTMs is shown in Figure 3." }, { "heading": "6 CONCLUSION", "text": "In this paper, we proposed convolutional tensor-train decomposition to factorize a large convolutional kernel into a set of smaller core tensors. We applied this technique to efficiently construct convolutional tensor-train LSTM (Conv-TT-LSTM), a high-order spatio-temporal recurrent model whose parameters are represented in tensor-train format. We empirically demonstrated that our proposed Conv-TT-LSTM outperforms standard ConvLSTM and produce better/comparable results compared to other state-of-the-art models with fewer parameters. Utilizing the proposed model for high-resolution videos is still challenging due to gradient vanishing or explosion. Future direction will include investigating other training strategies or a model design to ease the training process.\n4 Wang et al. (2018b) mentions that the number of parameters is similar to PredRNN++ (Wang et al., 2018a)." }, { "heading": "A PROOF OF THE SEQUENTIAL ALGORITHMS IN SECTION 3", "text": "In this section, we prove the sequential algorithms in Eq. (3) for tensor-train decomposition (1) and Eq. (6) for convolutional tensor-train decomposition (4) both by induction.\nProof of Eq. (3) For simplicity, we denote the standard tensor-train decomposition in Eq. (1) as T = TTD({T (l)}ml=1), then Eq. (2) can be rewritten as Eq. 
(12) since R0 = 1 and v (0) 1 = 1.\nv = R0∑ r0=1 I1∑ i1=1 · · · Im∑ im=1 TTD ( {T (l)}ml=1 ) i1,··· ,im v(0)r0 ( u(1) ⊗ · · · ⊗ u(m) ) i1,··· ,im\n(12)\n= R0∑ r0=1 I1∑ i1=1 · · · Im∑ im=1 R1∑ r1=1 · · · Rm−1∑ rm−1=1 T (1)i1,r0,r1 · · · T (m) im,rm−1,rm v(0)r0 u(1)i1 · · ·u(m)im (13)\n=\nR1∑ r1=1 I2∑ i2=1 · · · Im∑ im=1 R2∑ r2=1 · · · Rm−1∑ rm−1=1 T (2)i2,r1,r2 · · · T (m) im,rm−1,rm (\nR0∑ r0=1 I1∑ i1=1 T (1)i1,r0,r1 v (0) r0 u (1) i1 ) u (2) i2 · · ·u(m)im\n(14)\n= R1∑ r1=1 I2∑ i2=1 · · · Im∑ im=1 TTD ( {T (l)}ml=2 ) i1,··· ,im v(1)r1 ( u(2) ⊗ · · · ⊗ u(m) ) i2,··· ,im\n(15)\nwhere R0 = 1, v (0) 1 = 1 and the sequential algorithm in Eq. (3) is performed at Eq. (14).\nProof of Eq. (6) For simplicity, we denote the convolutional tensor-train decomposition in Eq. (4) as T = CTTD(T (l))ml=1, then Eq. (5) can be rewritten as (16) since V(m) is an all zeros tensor.\nV:,:,r0 =\nm∑ l=1 Rl∑ rl=1 CTTD ( {T (t)}lt=1 ) :,:,rl,r0 ∗ U (l):,:,rl+\nRm∑ rm=1 CTTD ( {T (t)}mt=1 ) :,:,rm,r0 ∗ V(m):,:,rm\n(16)\n=\nm−1∑ l=1 Rl∑ rl=1 CTTD ( {T (t)}lt=1 ) :,:,rl,r0 ∗ U (l):,:,rl+\nRm∑ rm=1 CTTD ( {T (t)}mt=1 ) :,:,rm,r0 ∗ ( U (m):,:,rm + V (m) :,:,rm ) (17)\nNote that the second term in Eq. (17) can now be simplified as\nRm∑ rm=1 CTTD ( {T (t)}mt=1 ) :,:,rm,r0 ∗ ( U (m):,:,rm + V (m) :,:,rm ) (18)\n= Rm∑ rm=1 R1∑ r1=1 · · · Rm−1∑ rm−1=1 T (1):,:,r1,r0 ∗ · · · ∗ T (m) :,:,rm,rm−1 ∗ (U (m):,:,rm + V(m):,:,rm) (19)\n=\nRm−1∑ rm−1=1 R1∑ r1=1 · · · Rm−1∑ rm−1=1 T (1):,:,r1,r0 ∗ · · · ∗ T (m−1) :,:,rm−1,rm−2 ∗ [\nRm∑ rm=1 T (m):,:,rm,rm−1 ∗ ( U (m):,:,rm + V (m) :,:,rm\n)] (20)\n= Rm−1∑ rm−1=1 CTTD ( {T (t)}m−1t=1 ) :,:,rm−1,r0 ∗ V(m−1):,:,rm−1 (21)\nwhere the sequential algorithm in Eq. (5) is performed to achieve Eq. (21) from Eq. (20). Plugging Eq. (21) into Eq. (17), we reduce Eq. (17) back to the form as Eq. (16).\nV:,:,r0 =\nm−1∑ l=1 Rl∑ rl=1 CTTD ( {T (t)}lt=1 ) :,:,rl,r0 ∗ U (l):,:,rl+\nRm∑ rm=1 CTTD ( {T (t)}m−1t=1 ) :,:,rm−1,r0 ∗ V(m−1):,:,rm−1\n(22)\nwhich completes the induction." }, { "heading": "B SUPPLEMENTARY MATERIAL OF THE EXPERIMENTS", "text": "All experiments use a stack of 12-layers of ConvLSTM or Conv-TT-LSTM with 32 channels for the first and last 3 layers, and 48 channels for the 6 layers in the middle. A convolutional layer is applied on top of all LSTM layers to compute the predicted frames, followed by an optional sigmoid function (In the experiments, we add sigmoid for KTH dataset but not for Moving-MNIST-2). Additionally, two skip connections performing concatenation over channels are added between (3, 9) and (6, 12) layers as is shown in Figure 5." } ]
2019
null
SP:1c4adc8ff01ce8ca27baf0d7e438634fa84e26e7
[ "The paper proposes a method for learning partial orders based on learning fuzzy pairwise comparisons (smaller than / greater than / approximately equal to), and the retention of a set of representatives in each chain in order to allow consistent results by consistency maximization. The method is applied to the problem of estimating age from faces. The proposed extensions are the learning of multiple disjoint chains on this dataset, either manually partitioned or learned through an iterative assignment algorithm that resembles a soft expectation-maximization principle. Extensive experiments are conducted on the age estimation problem with comparisons to existing approaches, and an application to aesthetic assessment is proposed in the appendix.", "This paper presents an order learning method and applies it to age estimation from facial images. It designs a pairwise comparator that categorizes the ordering relationship between two instances into the ternary classes of greater than, similar to, and smaller than. Instead of directly estimating the class of each instance, it learns the pairwise ordering relationship between two instances." ]
We propose order learning to determine the order graph of classes, representing ranks or priorities, and classify an object instance into one of the classes. To this end, we design a pairwise comparator to categorize the relationship between two instances into one of three cases: one instance is ‘greater than,’ ‘similar to,’ or ‘smaller than’ the other. Then, by comparing an input instance with reference instances and maximizing the consistency among the comparison results, the class of the input can be estimated reliably. We apply order learning to develop a facial age estimator, which provides the state-of-the-art performance. Moreover, the performance is further improved when the order graph is divided into disjoint chains using gender and ethnic group information or even in an unsupervised manner.
[ { "affiliations": [], "name": "Kyungsun Lim" }, { "affiliations": [], "name": "Nyeong-Ho Shin" }, { "affiliations": [], "name": "Young-Yoon Lee" }, { "affiliations": [], "name": "Chang-Su Kim" } ]
[]
[ { "heading": "1 INTRODUCTION", "text": "To measure the quality of something, we often compare it with other things of a similar kind. Before assigning 4 stars to a film, a critic would have thought, “It is better than 3-star films but worse than 5-stars.” This ranking through pairwise comparisons is done in various decision processes (Saaty, 1977). It is easier to tell the nearer one between two objects in a picture than to estimate the distance of each object directly (Chen et al., 2016; Lee & Kim, 2019a). Also, it is easy to tell a higher pitch between two notes, but absolute pitch is a rare ability (Bachem, 1955).\nRanking through comparisons has been investigated for machine learning. In learning to rank (LTR), the pairwise approach learns, between two documents, which one is more relevant to a query (Liu, 2009). Also, in ordinal regression (Frank & Hall, 2001; Li & Lin, 2007), to predict the rank of an object, binary classifications are performed to tell whether the rank is higher than a series of thresholds or not. In this paper, we propose order learning to learn ordering relationship between objects. Thus, order learning is related to LTR and ordinal regression. However, whereas LTR and ordinal regression assume that ranks form a total order (Hrbacek & Jech, 1984), order learning can be used for a partial order as well. Order learning is also related to metric learning (Xing et al., 2003). While metric learning is about whether an object is ‘similar to or dissimilar from’ another object, order learning is about ‘greater than or smaller than.’ Section 2 reviews this related work.\nIn order learning, a set of classes, Θ = {θ1, θ2, · · · , θn}, is ordered, where each class θi represents one or more object instances. Between two classes θi and θj , there are three possibilities: θi > θj or θi < θj or neither (i.e. incomparable). These relationships are represented by the order graph. The goal of order learning is to determine the order graph and then classify an instance into one of the classes in Θ. To achieve this, we develop a pairwise comparator that determines ordering relationship between two instances x and y into one of three categories: x is ‘greater than,’ ‘similar to,’ or ‘smaller than’ y. Then, we use the comparator to measure an input instance against multiple reference instances in known classes. Finally, we estimate the class of the input to maximize the consistency among the comparison results. It is noted that the parameter optimization of the pairwise comparator, the selection of the references, and the discovery of the order graph are jointly performed to minimize a common loss function. Section 3 proposes this order learning.\nWe apply order learning to facial age estimation. Order learning matches age estimation well, since it is easier to tell a younger one between two people than to estimate each person’s age directly (Chang et al., 2010; Zhang et al., 2017a). Even when we assume that age classes are linearly ordered, the proposed age estimator performs well. The performance is further improved, when classes are\ndivided into disjoint chains in a supervised manner using gender and ethnic group information or even in an unsupervised manner. Section 4 describes this age estimator and discusses its results. Finally, Section 5 concludes this work." }, { "heading": "2 RELATED WORK", "text": "Pairwise comparison: It is a fundamental problem to estimate the priorities (or ranks) of objects through pairwise comparison. 
In the classic paper, Saaty (1977) noted that, even when direct estimates of certain quantities are unavailable, rough ratios between them are easily obtained in many cases. Thus, he proposed the scaling method to reconstruct absolute priorities using only relative priorities. The scaling method was applied to monocular depth estimation (Lee & Kim, 2019a) and aesthetic assessment (Lee & Kim, 2019b). Ranking from a pairwise comparison matrix has been studied to handle cases, in which the matrix is huge or some elements are noisy (Braverman & Mossel, 2008; Jamieson & Nowak, 2011; Negahban et al., 2012; Wauthier et al., 2013). On the other hand, the pairwise approach to LTR learns, between two documents, which one is more relevant to a query (Liu, 2009; Herbrich et al., 1999; Burges et al., 2005; Tsai et al., 2007). The proposed order learning is related to LTR, since it also predicts the order between objects. But, while LTR sorts multiple objects with unknown ranks and focuses on the sorting quality, order learning compares a single object x with optimally selected references with known ranks to estimate the rank of x.\nOrdinal regression: Ordinal regression predicts an ordinal variable (or rank) of an instance. Suppose that a 20-year-old is misclassified as a 50-year old and a 25-year old, respectively. The former error should be more penalized than the latter. Ordinal regression exploits this characteristic in the design of a classifier or a regressor. In Frank & Hall (2001) and Li & Lin (2007), a conversion scheme was proposed to transform an ordinal regression problem into multiple binary classification problems. Ordinal regression based on this conversion scheme has been used in various applications, including age estimation (Chang et al., 2010; 2011; Niu et al., 2016; Chen et al., 2017) and monocular depth estimation (Fu et al., 2018). Note that order learning is different from ordinal regression. Order learning performs pairwise comparison between objects, instead of directly estimating the rank of each object. In age estimation, ordinal regression based on the conversion scheme is concerned with the problem, “Is a person’s age bigger than a threshold θ?” for each θ. In contrast, order learning concerns “Between two people, who is older?” Conceptually, order learning is easier. Technically, if there are N ranks, the conversion scheme requires N − 1 binary classifiers, but order learning needs only a single ternary classifier. Moreover, whereas ordinal regression assumes that ranks form a total order, order learning can be used even in the case of a partial order (Hrbacek & Jech, 1984).\nMetric learning: A distance metric can be learned from examples of similar pairs of points and those of dissimilar pairs (Xing et al., 2003). The similarity depends on an application and is implicitly defined by user-provided examples. If a learned metric generalizes well to unseen data, it can be used to enforce the desired similarity criterion in clustering (Xing et al., 2003), classification (Weinberger et al., 2006), or information retrieval (McFee & Lanckriet, 2010). Both metric learning and order learning learn important binary relations in mathematics: metric and order (Hrbacek & Jech, 1984). However, a metric decides whether an object x is similar to or dissimilar from another object y, whereas an order tells whether x is greater than or smaller than y. 
Thus, a learned metric is useful for grouping similar data, whereas a learned order is suitable for processing ordered data.\nAge estimation: Human ages can be estimated from facial appearance (Kwon & da Vitoria Lobo, 1994). Geng et al. (2007) proposed the aging pattern subspace, and Guo et al. (2009) introduced biologically inspired features to age estimation. Recently, deep learning has been adopted for age estimation. Niu et al. (2016) proposed OR-CNN for age estimation, which is an ordinal regressor using the conversion scheme. Chen et al. (2017) proposed Ranking-CNN, which is another ordinal regressor. While OR-CNN uses a common feature for multiple binary classifiers, Ranking-CNN employs a separate CNN to extract a feature for each binary classifier. Tan et al. (2018) grouped adjacent ages via the group-n encoding, determined whether a face belongs to each group, and combined the results to predict the age. Pan et al. (2018) proposed the mean-variance loss to train a CNN classifier for age estimation. Shen et al. (2018) proposed the deep regression forests for age estimation. Zhang et al. (2019) developed a compact age estimator using the two-points representation. Also, Li et al. (2019) proposed a continuity-aware probabilistic network for age estimation." }, { "heading": "3 ORDER LEARNING", "text": "" }, { "heading": "3.1 WHAT IS ORDER?", "text": "Let us first review mathematical definitions and concepts related to order. An order (Hrbacek & Jech, 1984; Bartle, 1976), often denoted by ≤, is a binary relation on a set Θ = {θ1, θ2, · · · , θn} that satisfies the three properties of\n• Reflexivity: θi ≤ θi for every θi ∈ Θ; • Antisymmetry: If θi ≤ θj and θj ≤ θi, then θi = θj ; • Transitivity: If θi ≤ θj and θj ≤ θk, then θi ≤ θk.\nIn real-world problems, an order describes ranks or priorities of objects. For example, in age estimation, θi ≤ θj means that people in age class θi look younger than those in θj . We may use the symbol→, instead of≤, to denote an order on a finite set Θ. Then, the order can be represented by a directed graph (Gross & Yellen, 2006) using elements in Θ as nodes. If θi → θj , there is a directed edge from node θi to node θj . The order graph is acyclic because of antisymmetry and transitivity. For example, for n,m ∈ N, let n → m denote that m is a multiple of n. Note that it is an order on any subset of N. Figure 1(a) is the graph representing this order on {1, . . . , 9}. Elements θi and θj are comparable if θi → θj or θj → θi, or incomparable otherwise. In Figure 1(a), 6 and 8 are incomparable. In age estimation, it is difficult to compare apparent ages of people in different ethnic groups or of different genders.\nAn order on a set Θ is total (or linear) if all elements in Θ are comparable to one another. In such a case, Θ is called a linearly ordered set. In some real-world problems, orders are not linear. In this work, a subset Θc of Θ is referred to as a chain, if Θc is linearly ordered and also maximal, i.e. there is no proper superset of Θc that is linearly ordered. In Figure 1(a), nodes 1, 2, 4, and 8 form a chain. In Figure 1(b), the entire set is composed of three disjoint chains." }, { "heading": "3.2 ORDER LEARNING – BASICS", "text": "Let Θ = {θ1, θ2, · · · , θn} be an ordered set of classes, where each class θi represents one or more object instances. For example, in age estimation, age class 11 is the set of 11-year-olds. 
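Before formalizing the learning objective, note that the divisibility order of Figure 1(a) can be checked mechanically. A small sketch (ours, not from the paper) builds the relation on {1, ..., 9} and verifies the three axioms:

```python
from itertools import product

S = range(1, 10)
leq = {(a, b) for a, b in product(S, S) if b % a == 0}  # a -> b iff b is a multiple of a

assert all((a, a) in leq for a in S)                    # reflexivity
assert all(a == b for (a, b) in leq if (b, a) in leq)   # antisymmetry
assert all((a, c) in leq                                # transitivity
           for (a, b) in leq for (b2, c) in leq if b2 == b)

print((6, 8) in leq or (8, 6) in leq)                   # False: 6 and 8 are incomparable
print(all(p in leq for p in [(1, 2), (2, 4), (4, 8)]))  # True: 1 -> 2 -> 4 -> 8 is a chain
```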
The objective of order learning is to determine the order graph, such as Figure 1(a) or (b), and categorize an object instance into one of the classes. However, in many cases, order graphs are given explicitly or obvious from the contexts. For example, in quality assessment, there are typically five classes (poor → satisfactory → good → very good → excellent), forming a single chain. Also, in age estimation, suppose that an algorithm first classifies a person’s gender into female or male and then estimates the age differently according to the gender. In this case, implicitly, there are separate age classes for each gender, and the age classes compose two disjoint chains similarly to Figure 1(b). Thus, in this subsection, we assume that the order graph is already known. Also, given an object instance, we assume that the chain to which the instance belongs is known. Then, we attempt to categorize the instance into one of the classes in the chain. Section 3.4 will propose the order learning in the case of an unknown order graph, composed of disjoint chains.\nInstead of directly estimating the class of each instance, we learn pairwise ordering relationship between two instances. Let Θc = {0, 1, . . . , N − 1} be a chain, where N is the number of classes. Let\nx and y be two instances belonging to classes in Θc. Let θ(·) denote the class of an instance. Then, x and y are compared and their ordering relationship is defined according to their class difference as\nx y if θ(x)− θ(y) > τ, (1) x ≈ y if |θ(x)− θ(y)| ≤ τ, (2) x ≺ y if θ(x)− θ(y) < −τ, (3)\nwhere τ is a threshold. To avoid confusion, we use ‘ , ≈, ≺’ for the instance ordering, while ‘>, =, <’ for the class order. In practice, the categorization in (1)∼(3) is performed by a pairwise comparator in Figure 2, which consists of a Siamese network and a ternary classifier (Lee & Kim, 2019b). To train the comparator, only comparable instance pairs are employed.\nWe estimate the class θ(x) of a test instance x by comparing it with reference instances ym, 0 ≤ m ≤ M − 1, where M is the number of references. The references are selected from training data such that they are from the same chain as x. Given x and ym, the comparator provides one of three categories ‘ , ≈, ≺’ as a result. Let θ′ be an estimate of the true class θ(x). Then, the consistency between the comparator result and the estimate is defined as φcon(x, ym, θ\n′) = (4)[ x ym ][ θ′ − θ(ym) > τ ] + [ x ≈ ym ][ |θ′ − θ(ym)| ≤ τ ] + [ x ≺ ym ][ θ′ − θ(ym) < −τ ] where [·] is the indicator function. The function φcon(x, ym, θ′) returns either 0 for an inconsistent case or 1 for a consistent case. For example, suppose that the pairwise comparator declares x ≺ ym but θ′− θ(ym) > τ . Then, φcon(x, ym, θ′) = 0 ·1 + 0 ·0 + 1 ·0 = 0. Due to a possible classification error of the comparator, this inconsistency may occur even when the estimate θ′ equals the true class θ(x). To maximize the consistency with all references, we estimate the class of x by\nθ̂MC(x) = arg max θ′∈Θc M−1∑ m=0 φcon(x, ym, θ ′), (5)\nwhich is called the maximum consistency (MC) rule. Figure 3 illustrates this MC rule.\nIt is noted that ‘ , ≈, ≺’ is not an mathematical order. For example, if θ(x) + 34τ = θ(y) = θ(z)− 34τ , then x ≈ y and y ≈ z but x ≺ z. This is impossible in an order. More precisely, due to the quantization effect of the ternary classifier in (1)∼(3), ‘ , ≈, ≺’ is quasi-transitive (Sen, 1969), and ‘≈’ is symmetric but intransitive. 
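The MC rule in (5) is a small computation once the comparator outputs are given. A sketch (ours), with hypothetical comparator decisions encoded as +1 ('greater'), 0 ('similar'), and -1 ('smaller'):

```python
import numpy as np

def mc_estimate(comp, ref_classes, tau, candidates):
    """Maximum-consistency rule of Eq. (5).

    comp:        comparator decision against each reference y_m (+1 / 0 / -1).
    ref_classes: theta(y_m) for each reference.
    tau:         threshold of Eqs. (1)-(3).
    candidates:  candidate classes Theta_c to search over.
    """
    comp, refs = np.asarray(comp), np.asarray(ref_classes)
    def consistency(theta):                  # sum of phi_con over all references
        d = theta - refs
        pred = np.where(d > tau, 1, np.where(d < -tau, -1, 0))
        return int(np.sum(pred == comp))
    return max(candidates, key=consistency)

# Five references of classes 10..50: the comparator says the input is greater
# than the first two, similar to the third, and smaller than the last two.
print(mc_estimate([1, 1, 0, -1, -1], [10, 20, 30, 40, 50], 0, range(60)))  # 30
```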
We use this quasi-transitive relation to categorize an instance into one of the classes, on which a mathematical order is well defined." }, { "heading": "3.3 ORDER LEARNING – SUPERVISED CHAINS", "text": "" }, { "heading": "3.3.1 SINGLE-CHAIN HYPOTHESIS (1CH)", "text": "In the simplest case of 1CH, all classes form a single chain Θc = {0, 1, . . . , N − 1}. For example, in 1CH age estimation, people’s ages are estimated regardless of their ethnic groups or genders.\nWe implement the comparator in Figure 2 using CNNs, as described in Section 4.1. Let qxy = (qxy0 , q xy 1 , q xy 2 ) be the one-hot vector, indicating the ground-truth ordering relationship between training instances x and y. Specifically, (1, 0, 0), (0, 1, 0), and (0, 0, 1) represent x y, x ≈ y, and x ≺ y. Also, pxy = (pxy0 , p xy 1 , p xy 2 ) is the corresponding softmax probability vector of the comparator. We train the comparator to minimize the comparator loss\n`co = − ∑ x∈T ∑ y∈R 2∑ j=0 qxyj log p xy j (6)\nwhere T is the set of all training instances and R ⊂ T is the set of reference instances. First, we initialize R = T and minimize `co via the stochastic gradient descent. Then, we reduce the reference setR by sampling references from T . Specifically, for each class in Θc, we choose M/N reference images to minimize the same loss `co, where M is the number of all references and N is the number of classes. In other words, the reliability score of a reference candidate y is defined as\nα(y) = ∑ x∈T 2∑ j=0 qxyj log p xy j (7)\nand the M/N candidates with the highest reliability scores are selected. Next, after fixing the reference setR, the comparator is trained to minimize the loss `co. Then, after fixing the comparator parameters, the reference setR is updated to minimize the same loss `co, and so forth. In the test phase, an input instance is compared with the M references and its class is estimated using the MC rule in (5).\n3.3.2 K-CHAIN HYPOTHESIS (KCH)\nIn KCH, we assume that classes form K disjoint chains, as in Figure 1(b). For example, in the supervised 6CH for age estimation, we predict a person’s age according to the gender in {female, male} and the ethnic group in {African, Asian, European}. Thus, there are 6 chains in total. In this case, people in different chains are assumed to be incomparable for age estimation. It is supervised, since gender and ethnic group annotations are used to separate the chains. The supervised 2CH or 3CH also can be implemented by dividing chains by genders only or ethnic groups only.\nThe comparator is trained similarly to 1CH. However, in computing the comparator loss in (6), a training instance x and a reference y are constrained to be from the same chain. Also, during the test, the type (or chain) of a test instance should be determined. Therefore, a K-way type classifier is trained, which shares the feature extractor with the comparator in Figure 2 and uses additional fully-connected (FC) layers. Thus, the overall loss is given by\n` = `co + `ty (8)\nwhere `co is the comparator loss and `ty is the type classifier loss. The comparator and the type classifier are jointly trained to minimize this overall loss `.\nDuring the test, given an input instance, we determine its chain using the type classifier, and compare it with the references from the same chain, and then estimate its class using the MC rule in (5)." 
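The reference-selection step shared by 1CH and KCH amounts to ranking candidates by the reliability score in (7) and keeping the top M/N per class. A minimal sketch (ours; the array layout is an assumption):

```python
import numpy as np

def select_references(q, p, cand_classes, per_class):
    """Greedy reference selection by the reliability score alpha(y) of Eq. (7).

    q, p:         ground-truth one-hot and predicted softmax vectors for every
                  (training instance, candidate) pair, shape (num_train, num_cand, 3).
    cand_classes: class label of each reference candidate, shape (num_cand,).
    per_class:    number of references kept per class (M/N in the paper).
    """
    alpha = np.sum(q * np.log(p + 1e-12), axis=(0, 2))       # alpha(y); higher is better
    refs = []
    for c in np.unique(cand_classes):
        idx = np.flatnonzero(cand_classes == c)
        refs.extend(idx[np.argsort(-alpha[idx])[:per_class]])
    return np.array(refs)                                    # indices of chosen references
```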
}, { "heading": "3.4 ORDER LEARNING – UNSUPERVISED CHAINS", "text": "This subsection proposes an algorithm to separate classes into K disjoint chains when there are no supervision or annotation data available for the separation. First, we randomly partition the training set T into T0, T1, . . . , TK−1, where T = T0∪ . . .∪TK−1 and Tk∩Tl = ∅ for k 6= l. Then, similarly\nAlgorithm 1 Order Learning with Unsupervised Chains Input: T = training set of ordinal data, K = # of chains, N = # of classes in each chain, and M = # of references in each chain 1: Partition T randomly into T0, . . . , TK−1 and train a pairwise comparator 2: for each chain k do . Reference Selection (Rk) 3: From Tk, select M/N references y with the highest reliability scores αk(y) 4: end for 5: repeat 6: for each instance x do . Membership Update (Tk) 7: Assign it to Tk∗ , where k∗ = argmaxk βk(x) subject to the regularization constraint 8: end for 9: Fine-tune the comparator and train a type classifier using T0, . . . , TK−1 to minimize ` = `co + `ty 10: for each instance x do . Membership Refinement (Tk) 11: Assign it to Tk′ where k′ is its type classification result 12: end for 13: for each chain k do . Reference Selection (Rk) 14: From Tk, select M/N references y with the highest reliability scores αk(y) 15: end for 16: until convergence or predefined number of iterations Output: Pairwise comparator, type classifier, reference setsR0, . . . ,RK−1\nto (6), the comparator loss `co can be written as\n`co = − K−1∑ k=0 ∑ x∈Tk ∑ y∈Rk 2∑ j=0 qxyj log p xy j = − K−1∑ k=0 ∑ y∈Rk αk(y) = − K−1∑ k=0 ∑ x∈Tk βk(x) (9)\nwhere Rk ⊂ Tk is the set of references for the kth chain, αk(y) = ∑ x∈Tk ∑ j q xy j log p xy j is the\nreliability of a reference y in the kth chain, and βk(x) = ∑ y∈Rk ∑ j q xy j log p xy j is the affinity\nof an instance x to the references in the kth chain. Note that βk(x) = − ∑ y∈Rk D(q\nxy‖pxy) where D is the Kullback-Leibler distance (Cover & Thomas, 2006). Second, after fixing the chain membership Tk for each chain k, we select references y to maximize the reliability scores αk(y). These references form Rk. Third, after fixing R0, . . . ,RK−1, we update the chain membership T0, . . . , TK−1, by assigning each training instance x to the kth chain that maximizes the affinity score βk(x). The second and third steps are iteratively repeated. Both steps decrease the same loss `co in (9).\nThe second and third steps are analogous to the centroid rule and the nearest neighbor rule in the Kmeans clustering (Gersho & Gray, 1991), respectively. The second step determines representatives in each chain (or cluster), while the third step assigns each instance to an optimal chain according to the affinity. Furthermore, both steps decrease the same loss alternately.\nHowever, as described in Algorithm 1, we modify this iterative algorithm by including the membership refinement step in lines 10 ∼ 12. Specifically, we train a K-way type classifier using T0, . . . , TK−1. Then, we accept the type classification results to refine T0, . . . , TK−1. This refinement is necessary because the type classifier should be used in the test phase to determine the chain of an unseen instance. Therefore, it is desirable to select the references also after refining the chain membership. Also, in line 7, if we assign an instance x to maximize βk(x) only, some classes may be assigned too few training instances, leading to data imbalance. 
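For concreteness, the unconstrained assignment in line 7, whose imbalance issue was just noted, can be sketched as follows (ours; the array layout is an assumption):

```python
import numpy as np

def update_membership(q, p, ref_chain):
    """Unconstrained membership update: assign each x to argmax_k beta_k(x).

    q, p:      ground-truth one-hot and predicted softmax vectors for every
               (instance, reference) pair, shape (num_train, num_refs, 3).
    ref_chain: chain index of each reference, shape (num_refs,).
    """
    ref_chain = np.asarray(ref_chain)
    terms = np.sum(q * np.log(p + 1e-12), axis=2)           # (num_train, num_refs)
    K = int(ref_chain.max()) + 1
    beta = np.stack([terms[:, ref_chain == k].sum(axis=1)   # beta_k(x) in Eq. (9)
                     for k in range(K)], axis=1)
    return np.argmax(beta, axis=1)  # greedy; may leave some classes nearly empty
```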
To avoid this, we enforce the regularization constraint so that every class is assigned at least a predefined number of instances. This regularized membership update is described in Appendix A." }, { "heading": "4 AGE ESTIMATION", "text": "We develop an age estimator based on the proposed order learning. Order learning is suitable for age estimation, since telling the older one between two people is easier than estimating each person’s age directly (Chang et al., 2010; Zhang et al., 2017a)." }, { "heading": "4.1 IMPLEMENTATION DETAILS", "text": "It is less difficult to distinguish between a 5-year-old and a 10-year-old than between a 65-yearold and a 70-year-old. Therefore, in age estimation, we replace the categorization based on the\nTable 1: A summary of the balanced dataset, formed from MORPH II, AFAD, and UTK. An element n m means that, out ofm images in the original dataset, n images are sampled for the balanced dataset.\nMORPH II AFAD UTK Balanced\nMale Female Male Female Male Female Male Female\nAfrican 4,022 36,772\n4,446 5,748\n0 0\n0 0\n2,047 2,319\n1,871 2,209\n6,069 6,317\nAsian 153 153\n17 17\n5,000 100,752\n5,000 63,680\n1,015 1,575\n1,200 1,859\n6,168 6,217\nEuropean 1,852 7,992\n2,602 2,602\n0 0\n0 0\n4,487 5,477\n3,437 4,601\n6,339 6,039\narithmetic difference in (1)∼(3) with that based on the geometric ratio as follows.\nx y if log θ(x)− log θ(y) > τage, (10) x ≈ y if | log θ(x)− log θ(y)| ≤ τage, (11) x ≺ y if log θ(x)− log θ(y) < −τage, (12)\nwhich represent ‘older,’ ‘similar,’ and ‘younger.’ The consistency in (4) is also modified accordingly.\nThere are 5 reference images for each age class within range [15, 80] in this work (M = 330, N = 66). Thus, a test image should be compared with 330 references. However, we develop a twostep approach, which does at most 130 comparisons but performs as good as the method using 330 comparisons. The two-step estimation is employed in all experiments. It is described in Appendix B.\nWe align all facial images using SeetaFaceEngine (Zhang et al., 2014) and resize them into 256 × 256×3. Then, we crop a resized image into 224×224×3. For the feature extractors in Figure 2, we use VGG16 without the FC layers (Simonyan & Zisserman, 2014). They yield 512-channel feature vectors. Then, the vectors are concatenated and input to the ternary classifier, which has three FC layers, yielding 512-, 512-, and 3-channel vectors sequentially. The 3-channel vector is normalized to the softmax probabilities of the three categories ‘ , ≈, ≺.’ In (10)∼(12), τage is set to 0.1. In KCH with K ≥ 2, the type (or chain) of a test image should be determined. Thus, we design a type classifier, which shares the feature extractor with the comparator. Similarly to the ternary classifier, the type classifier uses three FC layers, yielding 512-, 512-, andK-channel vectors sequentially. The comparator and the type classifier are jointly trained.\nTo initialize the feature extractors, we adopt the VGG16 parameters pre-trained on ImageNet (Deng et al., 2009). We randomly initialize all the other layers. We update the parameters using the Adam optimizer (Kingma & Ba, 2014). We set the learning rate to 10−4 for the first 70 epochs. Then, we select 5 references for each age class. Using the selected references, we fine-tune the network with a learning rate of 10−5. We repeat the reference selection and the parameter fine-tuning up to 3 times.\nIn the case of unsupervised chains, we enforce the regularization constraint (line 7 in Algorithm 1). 
In the case of unsupervised chains, we enforce the regularization constraint (line 7 in Algorithm 1). By default, for each age, all chains are constrained to be assigned the same number of training images. If there are L training images of θ-year-olds, the age classes θ in the K chains are each assigned L/K images, according to the affinity scores β_k(x), by Algorithm 2 in Appendix A." }, { "heading": "4.2 DATASETS AND EVALUATION METRICS", "text": "MORPH II (Ricanek & Tesafaye, 2006) is the most popular age estimation benchmark, containing about 55,000 facial images in the age range [16, 77]. IMDB-WIKI (Rothe et al., 2018) is another dataset containing about 500,000 celebrity images obtained from IMDB and Wikipedia. It is sometimes used to pre-train age estimation networks. Optionally, we also select 150,000 clean images from IMDB-WIKI to pre-train the proposed pairwise comparator.

Although several facial age datasets are available, most are biased toward specific ethnic groups or genders. Data imbalance restricts the usability and degrades the generalization performance. Thus, we form a 'balanced dataset' from MORPH II, AFAD (Niu et al., 2016), and UTK (Zhang et al., 2017b).

Table 1 shows how the balanced dataset is organized. Before sampling images from MORPH II, AFAD, and UTK, we rectify inconsistent labels by following the strategy in Yip et al. (2018). For each combination of gender in {female, male} and ethnic group in {African, Asian, European}, we sample about 6,000 images. Also, during the sampling, we attempt to make the age distribution as uniform as possible within range [15, 80]. The balanced dataset is partitioned into training and test subsets with ratio 8 : 2.

For performance assessment, we calculate the mean absolute error (MAE) (Lanitis et al., 2004) and the cumulative score (CS) (Geng et al., 2006). MAE is the average absolute error between predicted and ground-truth ages. Given a tolerance level l, CS computes the percentage of test images whose absolute errors are less than or equal to l. In this work, l is fixed to 5, as done in Chang et al. (2011), Han et al. (2018), and Shen et al. (2018). (A short sketch of both metrics is given below.)" }, { "heading": "4.3 EXPERIMENTAL RESULTS", "text": "Table 2 compares the proposed algorithm (1CH) with conventional algorithms on MORPH II. As evaluation protocols for MORPH II, we use four different settings, including the 5-fold subject-exclusive (SE) and the 5-fold random split (RS) (Chang et al., 2010; Guo & Wang, 2012). Appendix C.1 describes these four settings in detail and provides an extended version of Table 2.

OHRank, OR-CNN, and Ranking-CNN are all based on ordinal regression. OHRank uses traditional features, yielding relatively poor performances, whereas OR-CNN and Ranking-CNN use CNN features. DEX, DRFs, MO-CNN, MV, and BridgeNet employ VGG16 as backbone networks. Among them, MV and BridgeNet achieve the state-of-the-art results, by employing the mean-variance loss and the gating networks, respectively. The proposed algorithm outperforms these algorithms in setting C, which is the most challenging task. Furthermore, in terms of CS, the proposed algorithm yields the best performances in all four settings. These outstanding performances indicate that order learning is an effective approach to age estimation.

In Table 3, we analyze the performances of the proposed algorithm on the balanced dataset according to the number of hypothesized chains. We also implement and train the state-of-the-art MV on the balanced dataset and provide its results using supervised chains.
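For reference, here is a minimal sketch of the two evaluation metrics defined above; it is an illustration, not evaluation code from the paper.

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between predicted and ground-truth ages."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(gt))))

def cs(pred, gt, l=5):
    """Cumulative score: fraction of predictions within tolerance l (here l = 5)."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(gt)) <= l))
```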
Let us first analyze the performances of the proposed algorithm using 'supervised' chains. The MAE and CS scores on the balanced dataset are worse than those on MORPH II, since the balanced dataset contains more diverse data and thus is more challenging. By processing facial images separately according to the genders (2CH), the proposed algorithm reduces MAE by 0.05 and improves CS by 0.2% in comparison with 1CH. Similar improvements are obtained by 3CH and 6CH, which consider the ethnic groups only or both gender and ethnic groups, respectively. In contrast, in the case of MV, multi-chain hypotheses sometimes degrade the performances; e.g., MV (6CH) yields a lower CS than MV (1CH). Regardless of the number of chains, the proposed algorithm trains a single comparator but uses a different set of references for each chain. The comparator is a ternary classifier. In contrast, MV (6CH) should train six different age estimators, each of which is a 66-way classifier, to handle the different chains. Thus, their training is more challenging than that of the single ternary classifier. Note that, for the multi-chain hypotheses, the proposed algorithm first identifies the chain of a test image using the type classifiers, whose accuracies are about 98%. In Table 3, these type classifiers are used to obtain the results of the proposed algorithm, whereas the ground-truth gender and ethnic group of each test image are used for MV.

Figure 4 shows how to estimate an age in 6CH. In this test, the subject is a 22-year-old Asian male. He is compared with the references who are also Asian males. Using the comparison results, the age is correctly estimated as 22 by the MC rule in (5).

Table 4 lists the MAE results for each test chain. Europeans yield poorer MAEs than Africans or Asians. However, this is not due to inherent differences between ethnic groups. It is rather caused by differences in image qualities. As listed in Table 1, more European faces are sampled from UTK. The UTK faces were crawled from the Internet and their qualities are relatively low. Also, from the cross-chain test results using 6CH, some observations can be made:

• Except for the As-F test chain, the lowest MAE is achieved by the references in the same chain.
• Eu-M and Eu-F are mutually compatible. For Eu-M, the second best performance is obtained by the Eu-F references, and vice versa.

Figure 5 shows how training images are divided into two chains in the unsupervised 2CH. During the membership update, for each age, each chain is regularized to include at least a certain percentage (κ) of the training images. In the default mode, the two chains are assigned the same number of images with κ = 50%. However, Appendix C.3 shows that the performance is not very sensitive to κ. At κ = 10%, MAE = 4.17 and CS = 73.7%. From Figure 5, we observe:

• The division of the chains is not clearly related to genders or ethnic groups. Regardless of genders or ethnic groups, about half of the images are assigned to chain 1 and the others to chain 2.
• At κ = 10%, chain 1 mostly consists of middle ages, while chain 2 consists of 10s, 20s, 60s, and 70s.
• At κ = 50%, there is no such strong age-dependent tendency. But, for some combinations of gender, ethnic group, and age band, the division is not equal. For example, for Asian females, a majority of 40s are assigned to chain 1, but a majority of 50s and 60s are assigned to chain 2.

The unsupervised algorithm is designed to divide instances into multiple clusters when gender and ethnic group information is unavailable. As shown in Appendix C.3, different κ's yield various clustering results.
Surprisingly, these different clusters still outperform the supervised algorithm.

For example, at κ = 10%, let us consider the age band of 20s and 30s. If the references in chain 2 are used to estimate the ages of people in chain 1, the average error is 4.6 years. On the contrary, if the references in chain 1 are used for chain 2, the average error is −5.4 years. These opposite biases mean that people in chain 1 tend to look older than those in chain 2. These 'looking-older' people in their 20s and 30s compose the blue cluster (chain 1) together with most people in their 40s and 50s in Figure 5. In this case, 'looking-older' people in their 20s and 30s are separated from 'looking-younger' ones by the unsupervised algorithm. This is more effective than the gender-based or ethnic-group-based division of the supervised algorithm. Appendix C presents more results on age estimation." }, { "heading": "5 CONCLUSIONS", "text": "Order learning was proposed in this work. In order learning, classes form an ordered set, and each class represents object instances of the same rank. Its goal is to determine the order graph of classes and classify a test instance into one of the classes. To this end, we designed the pairwise comparator to learn ordering relationships between instances. We then decided the class of an instance by comparing it with reference instances in the same chain and maximizing the consistency among the comparison results. For age estimation, it was shown that the proposed algorithm yields the state-of-the-art performance even in the case of the single-chain hypothesis. The performance is further improved when the order graph is divided into multiple disjoint chains.

In this paper, we assumed that the order graph is composed of disjoint chains. However, there are order graphs more complicated than disjoint chains, e.g., Figure 1(a). For example, it is hard to recognize an infant's sex from its facial image (Porter et al., 1984). But, after puberty, males and females take divergent paths. This can be reflected by an order graph that consists of two chains sharing common nodes up to a certain age. It is an open problem to generalize order learning to find an optimal order graph that is not restricted to disjoint chains." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This work was supported by 'The Cross-Ministry Giga KOREA Project' grant funded by the Korea government (MSIT) (No. GK19P0200, Development of 4D reconstruction and dynamic deformable action model based hyperrealistic service technology), and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. NRF-2018R1A2B3003896)." }, { "heading": "A REGULARIZED MEMBERSHIP UPDATE", "text": "During the chain membership update in Algorithm 1, we assign an instance x to chain k to maximize β_k(x) subject to the regularization constraint. As mentioned in Section 4.1, in age estimation, this regularization is enforced for each age. Let X denote the set of θ-year-olds for a certain θ. Also, let K = {0, 1, ..., K − 1} be the set of chains. Suppose that we should assign at least a certain number (L) of instances in X to each chain. This is done by calling RegularAssign(K, X, L) in Algorithm 2, which is a recursive function. Algorithm 2 yields the membership function c(x) as output. For example, c(x) = 1 means that x belongs to chain 1.

Algorithm 2 RegularAssign(K, X, L)
Input: K = set of chains, X = set of instances, and L = minimum number
1: for each k ∈ K do . Initialize chains
2:   X_k = ∅
3: end for
4: for each x ∈ X do . Irregular partitioning
5:   c(x) = argmax_{k ∈ K} β_k(x)
6:   X_{c(x)} = X_{c(x)} ∪ {x}
7: end for
8: k_m = argmin_{k ∈ K} |X_k| . Chain of the minimum size
9: if |X_{k_m}| ≥ L then
10:   return
11: else
12:   X = X − X_{k_m}
13:   while |X_{k_m}| < L do . Increase X_{k_m}
14:     x′ = argmax_{x ∈ X} β_{k_m}(x)
15:     X = X − {x′}
16:     X_{k_m} = X_{k_m} ∪ {x′}
17:   end while
18:   RegularAssign(K − {k_m}, X, L) . Recursion
19: end if
Output: Membership function c(x)
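As a cross-check of the recursion, here is a minimal Python sketch of RegularAssign; it is an illustration under the stated definitions, not the authors' code. The affinity β_k(x) is abstracted as a callable `beta(k, x)`, instances are assumed to be hashable identifiers, and there are assumed to be enough instances to satisfy the minimum L for every chain.

```python
def regular_assign(chains, instances, L, beta, c=None):
    """Recursive regularized assignment (Algorithm 2): every chain gets >= L instances."""
    c = {} if c is None else c
    members = {k: [] for k in chains}
    for x in instances:                               # irregular partitioning (lines 4-7)
        c[x] = max(chains, key=lambda k: beta(k, x))
        members[c[x]].append(x)
    k_m = min(chains, key=lambda k: len(members[k]))  # chain of the minimum size (line 8)
    if len(members[k_m]) >= L:
        return c
    rest = [x for x in instances if c[x] != k_m]      # X = X - X_{k_m} (line 12)
    while len(members[k_m]) < L:                      # grow the smallest chain (lines 13-17)
        x_best = max(rest, key=lambda x: beta(k_m, x))
        rest.remove(x_best)
        members[k_m].append(x_best)
        c[x_best] = k_m
    remaining = [k for k in chains if k != k_m]
    return regular_assign(remaining, rest, L, beta, c)  # recursion (line 18)
```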
" }, { "heading": "B TWO-STEP ESTIMATION", "text": "There are 5 reference images for each age within range [15, 80] in this work. Thus, for the age estimation of a test image using the MC rule in (5), the test image should be compared with M = 330 reference images. However, we reduce the number of comparisons using a two-step approach. First, the test image is compared with the 35 references of ages 15, 25, ..., 75 only, and a rough age estimate θ̂1 is obtained using the MC rule. Second, it is compared with the 105 references of all ages within [θ̂1 − 10, θ̂1 + 10], and the final estimate θ̂2 is obtained. Since there are at least 10 common references in the first and second steps, the two-step estimation requires at most 130 comparisons." }, { "heading": "C MORE EXPERIMENTS", "text": "C.1 PERFORMANCE COMPARISON ON MORPH II

Four experimental settings are used for performance comparison on MORPH II (Ricanek & Tesafaye, 2006).

C.2 GENERALIZATION PERFORMANCE OF COMPARATOR ON FG-NET

We assess the proposed age estimator (1CH) on the FG-NET database (Panis et al., 2016). FG-NET is a relatively small dataset, composed of 1,002 facial images of 82 subjects. Ages range from 0 to 69. For FG-NET, the leave-one-person-out (LOPO) approach is often used for evaluation. In other words, to perform tests on each subject, an estimator is trained using the remaining 81 subjects. Then, the results are averaged over all 82 subjects.

In order to assess the generalization performance, we do not retrain the comparator on the FG-NET data. Instead, we fix the comparator trained on the balanced dataset and just select references from the remaining subjects' faces in each LOPO test. For the comparator, the arithmetic scheme in (1)∼(3) is tested as well as the default geometric scheme in (10)∼(12). For comparison, MV (Pan et al., 2018) is tested, but it is trained for each LOPO test.

Table 6 summarizes the comparison results. MV provides better average performances on the entire age range [0, 69] than the proposed algorithm does. This is because the balanced dataset does not include subjects of ages between 0 and 14. If we reduce the test age range to [15, 69], the proposed algorithm outperforms MV, even though the comparator is not retrained. These results indicate that the comparator generalizes well to unseen data, as long as the training images cover a desired age range. Also, note that the geometric scheme provides better performances than the arithmetic scheme.

C.3 PERFORMANCE ACCORDING TO κ

C.4 PERFORMANCE ACCORDING TO THRESHOLDS τ AND τ_age

The ordering relationship between two instances can be categorized via the arithmetic scheme in (1)∼(3) using a threshold τ or the geometric scheme in (10)∼(12) using a threshold τ_age. Table 8 lists the performances of the proposed algorithm (1CH) according to these thresholds. We see that the geometric scheme outperforms the arithmetic scheme in general. The best performance is achieved with τ_age = 0.1, which is used in all experiments in the main paper. (The 'similar' band implied by this threshold is worked out below.)
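To make the threshold concrete: |log θ(x) − log θ(y)| ≤ τ_age = 0.1 means the age ratio satisfies θ(y)/θ(x) ∈ [e^{−0.1}, e^{0.1}] ≈ [0.905, 1.105], so the half-width of the 'similar' band around an age θ is roughly 0.105·θ. A two-line check (an illustration, not from the paper):

```python
import numpy as np

for theta in (13, 45):  # a teenager and a forty-something
    half_width = theta * (np.exp(0.1) - 1.0)  # ages judged 'similar' to theta (upper side)
    print(theta, round(half_width, 2))        # -> 13 1.37, then 45 4.73
```

This matches the statement below that two teenagers stop being 'similar' at an age difference of about 1 year, and two forty-year-olds at about 5 years.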
Note that the scores are poorer than those in Table 3, since the comparator is trained for a smaller number of epochs to facilitate this test. At τ_age = 0.1, two teenagers are declared to be not 'similar to' each other if their age difference is larger than about 1 year. Also, two forty-year-olds are not 'similar' if their age difference is larger than about 5 years.

C.5 PERFORMANCE ACCORDING TO NUMBER OF REFERENCES

C.6 REFERENCE IMAGES

Figure 8 shows all references in the supervised 6CH.

[Figure 8 (images omitted): All reference images in the supervised 6CH, arranged by chain (male/female × African/Asian/European) and by age from 15 to 80. For some ages in certain chains, the balanced dataset includes less than 5 faces. In such cases, there are less than 5 references.]

C.7 AGE ESTIMATION EXAMPLES

[Figure 9 (images omitted): Age estimation results of the proposed algorithm (supervised 6CH). For each face, the estimated label is provided together with the ground truth in parentheses. In (a), success cases, the ages are estimated correctly; in the last row, third column, the ethnic group is misclassified, which happens rarely. In (b), failure cases are provided. These are hard examples due to various challenging factors, such as low-quality photographs and occlusion by hairs, hats, hands, and stickers.]" } ]
2,020
AGE ESTIMATION
SP:a6047a76fb92417053518625713c7174eab16680
[ "In this work, the authors employ concepts from group theory to turn an arbitrary feed forward neural network into an equivariant one, i.e. a network whose output transforms in a way that is consistent with the transformation of the input. To this end, the authors first introduce the basic concepts of group theory required to follow their work and provide a comprehensive definition of equivariance. They then explain how to equivarify (w.r.t. a finite group G) a given neural network, and present experimental results on rotated MNIST digits to support their approach.", "The paper adds an interesting new perspective to equivariant neural nets. However, the actual construction looks equivalent to steerable neural nets to me (see the papers by Cohen and Welling). The generalization of steerable nets has been published under the name \"gauge equivariant neural nets\", it would be very interesting to chart out the exact connections between these concepts. " ]
Equivariant neural networks are special types of neural networks that respect symmetries of the data set. In this paper, we provide a method to modify a neural network into an equivariant one, which we call equivarification.
[]
[ { "authors": [ "2016a. Taco S Cohen", "Max Welling" ], "title": "Steerable cnns", "venue": "ence on machine learning,", "year": 2016 }, { "authors": [ "Taco S Cohen", "Maurice Weiler", "Berkay Kicanaoglu", "Max Welling" ], "title": "Gauge equivariant convolutional networks and the icosahedral cnn", "venue": null, "year": 1902 }, { "authors": [ "Sander Dieleman", "Kyle W Willett", "Joni Dambre" ], "title": "Rotation-invariant convolutional neural networks for galaxy morphology prediction", "venue": "Monthly notices of the royal astronomical society,", "year": 2015 }, { "authors": [ "Ian Goodfellow", "Honglak Lee", "Quoc V Le", "Andrew Saxe", "Andrew Y Ng" ], "title": "Measuring invariances in deep networks. In Advances in neural information processing", "venue": null, "year": 2009 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "arXiv preprint arXiv:1510.00149,", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Jan Eric Lenssen", "Matthias Fey", "Pascal Libuschewski" ], "title": "Group equivariant capsule networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Diego Marcos", "Michele Volpi", "Nikos Komodakis", "Devis Tuia" ], "title": "Rotation equivariant vector field networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Diego Marcos", "Michele Volpi", "Benjamin Kellenberger", "Devis Tuia" ], "title": "Land cover mapping at very high resolution with rotation equivariant cnns: Towards small yet accurate models", "venue": "ISPRS journal of photogrammetry and remote sensing,", "year": 2018 }, { "authors": [ "Stavros J Perantonis", "Paulo JG Lisboa" ], "title": "Translation, rotation, and scale invariant pattern recognition by high-order neural networks and moment classifiers", "venue": "IEEE Transactions on Neural Networks,", "year": 1992 }, { "authors": [ "Henry A Rowley", "Shumeet Baluja", "Takeo Kanade" ], "title": "Rotation invariant neural network-based face detection", "venue": "In Proceedings", "year": 1998 }, { "authors": [ "Sara Sabour", "Nicholas Frosst", "Geoffrey E Hinton" ], "title": "Dynamic routing between capsules", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Bastiaan S Veeling", "Jasper Linmans", "Jim Winkens", "Taco Cohen", "Max Welling" ], "title": "Rotation equivariant cnns for digital pathology", "venue": "In International Conference on Medical image computing and computer-assisted intervention,", "year": 2018 }, { "authors": [ "Maurice Weiler", "Fred A Hamprecht", "Martin Storath" ], "title": "Learning steerable filters for rotation equivariant cnns", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Wei Wen", "Chunpeng Wu", "Yandan Wang", "Yiran Chen", "Hai Li" ], "title": "Learning structured sparsity in deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "One key issue in deep neural network training is the difficulty of tuning parameters, especially when the network size grows larger and larger Han et al. (2015). In order to reduce the complexity of the network, many techniques have been proposed by analyzing the structural characteristics of data, for example, sparsity Wen et al. (2016), invariance in movement Goodfellow et al. (2009).\nOne of the most important structural characteristics of data is symmetry. By utilizing the translation symmetry of an object in the photo, convolutional neural network (CNN)(Krizhevsky et al. (2012)) uses shared filters to reduce the number of parameters compared with fully connected networks. However, to handle the case of rotation and reflection, people usually use the data augmentation approach to generate additional input data that has a bunch of training images with different rotation angles of the original images.\nIn contrast to data augmentation approach, another idea is to design more sophisticated neural networks, such that the input data with certain symmetries can be trained together and applied to reduce the training complexity. Recent attempts have been made in the equivariant CNN Cohen & Welling (2016a); Cohen et al. (2018); Weiler et al. (2018). These existing works target at special cases and cannot be easily generalized to arbitrary networks and arbitrary symmetries.\nIn this paper, we take advantage of the symmetry of the data set and provide a general method to modify an arbitrary neural network so that it preserves the symmetry. The process is called equivarification. A key feature is that our equivarification method can be applied without detailed knowledge of a layer in a neural network, and hence, can be generalized to any feedforward neural networks. Another feature is that the number of parameters in the new neural network that we need to train is the same as the original one, if we equivarify the original network globally (see the second paragraph in Section 4 for details). In addition, we can also output how each data instance is changed from the canonical form (for example, in an image classification problem, additionally we can also output how many degrees an image is rotated from original upside-up image) using the same network.\nWe rigorously prove that our equivarification method produces truely equivariant neural networks. Practically, we equivarify a CNN as an illustration. We conduct experiments using the MNIST data set where the resulting equivariant neural network predicts the number and also the angle. If we forget the angle and keep only the number, we get an invariant neural network." }, { "heading": "1.1 MOTIVATION", "text": "By the symmetry of the data set, we mean a group action (see Definition 2.2) on the data set. Let us consider a simple cat image classification example. Then the data set can be the set of all images of a fixed size. Rotation symmetry means that we can rotate an image and the resulting image still lies in the data set. In other words, the group of rotations acts on the data set.\nOne can build a cat classifier that assigns a number between 0 and 1 to each image indicating the probability that it is an image of a cat. If one image is a rotation (say by 90 degree counterclockwise)\nof another one, then it makes sense to require a classifier to assign the same probability to these two images. 
A classifier that satisfies this property is said to be invariant under 90-degree rotation, which is a special case of being equivariant under 90-degree rotation. To give an example of a (non-invariant) equivariant neural network, we further require that our classifier not only produce the probability of being a cat image, but also output an angle, say in {0, 90, 180, 270} (more precisely, the probability of each angle). Suppose that the equivariant classifier predicts that an image is rotated by 90 degrees; then it should predict a 270-degree rotation for the same image rotated by an additional 180 degrees.

Not only does it make sense to require the cat classifier to be equivariant, but it is also more "economical" to be equivariant if implemented correctly. Roughly speaking, a regular classifier "treats" an image and its rotated ones as separate things, but ideally an equivariant neural network "sees" their connections and "treats" them together. From another point of view, given a regular neural network, if we apply the data augmentation method carefully, since the training data is symmetric, it is possible that after training the network is (approximately) equivariant (depending on the initialization). The fact that it is equivariant implies that there is now symmetry among the parameters. For example, some parameters are the same as other parameters, so there is redundancy. In contrast, in our equivariant neural network, the equivariance is built into the structure of the network by sharing parameters, and in particular, it is independent of the loss function, initialization, and data set. For instance, for the training data, it does not make any difference in results whether we prepare it by randomly rotating and recording the angles, or by not rotating and labeling everything as degree 0.

In our approach, to make a neural network equivariant, at each layer other than the input layer we add a set of neurons (multiplying the layer width by the order of the group), but we do not introduce new parameters. Instead, the added neurons share parameters with the original ones." }, { "heading": "1.2 RELATED WORK", "text": "Invariance in neural networks has attracted attention for decades, aiming at designing neural networks that capture the invariant features of data, for example, face detection systems at any degree of rotation in the image plane (Rowley et al., 1998), invariance for pattern recognition (Barnard & Casasent, 1991), and translation, rotation, and scale invariance (Perantonis & Lisboa, 1992).

Recently, several works have started to look into equivariant CNNs by studying the exact mapping functions of each layer (Cohen & Welling, 2016a;b; Cohen et al., 2018; Marcos et al., 2017; Lenssen et al., 2018; Cohen et al., 2019), and some symmetries are studied, such as translation symmetry and rotation symmetry. These methods have been used in different application domains, such as remote sensing (Marcos et al., 2018), digital pathology (Veeling et al., 2018), and galaxy morphology prediction (Dieleman et al., 2015).

There are also works aiming to construct an equivariant neural network without having to specify the symmetry (Sabour et al., 2017).

As far as we know, our construction provides the first truly equivariant neural network (see Section 5.1)." }, { "heading": "2 PRELIMINARIES", "text": "In this section, we cover some basics of group theory, such as group actions and equivariance.
For those who would like a closer look at group theory and the mathematical tools behind it, please refer to any abstract algebra book, for example, Lang (2002).

Here, we first give a couple of definitions about groups in order to help readers quickly grasp the concepts.

Definition 2.1. A group (G, ·) consists of a set G together with a binary operation "·" (which we usually call multiplication, without confusing it with the traditional sense of multiplication for real numbers) that satisfies the following four axioms.

1) Closure: for all a, b ∈ G, the product a · b ∈ G.

2) Associativity: for all a, b, c ∈ G, the multiplication satisfies (a · b) · c = a · (b · c).

3) Identity element: there exists a unique identity element e ∈ G such that, for every element a ∈ G, we have e · a = a · e = a.

4) Inverse element: for each element a ∈ G, there exists an element b ∈ G, denoted a⁻¹, such that a · b = b · a = e, where e is the identity element.

Note that, in general, commutativity does not apply here; namely, for a, b ∈ G, a · b = b · a does not always hold.

For example, the set of all integers with the addition operation, (Z, +), forms a group. One can easily check that the four axioms are satisfied and that 0 is the identity element.

As another example, a group consists of the set {0, 1} together with the operation + (mod 2), where 0 + 1 = 1 + 0 = 1 and 1 + 1 = 0 + 0 = 0. The identity element is 0. When applied to image processing tasks, this can be interpreted as follows: 1 represents the action of rotating an image by 180° and 0 represents the action of not rotating it. Then 0 + 1 represents the composite operation of first rotating an image by 180° and then keeping it as it is, so that the final effect is a 180° rotation; while 1 + 1 represents first rotating an image by 180° and then rotating it again by 180°, which is equivalent to not rotating the original image.

Similarly, we can define the element 1 as the operation of flipping an image vertically or horizontally, and give the group an analogous interpretation.

Therefore, instead of speaking of translations, rotations, flippings, etc., we can use abstract groups to represent operations on images; hence, we are able to design corresponding equivariant neural networks regardless of the concrete operations on images (the symmetries of the data), by following only the group structure.

In the following, we give the definition of group actions. Let X be a set, and G be a group.

Definition 2.2. We define a G-action on X to be a map

T : G × X → X

(on the left) such that for any x ∈ X:

• T(e, x) = x, where e ∈ G is the identity element;

• for any g₁, g₂ ∈ G, we have

T(g₁, T(g₂, x)) = T(g₁g₂, x).

Frequently, when there is no confusion, instead of saying that T is a G-action on X, or equivalently that G acts on X via T, we simply say that G acts on X; and T is also understood from the notation, i.e., instead of T(g, x) we simply write gx, and the above formula becomes g₁(g₂x) = (g₁g₂)x.

We say G acts trivially on X if gx = x for all g ∈ G and x ∈ X.

Let X, Y be two sets, and G be a group that acts on both X and Y.

Definition 2.3. A map F : X → Y is said to be G-equivariant if F(gx) = gF(x) for all x ∈ X and g ∈ G. Moreover, if G acts trivially on Y, then we say F is G-invariant.

Example 2.4. Let X be the space of all images of 28 × 28 pixels, which contains the MNIST data set. Let G be the cyclic group of order 4.
Pick a generator g of G, and define the action of g on X by setting gx to be the image obtained by rotating x counterclockwise by 90 degrees. Let Y be the set {0, 1, 2, ..., 9} × {0, 90, 180, 270}. For any y = (num, θ) ∈ Y we define

gy := (num, (θ + 90) mod 360).

An equivariant neural network that classifies the number and rotation angle can be viewed as a map F from X to Y. Equivariance means that if F(x) = (num, θ), then F(gx) = (num, (θ + 90) mod 360), for all x ∈ X.

Thus, for each layer of the neural network, we can consider a group G acting on a set X, where the set X is interpreted as the input to this layer and the group action as an operation on this input. By abstracting the behaviors on the original input data using groups (for example, using the same group to represent either rotation by 180° or flipping, or even more abstract operations at intermediate layers), we are able to apply group actions on different sets X₁, X₂, X₃, ... (where each one represents the input to a different layer) and design similar equivariant network structures based on an original one." }, { "heading": "3 EQUIVARIFICATION", "text": "In this section, we present the detailed method of performing equivarification and its theoretical foundation. This is the key part of the paper for understanding our proposed equivarification method. Readers who would like to avoid mathematical proofs can go directly to the examples we provide to get an intuitive idea of how we construct the equivariant neural networks.

In this section, we fix X and Z to be two sets, and G to be a group that acts on X.

Definition 3.1. Define the G-product of Z to be

Z^{×G} = {s : G → Z},

the set of all maps from G to Z.

We define a G-action on Z^{×G} by

G × Z^{×G} → Z^{×G}, (g, s) ↦ gs,

where gs, as a map from G to Z, is defined as

(gs)(g′) := s(g⁻¹g′),   (3.1)

for any g′ ∈ G.

We have the projection map p : Z^{×G} → Z defined by

p(s) = s(e),   (3.2)

for any s ∈ Z^{×G}, where e ∈ G is the identity element. Then:

Lemma 3.2. For any map F : X → Z, there exists a unique G-equivariant map F̂ : X → Z^{×G} such that p(F̂(x)) = F(x) for all x ∈ X.

Proof. For any x ∈ X, we define F̂(x) as a map from G to Z by

(F̂(x))(g) = F(g⁻¹x),

for any g ∈ G. To see that F̂ is G-equivariant, we need to check that F̂(gx) = g(F̂(x)) for any x ∈ X and g ∈ G. For any h ∈ G, (F̂(gx))(h) = F(h⁻¹gx) by the definition of F̂, while (g(F̂(x)))(h) = (F̂(x))(g⁻¹h) = F(h⁻¹gx). We leave the proof of uniqueness to the readers.

Remark 3.3. In Definition 3.1 and Lemma 3.2, G is an arbitrary group and Z is an arbitrary set. It is easy to see that we can adjust them if we consider other categories. For example, when G is a compact Lie group and Z is a differentiable manifold, we can re-define Z^{×G} to be the space of differentiable maps from G to Z; when G is a non-compact Lie group and Z is a differentiable manifold, we can consider the space of compactly supported smooth maps. When we implement the neural network for an infinite G, we have to approximate it by a finite subset of G. In this case, we need to work in the realm of approximate equivariance. For this implementation reason, we restrict our group G to be a finite group for the rest of the paper.

This lemma can be summarized as the commutative diagram in Figure 1. It motivates the general definition given below (Definition 3.4); but first, a small code sketch illustrates the construction of Lemma 3.2.
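The following Python sketch realizes the construction of Lemma 3.2 for the cyclic group of order 4 acting on images by 90-degree rotations (it anticipates Example 3.5 below). It is an illustration, not the paper's code: F is an arbitrary map on square images, and g⁻ᵏx is realized by numpy's rot90.

```python
import numpy as np

def equivarify(F, order=4):
    """Lift F : X -> Z to F_hat : X -> Z^{x G}, with (F_hat(x))(g^k) = F(g^{-k} x)."""
    def F_hat(x):
        # g acts by 90-degree counterclockwise rotation, so g^{-k} x = np.rot90(x, -k)
        return tuple(F(np.rot90(x, -k)) for k in range(order))
    return F_hat

# Sanity check of G-equivariance: F_hat(g x) is a cyclic shift of F_hat(x).
F = lambda x: float(x[0, 0] + 2.0 * x[-1, -1])  # an arbitrary non-invariant map
F_hat = equivarify(F)
x = np.arange(1.0, 17.0).reshape(4, 4)
lhs = F_hat(np.rot90(x, 1))                      # F_hat(g x)
rhs = tuple(np.roll(F_hat(x), 1))                # g . F_hat(x): shift right by one slot
assert np.allclose(lhs, rhs)
```

The assertion holds for any choice of F, which is exactly the point of the lemma: equivariance is obtained by construction, not by training.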
Definition 3.4. We say a tuple (Ẑ, T, p) is a G-equivarification of Z if

• Ẑ is a set with a G-action T;

• p is a map from Ẑ to Z;

• for any set X with a G-action and any map F : X → Z, there exists a G-equivariant map F̂ : X → Ẑ such that p ◦ F̂ = F.

Here ◦ denotes the composition of maps. As usual, T will be omitted from the notation.

In Section 4 we will see applications of G-equivarification to neural networks. From Lemma 3.2 we know that the triple of the G-product Z^{×G}, the G-action defined in Formula (3.1), and the projection p defined in Formula (3.2) is a G-equivarification. There are other G-equivarifications; see the Appendix for more discussion.

Example 3.5. Let G be the cyclic group of order 4. More concretely, we order the elements of G as (e, g, g², g³). The set Z^{×G} can be identified with Z^{×4} = Z × Z × Z × Z via the map

s ↦ (s(e), s(g), s(g²), s(g³)).   (3.3)

Then G acts on Z^{×4} by g(z₀, z₁, z₂, z₃) = (z₃, z₀, z₁, z₂), and the projection map p : Z^{×4} → Z is given by (z₀, z₁, z₂, z₃) ↦ z₀. Let F : X → Z be an arbitrary map; then, after the identification, F̂ becomes a map from X to Z^{×4} and

F̂(x) = (F(x), F(g⁻¹x), F(g⁻²x), F(g⁻³x)).

One can check that F̂ is G-equivariant, and it is easy to see that p ◦ F̂ = F." }, { "heading": "4 APPLICATION TO NEURAL NETWORKS", "text": "In this section, we show through an example how our proposed equivarification method works.

Let {L_i : X_i → X_{i+1}}_{i=0}^{n} be an n-layer neural network (which can be a CNN, a multi-layer perceptron, etc.). In particular, X₀ is the input data set and X_{n+1} is the output data set. Let G be a finite group that acts on X₀. Let L be the composition of all layers,

L = L_n ◦ L_{n−1} ◦ · · · ◦ L₀ : X₀ → X_{n+1}.

Then we can equivarify L and get maps L̂ : X₀ → X̂_{n+1} and p : X̂_{n+1} → X_{n+1}. Then L̂ is an equivariant neural network.

Alternatively, one can construct an equivariant neural network layer by layer. More precisely, the equivariant neural network is given by {L̂_i ◦ p_i : X̂_i → X̂_{i+1}}_{i=0}^{n}, where L̂_i ◦ p_i is the equivarification of L_i ◦ p_i for i ∈ {0, 1, ..., n}, X̂₀ = X₀, and p₀ = id is the identity map (see Figure 2). By the commutativity of Figure 2 we know that

p_{n+1} ◦ L̂_n ◦ p_n ◦ L̂_{n−1} ◦ p_{n−1} ◦ · · · ◦ L̂₀ ◦ p₀ = L = p ◦ L̂.

Then both L̂_n ◦ p_n ◦ L̂_{n−1} ◦ p_{n−1} ◦ · · · ◦ L̂₀ ◦ p₀ and L̂ are equivarifications of L. Suppose that for both equivarifications we have chosen X̂_{n+1} to be X_{n+1}^{×G}. Then by the uniqueness statement of Lemma 3.2, we have L̂_n ◦ p_n ◦ L̂_{n−1} ◦ p_{n−1} ◦ · · · ◦ L̂₀ ◦ p₀ = L̂.

Sometimes, instead of equivarifying the map L_i ◦ p_i : X̂_i → X_{i+1}, it makes sense to construct some other map L′_i from X̂_i to some other set X′_{i+1}, and then equivarify L′_i. This makes the equivariant neural network more interesting (see the example below).

Example 4.1. Let the 0-th layer L₀ : X₀ → X₁ of a neural network defined on the MNIST data set be a convolutional layer, with X₁ = R^{ℓ₁}, where ℓ₁ = 28 × 28 × c₁ and c₁ is the number of channels (strides = (1, 1), padding = 'same'). Let G = {e, g, g², g³} be the cyclic group of order 4 such that g acts on X₀ as the 90-degree counterclockwise rotation. Then we construct L̂₀ : X₀ → R^{4ℓ₁} by

x₀ ↦ (L₀(x₀), L₀(g⁻¹x₀), L₀(g⁻²x₀), L₀(g⁻³x₀)).

For the next layer, instead of equivarifying L₁ ◦ p₁ : R^{4ℓ₁} → R^{ℓ₂}, we can construct another convolutional layer directly from R^{4ℓ₁} by concatenating the four copies of R^{ℓ₁} along the channel axis to obtain R^{28×28×4c₁}, and build a standard convolutional layer on it. (A sketch of this construction is given below.)
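The following is a minimal sketch of the equivarified first layer of Example 4.1, with the channel-axis concatenation for the next layer. It is an illustration under simplifying assumptions, not the released implementation: `conv` stands for any convolutional layer applied to an H × W × C image, treated here as an opaque callable.

```python
import numpy as np

def equivarified_conv(conv, x0):
    """First layer of Example 4.1: apply conv to x0 and its inverse rotations.

    Returns [L0(x0), L0(g^-1 x0), L0(g^-2 x0), L0(g^-3 x0)], where g rotates
    90 degrees counterclockwise, i.e. g^-k x0 = np.rot90(x0, -k).
    """
    return [conv(np.rot90(x0, -k)) for k in range(4)]

def concat_for_next_layer(features):
    """Concatenate the four H x W x c1 feature maps along the channel axis,
    giving a 28 x 28 x 4c1 input on which a standard conv layer can be built."""
    return np.concatenate(features, axis=-1)
```

By construction, rotating the input by g cyclically shifts the four feature maps (cf. Example 3.5), so the layer is G-equivariant without introducing new parameters.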
This new construction of course changes the number of variables compared to that of the original network.

From the above analysis and Lemma 3.2, it is not hard to derive the following summary.

Main result. Let X = {L_i : X_i → X_{i+1}}_{0≤i≤n} be an original neural network that can process input data {x₀^j}_j ⊆ X₀ and labeling data {x_{n+1}^j}_j ⊆ X_{n+1}. Let G be a finite group that acts on X₀. The proposed G-equivarification method is able to generate a G-equivariant neural network X̂ = {L̂_i : X̂_i → X̂_{i+1}}_{0≤i≤n} that can process input data {x₀^j}_j ⊆ X₀ = X̂₀ and enhanced labeling data {x̂_{n+1}^j}_j ⊆ X̂_{n+1}. Furthermore, the number of parameters of X̂ is the same as that of X." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we show our experiments on the MNIST dataset, which achieve promising performance.¹

Our designed equivariant neural network using the proposed equivarification method is shown in Figure 3. Note that the equivarification process does not increase the number of variables. In our case, in order to illustrate flexibility, we choose not to simply equivarify the original neural network, so the layers conv2 and conv3 have four times the number of variables compared to the corresponding original layers.

Next, we discuss the labeling of the input data. Since the neural network is G-equivariant, it makes sense to encode the labels G-equivariantly. Suppose (x₀, x_{n+1}) ∈ X₀ × X_{n+1} is one labeled data point. Then in the ideal case, one hopes to achieve L(x₀) = x_{n+1}. Assuming this, to give a new label for the data point x₀ for our equivariant neural network, we need to define x̂_{n+1} = L̂(x₀). For this, it is sufficient to define L̂(x₀)(g) for all g ∈ G. By equivariance, L̂(x₀)(g) = L(g⁻¹x₀). If g = e, then L̂(x₀)(g) = L(x₀) = x_{n+1}. If g ≠ e, it is very likely that the data point g⁻¹x₀ originally does not have a label, so ideally we do not know what L(g⁻¹x₀) should be. In the naive data augmentation approach, L(g⁻¹x₀) is labeled the same as L(x₀), hoping to get L as close to an invariant map as possible. In our case, we do not have such a restriction, since we do not need L to be invariant. In practice, X_{n+1} is a vector space, and we choose to label L(g⁻¹x₀) by the origin of X_{n+1}. In our MNIST example, this choice is the same as the following.

¹The code in TensorFlow is uploaded as the supplementary material. In the code, we also include a version that allows 90-degree rotations and horizontal and vertical flips, just to make the group non-commutative.

For m ∈ {0, 1, 2, ..., 9}, denote e_m = (0, ..., 0, 1, 0, ..., 0) ∈ R^10, where the single 1 is in the m-th spot.

For an unrotated image x₀ ∈ X₀ that represents the number m, we assign the label e_m ⊕ 0 ⊕ 0 ⊕ 0 ∈ R^40. Then, based on the equivariance, we assign

gx₀ ↦ 0 ⊕ e_m ⊕ 0 ⊕ 0,
g²x₀ ↦ 0 ⊕ 0 ⊕ e_m ⊕ 0,
g³x₀ ↦ 0 ⊕ 0 ⊕ 0 ⊕ e_m.

For each testing image in the MNIST data set, we randomly rotate it by an angle of degree in {0, 90, 180, 270}, and we prepare the label as above. For the training images, we can do the same, but for convenience we actually do not rotate them, since it won't affect the training result at all. (A short sketch of this label construction follows.)
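As an illustration of the labeling scheme just described (a sketch, not the supplementary code), the following builds the R^40 equivariant label for an image of digit m rotated k times by 90 degrees:

```python
import numpy as np

def equivariant_label(m, k):
    """Label in R^40 for digit m rotated by 90k degrees counterclockwise.

    Block k of the four length-10 blocks holds e_m; all other blocks are zero.
    """
    label = np.zeros(40)
    label[10 * (k % 4) + m] = 1.0
    return label

# e.g. a '7' rotated by 180 degrees: e_7 sits in the third block (indices 20-29).
assert equivariant_label(7, 2)[27] == 1.0
```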
" }, { "heading": "5.1 EQUIVARIANCE VERIFICATION", "text": "To spot-check the equivariance after implementation, we print out the probability vectors in R^40 of an image of the number 7 and of its rotations. We see that the probability vectors are identical after a shift by 10 slots. See Figure 4." }, { "heading": "5.2 ACCURACY", "text": "Here we count a prediction as correct if both the number and the angle are predicted correctly. The accuracy of our neural network on the test data is 96.8%. This is promising considering the fact that the angles of some numbers are quite hard to determine, such as 0, 1, and 8." }, { "heading": "6 CONCLUSION", "text": "In this paper, we proposed an equivarification method to design equivariant neural networks that are able to efficiently process input data with symmetries. Our proposed method can be generalized to arbitrary networks or functions by leveraging group actions, which enables our design to be uniform across layers of feedforward neural networks, such as multi-layer perceptrons and CNNs, without requiring knowledge of the detailed functions of a layer in a neural network. As an illustrative example, we show how to equivarify a CNN for image classification. Results show that our proposed method performs as expected, with a significant reduction in design and training complexity." }, { "heading": "A APPENDIX - MORE ABOUT EQUIVARIFICATION", "text": "In Section 3 we define (Z^{×G}, p) as an example of G-equivarification. In this section, we show that it is "minimal" in the sense of its universal property.

Lemma A.1 (universal property). For any G-equivarification (Ẑ′, p′) of Z, there exists a G-equivariant map π : Ẑ′ → Z^{×G} such that p′ = p ◦ π. Moreover, for any set X and map F : X → Z, the lifts F̂ : X → Z^{×G} and F̂′ : X → Ẑ′ of F satisfy π ◦ F̂′ = F̂. (See Figure 5.)

Proof. We define the map π : Ẑ′ → Z^{×G} by [π(ẑ′)](g) = p′(g⁻¹ẑ′), where ẑ′ ∈ Ẑ′ and g ∈ G. To show p′ = p ◦ π, for any ẑ′ ∈ Ẑ′, we check p ◦ π(ẑ′) = p[π(ẑ′)] = [π(ẑ′)](e) = p′(ẑ′). To show that π is G-equivariant, for any ẑ′ ∈ Ẑ′ and h ∈ G, we compare π(hẑ′) and hπ(ẑ′): for any g ∈ G, [π(hẑ′)](g) = p′(g⁻¹hẑ′) and [hπ(ẑ′)](g) = [π(ẑ′)](h⁻¹g) = p′(g⁻¹hẑ′). Lastly, we show π ◦ F̂′ = F̂. Note that π ◦ F̂′ is a G-equivariant map from X to Z^{×G}, and

p ◦ (π ◦ F̂′) = p′ ◦ F̂′ = F,

so by the uniqueness part of Lemma 3.2, we get π ◦ F̂′ = F̂.

Now we discuss finding a "smaller" equivarification in another direction: shrinking the group by bringing in information about X. Let N = {g ∈ G | gx = x for all x ∈ X}, the set of elements of G that act trivially on X. It is easy to check that N is a normal subgroup of G. We say G acts on X effectively if N = {e}. In the case when G does not act effectively, it makes sense to consider the G/N-product of Z, where G/N is the quotient group. More precisely, consider Z^{×G/N} = {s : G/N → Z}, which is smaller in size than Z^{×G}. For any map F : X → Z, we can get a G/N-equivariant lift F̂ of F following the same construction as Lemma 3.2 (with G replaced by G/N). Since G maps to the quotient G/N, we have that G acts on Z^{×G/N} and F̂ is also G-equivariant." } ]
2,019
null
SP:ac74f77f4d4c8c6c2ca9304bb1050aa22a87df63
[ "The paper \"On Variational Learning of Controllable Representations for Text without Supervision\" tackles the problem of latent vacancy of text representation via variational text auto-encoders. Based on the observation that a single factor of the sentence encoding gathers most of relevant information for classifying the sentence as positive or negative (sentiment classification), authors study the impact of manipulating this factor in term of the corresponding decoded sentences. They reasonnably claim that if such a manipulation fails at decoding accurate sentences, it is because we fall in representation areas that the decoder never seen during training. Thus they propose a way to constrain the posterior mean to a", "This paper presents a method for controlled text generation by using a new loss function (standard VAE loss with auxiliary losses added on). The method is tested on style transfer datasets: Yelp and Amazon. The central hypothesis is that when manipulating latent codes of a VAE, you can end up in low-density regions of the aggregated posterior. Such latent codes are rarely seen by the decoder so that quality of generation is low. To address this problem, they constrain the posterior mean to a learnt probability simplex and try to ensure that the simplex is densely filled. They do this by adding 2 regularizing losses to the VAE loss." ]
The variational autoencoder (VAE) has found success in modelling the manifold of natural images on certain datasets, allowing meaningful images to be generated while interpolating or extrapolating in the latent code space, but it is unclear whether similar capabilities are feasible for text considering its discrete nature. In this work, we investigate the reason why unsupervised learning of controllable representations fails for text. We find that traditional sequence VAEs can learn disentangled representations through their latent codes to some extent, but they often fail to properly decode when the latent factor is being manipulated, because the manipulated codes often land in holes or vacant regions in the aggregated posterior latent space, which the decoding network is not trained to process. Both as a validation of this explanation and as a fix to the problem, we propose to constrain the posterior mean to a learned probability simplex and to perform manipulation within this simplex. Our proposed method mitigates the latent vacancy problem and achieves the first success in unsupervised learning of controllable representations for text. Empirically, our method significantly outperforms unsupervised baselines and is competitive with strong supervised approaches on text style transfer. Furthermore, when switching the latent factor (e.g., topic) during long sentence generation, our proposed framework can often complete the sentence in a seemingly natural way, a capability that has never been attempted by previous methods.
[]
[ { "authors": [ "Yu Bao", "Hao Zhou", "Shujian Huang", "Lei Li", "Lili Mou", "Olga Vechtomova", "Xinyu Dai", "Jiajun Chen" ], "title": "Generating sentences from disentangled syntactic and semantic spaces", "venue": null, "year": 1907 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "David M Blei", "Andrew Y Ng", "Michael I Jordan" ], "title": "Latent dirichlet allocation", "venue": "Journal of machine Learning research,", "year": 2003 }, { "authors": [ "Samuel R Bowman", "Luke Vilnis", "Oriol Vinyals", "Andrew M Dai", "Rafal Jozefowicz", "Samy Bengio" ], "title": "Generating sentences from a continuous space", "venue": "arXiv preprint arXiv:1511.06349,", "year": 2015 }, { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Infogan: Interpretable representation learning by information maximizing generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Ondřej Cı́fka", "Aliaksei Severyn", "Enrique Alfonseca", "Katja Filippova" ], "title": "Eval all, trust a few, do wrong to none: Comparing sentence generation models", "venue": "arXiv preprint arXiv:1804.07972,", "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Zhenxin Fu", "Xiaoye Tan", "Nanyun Peng", "Dongyan Zhao", "Rui Yan" ], "title": "Style transfer in text: Exploration and evaluation", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Junxian He", "Daniel Spokoyny", "Graham Neubig", "Taylor Berg-Kirkpatrick" ], "title": "Lagging inference networks and posterior collapse in variational autoencoders", "venue": null, "year": 1901 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Zhiting Hu", "Zichao Yang", "Xiaodan Liang", "Ruslan Salakhutdinov", "Eric P Xing" ], "title": "Toward controlled generation of text", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Abhishek Kumar", "Prasanna Sattigeri", "Avinash Balakrishnan" ], "title": "Variational inference of disentangled latent concepts from unlabeled observations", "venue": "arXiv preprint arXiv:1711.00848,", "year": 2017 }, { "authors": [ "Juncen Li", "Robin Jia", "He He", "Percy Liang" ], "title": "Delete, retrieve, generate: A simple approach to sentiment and style transfer", "venue": "arXiv preprint arXiv:1804.06437,", "year": 2018 }, { "authors": [ "Lajanugen Logeswaran", "Honglak Lee", "Samy Bengio" ], "title": "Content preserving text generation with attribute controls", "venue": "In Advances in Neural Information Processing Systems,", "year": 
2018 }, { "authors": [ "Christopher Manning", "Prabhakar Raghavan", "Hinrich Schütze" ], "title": "Introduction to information retrieval", "venue": "Natural Language Engineering,", "year": 2010 }, { "authors": [ "Courtney Napoles", "Keisuke Sakaguchi", "Matt Post", "Joel Tetreault" ], "title": "Ground truth for grammatical error correction metrics", "venue": "In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers),", "year": 2015 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "In Proceedings of the 40th annual meeting on association for computational linguistics,", "year": 2002 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Jonas Peters", "Dominik Janzing", "Bernhard Schölkopf" ], "title": "Elements of causal inference: foundations and learning algorithms", "venue": "MIT press,", "year": 2017 }, { "authors": [ "Alec Radford", "Rafal Jozefowicz", "Ilya Sutskever" ], "title": "Learning to generate reviews and discovering sentiment", "venue": "arXiv preprint arXiv:1704.01444,", "year": 2017 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Ali Razavi", "Aäron van den Oord", "Ben Poole", "Oriol Vinyals" ], "title": "Preventing posterior collapse with delta-vaes", "venue": "arXiv preprint arXiv:1901.03416,", "year": 2019 }, { "authors": [ "Tianxiao Shen", "Tao Lei", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Style transfer from non-parallel text by cross-alignment", "venue": "Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Akhilesh Sudhakar", "Bhargav Upadhyay", "Arjun Maheswaran" ], "title": "Transforming delete, retrieve, generate approach for controlled text style transfer", "venue": "arXiv preprint arXiv:1908.09368,", "year": 2019 }, { "authors": [ "Jakub M Tomczak", "Max Welling" ], "title": "Vae with a vampprior", "venue": "arXiv preprint arXiv:1705.07120,", "year": 2017 }, { "authors": [ "Tsung-Hsien Wen", "David Vandyke", "Nikola Mrksic", "Milica Gasic", "Lina M Rojas-Barahona", "Pei-Hao Su", "Stefan Ultes", "Steve Young" ], "title": "A network-based end-to-end trainable task-oriented dialogue system", "venue": "arXiv preprint arXiv:1604.04562,", "year": 2016 }, { "authors": [ "Xiang Zhang", "Junbo Zhao", "Yann LeCun" ], "title": "Character-level convolutional networks for text classification", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Jake Zhao", "Yoon Kim", "Kelly Zhang", "Alexander M Rush", "Yann LeCun" ], "title": "Adversarially regularized autoencoders", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "D K" ], "title": "COMPARISONS WITH BASELINES ON TOPIC MODELLING Experimental setup: We use the AG news dataset for this task constructed by (Zhang et al., 2015).
It contains four topic categories, which are World, Sports, Business, and Sci/Tech, with the title and description fields. For each category, there are 30,000 training samples and 1,900 test samples", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "High-dimensional data, such as images and text, are often causally generated through the interaction of many complex factors, such as lighting and pose in images or style and content in texts. Recently, VAEs and other unsupervised generative models have found successes in modelling the manifold of natural images (Higgins et al., 2017; Kumar et al., 2017; Chen et al., 2016). These models often discover controllable latent factors that allow manipulation of the images through conditional generation from interpolated or extrapolated latent codes, often with impressive quality. On the other hand, while various attributes of text such as sentiment and topic can be discovered in an unsupervised way, manipulating the text by changing these learned factors have not been possible with unsupervised generative models to the best of our knowledge. Cı́fka et al. (2018); Zhao et al. (2018) observed that text manipulation is generally more challenging compared to images, and the successes of these models cannot be directly transferred to texts.\nControllable text generation aims at generating realistic text with control over various attributes including sentiment, topic and other high-level properties. Besides being a scientific curiosity, the possibility of unsupervised controllable text generation could help in a wide range of application, e.g., dialogues systems (Wen et al., 2016). Existing promising progress (Shen et al., 2017; Fu et al., 2018; Li et al., 2018; Sudhakar et al., 2019) all relies on supervised learning from annotated attributes to generate the text in a controllable fashion. The high cost of labelling large training corpora with attributes of interest limits the usage of these models, as pre-existing annotations often do not align with some downstream goal. Even if cheap labels are available, for example, review scores as a proxy for sentiment, the control is limited to the variation defined by the attributes.\nIn this work, we examine the obstacles that prevent sequence VAEs from performing well in unsupervised controllable text generation. We empirically discover that manipulating the latent factors for typical semantic variations often leads to latent codes that reside in some low-density region of the\naggregated posterior distribution. In other words, there are vacant regions in the latent code space (Makhzani et al., 2015; Rezende & Viola, 2018) not being considered by the decoding network, at least not at convergence. As a result, the decoding network is unable to process such manipulated latent codes, yielding unpredictable generation results of low quality.\nIn order to mitigate the latent vacancy problem, we propose to constrain the posterior mean to a learned probability simplex and only perform manipulation within the probability simplex. Two regularizers are added to the original objective of VAE. The first enforces an orthogonal structure of the learned probability simplex; the other encourages this simplex to be filled without holes. Besides confirming that latent vacancy is indeed a cause of failure in previous sequence VAEs’, it is also the first successful attempt towards unsupervised learning of controllable representations for text to the best of our knowledge. Experimental results on text style transfer show that our approach significantly outperforms unsupervised baselines, and is competitive with strong supervised approaches across a wide range of evaluation metrics. 
Our proposed framework also enables finer-grained and more flexible control over text generation. In particular, we can switch the topic in the middle of sentence generation, and the model will often still find a way to complete the sentence in a natural way." }, { "heading": "2 BACKGROUND: VARIATIONAL AUTOENCODERS", "text": "The variational autoencoder (VAE) (Kingma & Welling, 2013) is a generative model defined by a prior p(z) and a conditional distribution pθ(x|z). The VAE is trained to optimize a tractable variational lower bound of log pθ(x):

\[ \mathcal{L}_{\mathrm{VAE}}(x; \theta, \phi) = \mathbb{E}_{z \sim q_\phi(z|x)}[\log p_\theta(x|z)] - D_{\mathrm{KL}}(q_\phi(z|x) \,\|\, p(z)), \tag{1} \]

where qφ(z|x) is a variational distribution parameterized by an encoding network with parameters φ, and pθ(x|z) denotes the decoding network with parameters θ. This objective tries to minimize the reconstruction error to generate the data, and at the same time regularizes qφ(z|x) towards the prior p(z). In this paper, p(z) is chosen as N(0, I). For text modelling, the input x is some observed text. Both the encoding and decoding networks are usually recurrent neural networks.
Note that during learning, the decoding network pθ only learns to decode conditioned on z that are sampled from qφ(z|x). In other words, the decoding network only learns to process z sampled from the aggregated posterior distribution qφ(z) = E_{x∼pd(x)}[qφ(z|x)], where pd(x) is the data distribution. If qφ(z) has regions of low density, there is no guarantee that pθ would decode well in such regions. This is an important intuition that will become central to our analysis in Sec. 3." }, { "heading": "3 LATENT VACANCY PREVENTS EFFECTIVE MANIPULATION", "text": "In this section, we take a deeper look into the aggregated posterior latent space of a sequence VAE trained on text, and provide justification for the alternative solution we propose in Section 4." }, { "heading": "3.1 OBSERVATIONS FROM UNSUPERVISED SENTIMENT MANIPULATION", "text": "As pointed out by Bowman et al. (2015), one of the motivations to apply VAEs to text is to allow generation of sentences conditioned on extrinsic features by controlling the latent codes. Without annotated labels, no previous methods have successfully learned controllable latent factors, as mentioned in Sec. 1. To understand what is missing, we conduct exploratory experiments using a VAE for unsupervised sentiment manipulation.
We use the Yelp restaurant reviews dataset and the same data split following Li et al. (2018). We train a β-VAE (Higgins et al., 2017)1 with a latent space of 80 dimensions, an LSTM encoder, and an LSTM decoder. Details about this experiment are described in Appendix A.1.
By inspecting the accuracy on the validation set, we find that there exists one dimension of the latent code achieving higher than 90% sentiment classification accuracy by its value alone, while the other latent codes get accuracy around 50%. Further details can be found in Appendix A.2. This means that this latent dimension is an effective sentiment indicator. Similar phenomena have been observed in large-scale language models (Radford et al., 2017). However, the direct influence on the generative process of the model observed in Radford et al. (2017) does not apply to the VAE.
1We also try state-of-the-art techniques (He et al., 2019) for optimizing the ELBO of the VAE, but the KL term trained with those techniques is too small to capture the details of the source sentence.
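As a side note, the probe described above can be written in a few lines. The following sketch is our own illustrative code, not the authors' implementation; it assumes the posterior means of the validation set and its sentiment labels are available, and scans all 80 dimensions for the best single-dimension classifier, mirroring the polarity probe of Appendix A.2:

```python
import numpy as np

def find_sentiment_dimension(z_val, labels):
    """z_val: [num_sentences, 80] posterior means from the trained encoder;
    labels: [num_sentences] binary (0/1) sentiment labels of the validation set.
    Returns the dimension whose mean-centred sign best predicts sentiment."""
    z_centred = z_val - z_val.mean(axis=0)  # the paper subtracts the training-set mean
    best_dim, best_acc = 0, 0.0
    for d in range(z_centred.shape[1]):
        pred = (z_centred[:, d] > 0).astype(int)
        # account for both polarities of the latent dimension
        acc = max((pred == labels).mean(), ((1 - pred) == labels).mean())
        if acc > best_acc:
            best_dim, best_acc = d, acc
    return best_dim, best_acc
```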
When we try to perform sentiment manipulation by modifying this latent dimension2, the decoding network fails to generate the desired outputs most of the time, as evidenced by the poor quantitative evaluation in Table 1 and the poor samples shown in Appendix A.3.
2Different strategies are attempted; see Appendix A.4 for details." }, { "heading": "3.2 LATENT VACANCY IN TEXT MODELLING", "text": "One possible reason for the failure is that the decoding network is never trained on codes like the manipulated ones. This is the case if the aggregated posterior has holes or regions of low density, and the manipulated codes fall into such vacant regions. Supposing the aggregated posterior latent space possesses a shape as shown in Fig. 1, the directly manipulated latent codes will fall out of the aggregated posterior latent space for most input samples. Such latent codes are never seen by the model during training and possess a low density under the aggregated posterior distribution, leading to unpredictable behaviours during decoding.
To verify the hypothesis illustrated in Fig. 1, we empirically estimate the density of sentiment-manipulated codes under the aggregated posterior distribution of our trained VAE. Here, we approximate the data distribution pd(x) with the empirical distribution over all the training samples. As a result, the estimated aggregated posterior distribution is a large mixture of Gaussians. For all 1000 test samples, we move the dimension of the code capturing sentiment from µ − 2σ to µ + 2σ, where µ and σ are the mean and the standard deviation estimated on all the training samples, and measure the averaged negative log-likelihood (NLL) under the aggregated posterior distribution. As depicted in Fig. 3 (a), the NLL, plotted as the blue dotted curve, rises sharply when moving away from µ even though only one dimension of the code is changing, indicating the existence of vacancy in the aggregated posterior latent space. In addition, we draw the histogram of all the test samples' NLL for their original latent codes and the modified ones in Fig. 3 (b). The histogram shows that there is a large divergence in NLL between the original latent codes and the modified ones. Also, the modified latent codes have two separate modes, confirming the irregular shape of the aggregated posterior latent space. In order to resolve this issue, the approach proposed in this work is to constrain the posterior in such a way that the manipulation only happens in a learned simplex, as depicted in Fig. 2. In this constrained subspace, the phenomenon of low-density holes in the aggregated posterior is significantly reduced, as Fig. 3 (a) and (c) empirically show: there is little change in NLL between original and modified codes. The details of our method are presented in the next section." }, { "heading": "4 METHOD", "text": "" }, { "heading": "4.1 OVERVIEW", "text": "The experiments conducted in Sec. 3 validate the existence of vacancy in the aggregated posterior latent space. One potential way to resolve the problem is to better match the aggregated posterior with the prior (Makhzani et al., 2015; Tomczak & Welling, 2017; Zhao et al., 2018). However, in terms of unsupervised learning of controllable representations for text, these previous methods have not shown success; Zhao et al. (2018) only attempted supervised text style transfer, and also reported negative results from the AAE (Makhzani et al., 2015).
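For reference, the vacancy diagnostic of Sec. 3.2, which is reused in Sec. 6.1 to evaluate CP-VAE, can be sketched in a few lines. This is a minimal version, assuming diagonal Gaussian posterior parameters have been collected from the encoder over the training set; all names are illustrative:

```python
import numpy as np
from scipy.special import logsumexp

def aggregated_posterior_nll(z, means, logvars):
    """NLL of a latent code z [D] under the aggregated posterior, approximated
    as a uniform mixture of per-training-sample diagonal Gaussians
    (means, logvars: [num_train, D])."""
    var = np.exp(logvars)
    # log N(z; mu_n, diag(var_n)) for every mixture component n
    log_comp = -0.5 * (((z - means) ** 2) / var + logvars + np.log(2 * np.pi)).sum(axis=1)
    return -(logsumexp(log_comp) - np.log(len(means)))
```

Sweeping the sentiment dimension of a test code from µ − 2σ to µ + 2σ and averaging this quantity over the test set reproduces the kind of curve shown in Fig. 3 (a).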
Another way to resolve the vacancy issue is to directly enforce that the aggregated posterior itself has no vacant region anywhere that we would like to perform latent code manipulation. We propose to map the posterior Gaussian mean to a constrained space, more specifically a learned probability simplex, where we can encourage the constrained latent space to be filled without vacancy, and perform manipulation within this simplex. As illustrated in Fig. 2, we add an additional mapping function as part of the encoding network which maps the mean of the Gaussian posterior to a constrained space. Two regularization terms are introduced later to ensure that the learned simplex is not degenerate and that this subspace is well filled.
In addition, we separately model the relevant factors that we wish to control and the irrelevant factors by splitting z into two parts, z(1) and z(2), following prior work (Bao et al., 2019). The first part captures the relevant factors that are dominant in the data without an inductive bias from external signals, while the second part learns to encode the remaining local information that is useful for reconstructing the source sentences. As a result, qφ(z|x) is decomposed into qφ1(z(1)|x)qφ2(z(2)|x), where φ = φ1 ∪ φ2. With diagonal covariances, the KL divergence term in Eq. 1 splits into two separate KL terms. In practice, we use an MLP encoding network to parametrize z(1) with some sentence representation as the input (e.g., averaged GloVe embeddings (Pennington et al., 2014) over the input tokens) and an LSTM encoding network to parametrize z(2). We only constrain the posterior of z(1); z(2) is optimized the same way as in the traditional VAE." }, { "heading": "4.2 CONSTRAINING THE POSTERIOR", "text": "We now describe how to map the mean µ of the Gaussian posterior for z(1) ∈ R^N to a constrained latent space. We would like to constrain the mean µ to have the following structure:

\[ \mu = \sum_{i=1}^{K} p_i e_i, \qquad \sum_{i=1}^{K} p_i = 1, \qquad \langle e_i, e_j \rangle = 0 \;\; (i \neq j), \qquad K \leq N, \tag{2} \]

where the e_i are vectors representing the relevant factors, p_i is the proportion of the i-th relevant factor encoded in z(1), and K is a hyperparameter indicating the number of relevant factors to discover. In other words, the mean of the Gaussian posterior of z(1) is constrained to be inside a K-dimensional probability simplex in R^N whose vertices are represented by the orthogonal basis vectors e_i, i = 1, . . . , K. Given the outputs of the MLP encoder h and log σ², we learn an additional mapping function π which maps h to the constrained posterior space and can be treated as part of the encoding network:

\[ \mu = \pi(h) = E \cdot \mathrm{softmax}(Wh + b), \tag{3} \]

where E = [e_1, . . . , e_K] is a learnable embedding matrix representing the bases, W is a learnable weight matrix, and b is a learnable bias vector. As a result, the constrained posterior is parametrized by µ and log σ² as a Gaussian distribution N(µ, diag(σ²)).
With the mapping function alone, the proposed VAE suffers from posterior collapse (Bowman et al., 2015), a well-known problem where the model ignores the latent code z during training. Further complicating matters is the fact that there is an abundance of signals for predicting the next token in the text, but the signals indicating high-level semantics are quite sparse. It is thus unlikely that the VAE can capture useful relevant factors from raw text without collapse. For these reasons, we enforce orthogonality in the learnt basis vectors as defined in Eq. 2, which introduces a natural recipe to prevent posterior collapse for z(1).
Note that the KL divergence between qφ1(z(1)|x) and p(z(1)) is

\[ D_{\mathrm{KL}}(q_{\phi_1}(z^{(1)}|x) \,\|\, p(z^{(1)})) = \frac{1}{2} \mu^\top \mu + \frac{1}{2} \left( \sigma^\top \sigma - \log \sigma^\top \sigma - 1 \right). \tag{4} \]

With orthogonality in the basis vectors, the first term in the above equation can be factorized into

\[ \mu^\top \mu = \Big( \sum_i p_i e_i \Big)^{\!\top} \Big( \sum_i p_i e_i \Big) = \sum_i p_i^2 \, e_i^\top e_i. \tag{5} \]

To encourage orthogonality in the basis vectors, a regularization term is added to the objective function:

\[ \mathcal{L}_{\mathrm{REG}}(x; \phi_1) = \| E^\top E - \alpha I \|, \tag{6} \]

where I is the identity matrix and α is a hyperparameter. When L_REG = 0, we have e_i^⊤ e_i = α. In this case, µ^⊤µ = α Σ_i p_i² reaches its minimum α/K when p is the uniform distribution. The proof can be found in Appendix C. In practice, L_REG will quickly decrease to around 0, ensuring that the KL term never fully collapses under the structural constraint. When it comes to controlled generation, one can choose a vertex or any desired point in the probability simplex, as illustrated in Fig. 2.
Note that the constrained posterior also means that the aggregated posterior can never match the isotropic Gaussian prior. In other words, we achieve good controlled text generation potentially at the cost of poor uncontrolled generation from the prior, but the latter is not the focus of the current work, and could potentially be addressed by selecting or learning a better prior as in Tomczak & Welling (2017)." }, { "heading": "4.3 FILLING THE CONSTRAINED SPACE", "text": "Constraining the posterior inside a certain space does not guarantee that this space will be filled after training. In order to prevent this, we want the probability distribution p over the relevant factors to cover as much of the constrained latent space as possible. We introduce a reconstruction error of the structured latent code in order to push p away from the uniform distribution. For each input sentence, we randomly sample m sentences from the training data as negative samples. By applying the same encoding process, we get the structured latent code µ_i^(−) for each negative sample. Our goal is to make the raw latent code h similar to the restructured latent code µ while different from the latent codes µ_i^(−) of the negative samples, so that p is generally different for each input sample. The structured reconstruction loss is formulated as a margin loss as follows:

\[ \mathcal{L}_{\mathrm{S\text{-}REC}}(x; \phi_1) = \mathbb{E}_{z^{(1)} \sim q_{\phi_1}(z^{(1)}|x)} \left[ \frac{1}{m} \sum_{i=1}^{m} \max\big(0,\; 1 - h \cdot \mu + h \cdot \mu_i^{(-)}\big) \right]. \tag{7} \]

Our final objective function is defined as follows:

\[ \mathcal{L}(x; \theta, \phi) = \mathcal{L}_{\mathrm{VAE}} + \mathcal{L}_{\mathrm{REG}} + \mathcal{L}_{\mathrm{S\text{-}REC}}. \tag{8} \]"
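To make this concrete, here is a minimal PyTorch sketch of the mapping function (Eq. 3) and the two regularizers (Eqs. 6 and 7). The shapes and names are ours, not the authors' implementation, and we assume h and µ share the dimensionality N, as Eq. 7 implicitly requires:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConstrainedPosterior(nn.Module):
    """Maps encoder output h to a posterior mean inside a learned simplex (Eq. 3)."""
    def __init__(self, n_latent, k_factors):
        super().__init__()
        self.E = nn.Parameter(torch.randn(n_latent, k_factors))  # bases e_1, ..., e_K
        self.proj = nn.Linear(n_latent, k_factors)                # W, b

    def forward(self, h):                      # h: [batch, n_latent]
        p = F.softmax(self.proj(h), dim=-1)    # simplex weights p
        mu = p @ self.E.t()                    # mu = E · softmax(Wh + b)
        return mu, p

def reg_loss(E, alpha):
    """Orthogonality regularizer L_REG = ||EᵀE − αI|| (Eq. 6)."""
    k = E.shape[1]
    return torch.norm(E.t() @ E - alpha * torch.eye(k, device=E.device))

def s_rec_loss(h, mu, mu_neg):
    """Margin loss of Eq. 7: keep h close to its own structured code mu and
    away from the codes mu_neg [m, n_latent] of the m negative samples."""
    pos = (h * mu).sum(dim=-1, keepdim=True)   # h · mu, [batch, 1]
    neg = h @ mu_neg.t()                       # h · mu_i^(-), [batch, m]
    return F.relu(1.0 - pos + neg).mean()
```

In training, these two terms are simply added to the ELBO, as in Eq. 8.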
}, { "heading": "5.2 CONTROLLED TEXT GENERATION", "text": "In order to perform controllable text generation, previous methods either assume annotated attributes or multiple text datasets with different known styles (Hu et al., 2017; Shen et al., 2017; Zhao et al., 2018; Fu et al., 2018; Li et al., 2018; Sudhakar et al., 2019; Logeswaran et al., 2018; Lample et al., 2018). The requirement of labelled data largely restricts the capabilities and the applications of these models. Instead, all our proposed framework needs is raw text without any annotated attribute. The dominant underlying relevant factors in the given corpus will be discovered and disentangled by our unsupervised method, which can in turn be used for controlled generation." }, { "heading": "6 EXPERIMENTS", "text": "To demonstrate the effectiveness of our approach, we compare it to unsupervised baselines with traditional VAEs, considering the density under the aggregated posterior distribution and the performance on sentiment manipulation. Following evaluation protocols in text style transfer, we also compare our method to strong supervised approaches. Furthermore, we showcase the ability of finer-grained style discovery and transition possessed by our system, which has not been attempted in the literature.\nIn this section, our proposed framework is referred as CP-VAE (Constrained Posterior VAE). Detailed configurations including the hyperparameters, model architecture, training regimes, and decoding strategy are found in Appendix B." }, { "heading": "6.1 COMPARISONS WITH UNSUPERVISED BASELINES", "text": "Experimental setup: We use the same experimental setting and dataset as mentioned in Sec. 3. The 80D latent code is split into 16 and 64 dimensions for z(1) and z(2) respectively. The sentence representations used for z(1) is the averaged GloVe embeddings over the input tokens andK is chosen as 3. To decide which basis vector corresponds to which sentiment, we sample 10 positive and 10 negative sentences respectively in the development set, pass them to the encoder, and choose the basis vector with the highest average pi in p = softmax(Wh + b), yielding vp as the positive basis and vn as the negative basis. If vp and vn are chosen to be the same vector, we choose the index with the second highest pi for vp. To perform sentiment manipulation, we fix z(1) to be the chosen basis vector; that is, vp or vn.\nComparisons on density under the aggregated posterior distribution: First, we do linear interpolation between the two discovered basis vectors vp and vn and estimate the averaged NLL under the aggregated posterior distribution the same way as introduced in Sec. 3. The green solid curve in Fig. 3 (a) shows that the NLL of CP-VAE is relatively stable for the whole range of the interpolation. In Fig. 3 (c), the original latent codes and the modified ones largely overlap with each other. Both observations validate the effectiveness of CP-VAE in resolving the latent vacancy problem, leading to significant improvements on unsupervised sentiment manipulation, as seen later.\nComparsions with metrics on text style transfer: For quantitative evaluation, we adopt automatic evaluation metrics used in text style transfer (Sudhakar et al., 2019) including classification accuracy\n(AC), BLEU score (BL), GLEU score (GL) and language model perplexity (PL), whose definitions are elaborated in the next section. We also report DKL(qφ1(z\n(1)|x)‖p(z)) (KL) for z(1) of CP-VAE. As shown in Tab. 
As shown in Tab. 1, CP-VAE performs significantly better than β-VAE in terms of accuracy, BLEU and GLEU. The lower perplexity of β-VAE is due to mode collapse, which produces very short pivot sentences such as "great !". The results match our observations from the experiments on density under the aggregated posterior distribution, confirming that latent vacancy prevents effective manipulation of the latent codes. We also conduct an ablation study by removing L_REG and L_S-REC from the objective. The results demonstrate that both terms are crucial to the success of CP-VAE. Without L_REG, CP-VAE experiences posterior collapse for z(1). As a result, v_p and v_n collide with each other, leading to failure in disentangled representation learning. Since we choose K as 3, it is convenient to visualize the samples during training with p in the learnt probability simplex, as shown in Fig. 4. We can see that the whole simplex is mostly covered with samples with the help of L_S-REC. Without L_S-REC, the decoding network fails to recognize the basis vectors due to the poor coverage of the probability simplex, causing the model to lose most of its transferring ability." }, { "heading": "6.2 COMPARISONS TO SUPERVISED APPROACHES ON TEXT STYLE TRANSFER", "text": "Experimental setup: We choose two datasets, Yelp and Amazon, used in prior work on text style transfer (Li et al., 2018; Sudhakar et al., 2019), which provide human gold-standard references for the test set. The same train-dev-test splits are used in our experiments. Two different sentence representations are used in this experiment, averaged GloVe and BERT (Devlin et al., 2018), denoted as CP-G(loVe) and CP-B(ert) respectively. The remaining settings are as described in the above section.
Compared supervised approaches: On the two datasets, we compare to three adversarially trained models: StyleEmbedding (SE) (Fu et al., 2018), MultiDecoder (MD) (Fu et al., 2018), CrossAligned (CA) (Shen et al., 2017), and two state-of-the-art models based on a "delete, transform, and generate" framework: DeleteAndRetrieve (D&R) (Li et al., 2018) and Blind-GenerativeStyleTransformer (B-GST) (Sudhakar et al., 2019).
Evaluation protocols: Four different automatic evaluation metrics are used to measure different perspectives of the transfer quality, following Sudhakar et al. (2019). To measure transfer ability, we use pre-trained CNN-based classifiers achieving 98% and 84% accuracy on the test sets of Yelp and Amazon respectively. To measure content preservation, we use the BLEU (Papineni et al., 2002) score between the transferred sentences and the source sentences. To measure fluency, we fine-tune OpenAI GPT-2 (Radford et al., 2019) with 345 million parameters on the same train-dev-test split to obtain the perplexity of generated sentences. The fine-tuned language models achieve perplexities of 26.6 and 34.5 on the test sets of Yelp and Amazon respectively. In addition, Sudhakar et al. (2019) argued that the Generalized Language Evaluation Understanding (GLEU) metric has a better correlation with human judgement. Here, we use the implementation of GLEU3 provided by Napoles et al. (2015) to calculate the GLEU score.
Result analysis: As observed by Li et al. (2018) and Sudhakar et al. (2019), accuracy, BLEU score and perplexity do not correlate well with human evaluations. Therefore, it is important not to consider them in isolation.
Tab. 2 shows that our proposed approaches achieve scores on these metrics similar to those of the human reference sentences (second row), indicating that the sentences generated by our proposed approaches are reasonable considering the combination of these metrics. As argued by Sudhakar et al. (2019) and verified in Sec. 6.1, GLEU strikes a balance between target style match and content retention, and correlates well with human evaluations. From Tab. 2, CP-VAE consistently outperforms the three adversarially trained models on GLEU by a noticeable margin and achieves competitive results compared to the recent state-of-the-art models. Checking the samples generated from the models, as shown in Tab. 3, B-GST, the current state of the art, is more consistent with the source sentence, which can be expected, since it only makes the edits necessary to flip the sentiment. CP-VAE tends to generate more diverse content, which may sometimes not be relevant, but the overall quality is reasonable considering that it is trained without label information. More samples can be found in Appendix E.
3https://github.com/cnap/gec-ranking" }, { "heading": "6.3 FINER-GRAINED STYLE DISCOVERY AND TRANSITION", "text": "To further explore the potential of CP-VAE, we conduct the following exploratory experiments. We use the AG news dataset constructed by Zhang et al. (2015), which contains four topic categories, World, Sports, Business and Sci/Tech, with title and description fields. Here, we drop the title and just use the description field to train CP-VAE, and set K = 10. All four topics are automatically discovered by CP-VAE and identified as described in Sec. 6.1. We also compare the results of our identified topics to standard baselines for unsupervised topic modelling; the details can be found in Appendix D. We choose a basis vector discovered by our model and generate a few tokens. Then, we switch the basis vector and continue the generation until the end-of-sequence token is generated. Generated samples are shown in Table 4. We see that our model learns to transition from one topic to another in a natural and fluent way within the same sentence. Several observations can be made based on these samples: (1) the model is good at detecting named entities and replacing them with named entities related to the chosen topic; (2) there is no hard restriction on when to switch the topic; the model determines an appropriate way to do the transition by itself. Such observations confirm that CP-VAE possesses a filled constrained latent space which makes the latent code robust to manipulation across different time steps, which is effectively reflected in the generation process. Due to space limitations, we put more samples in Appendix F." }, { "heading": "7 CONCLUSION", "text": "In this work, we investigate latent vacancy as an important problem in unsupervised learning of controllable representations when modelling text with VAEs. To mitigate this, we propose to constrain the posterior within a learned probability simplex, achieving the first success towards controlled text generation without supervision." }, { "heading": "A DETAILS ABOUT EXPLORATORY EXPERIMENTS", "text": "A.1 MODEL DETAILS
For the β-VAE used for the exploratory experiments, we use an LSTM encoding network and an LSTM decoding network. For the encoding network, the input size is 256 and the hidden size is 1,024.
For the decoding network, the input size is 256, the hidden size is 1,024, and dropouts with probability 0.5 are applied after the embedding layer and the LSTM layer in the decoding network. β is chosen as 0.35, the dimension of the latent code is 80, and the batch size is 32. We use SGD with learning rate 1.0 to update the parameters of both the encoding and the decoding network. We train the model until the reconstruction loss stops decreasing.
A.2 IDENTIFYING THE LATENT FACTOR INDICATING THE SENTIMENT
First, we normalize the value of each latent code by subtracting the mean estimated over all the training samples. Then we use the polarity of each latent code to classify the sentiment in the validation set. The one with the highest accuracy is identified as the latent factor indicating the sentiment.
A.3 SAMPLES GENERATED FROM β-VAE
A.4 MANIPULATION STRATEGIES
The following manipulation strategies have been attempted: (1) fixing the relevant factor to µ + 2σ and µ − 2σ; (2) fixing the relevant factor to µ + σ and µ − σ; (3) fixing the relevant factor to the maximum and the minimum value of the relevant factor appearing in the training samples; (4) calculating a latent vector based on 10 manually constructed parallel sentences with opposite sentiment while keeping other factors unchanged. However, none of these four strategies is effective considering the generation results. We report the results with the first strategy in the paper, since it performs best considering the accuracy and the BLEU score." }, { "heading": "B DETAILS ABOUT EXPERIMENTS ON TEXT STYLE TRANSFER", "text": "B.1 TRAINING REGIMES
Across all the datasets, we use Adam with learning rate 0.001 to update the parameters of the encoding network, and SGD with learning rate 1.0 to update the parameters of the decoding network. The batch size is chosen to be 32. Dropouts with drop probability 0.5 are applied after the embedding layer and the LSTM layer in the decoding network. We train the model until the reconstruction loss stops decreasing.
B.2 MITIGATING POSTERIOR COLLAPSE
For the structured part z(1), we use β-VAE with β = 0.2 across all the datasets. For the unstructured part z(2), different strategies are employed for each dataset:
• Yelp: β-VAE with β = 0.35.
• Amazon: β-VAE with β = 0.35.
• AG-News: KL annealing, from 0.1 to 1.0 over 10 epochs.
B.3 HYPERPARAMETER SETTINGS
The hyperparameters are chosen by checking L_VAE, KL, and the generated outputs on the development set for Yelp and AG-News. Amazon follows the same setting as Yelp without extra tuning.
B.4 DECODING STRATEGY
For decoding, we use beam search with a beam size of 5." }, { "heading": "C PROOF OF MINIMIZATION OF EQ. 5", "text": "The problem can be formulated as an optimization problem as follows:

\[ \text{minimize} \;\; \sum_{i=1}^{K} p_i^2, \qquad \text{subject to} \;\; \sum_{i=1}^{K} p_i = 1. \]

By introducing a Lagrange multiplier λ, the Lagrange function is defined as

\[ L(p_1, p_2, \ldots, p_K, \lambda) = \sum_{i=1}^{K} p_i^2 - \lambda \Big( \sum_{i=1}^{K} p_i - 1 \Big). \]

In order to find the optimal point, we require that

\[ \frac{\partial}{\partial p_i} \left( \sum_{i=1}^{K} p_i^2 - \lambda \Big( \sum_{i=1}^{K} p_i - 1 \Big) \right) = 2 p_i - \lambda = 0, \qquad i = 1, 2, \ldots, K, \]

which shows that all p_i are equal. By using the constraint Σ_i p_i = 1, we find p_i = 1/K, i = 1, 2, . . . , K. Plugging in this result, µ^⊤µ = α Σ_i p_i² reaches its minimum α/K." }, { "heading": "D COMPARISONS WITH BASELINES ON TOPIC MODELLING", "text": "Experimental setup: We use the AG news dataset for this task, constructed by Zhang et al. (2015).
It contains four topic categories, World, Sports, Business and Sci/Tech, with title and description fields. For each category, there are 30,000 training samples and 1,900 test samples. In this paper, we drop the title and just use the description field. We compare our approach to two standard baselines for unsupervised topic modelling: (1) LDA (Blei et al., 2003); a standard implementation of LDA is used for this baseline4; (2) k-means. To show the power of our approach beyond the pre-trained sentence representations, we perform k-means clustering directly on the sentence representations. Following Manning et al. (2010), we assign each inferred topic to one of the gold-standard topics with the optimal mapping and report the precision (a.k.a. purity), recall (a.k.a. collocation) and F1 score. The number of topics is chosen to be 10. The results reported for the baselines and our model are the average over 10 runs.
4https://radimrehurek.com/gensim/
Quantitative results: The results are shown in Table 7. We can see that our approach achieves results comparable to LDA while significantly outperforming k-means in all four categories, indicating that our approach goes beyond just clustering on pre-trained sentence representations." }, { "heading": "E TEXT TRANSFER EXAMPLES", "text": "E.1 SENTIMENT MANIPULATION ON YELP DATASET
E.2 SENTIMENT MANIPULATION ON AMAZON DATASET" }, { "heading": "F TEXT TRANSITION EXAMPLES ON AG NEWS", "text": "" } ]
2019
ON VARIATIONAL LEARNING OF CONTROLLABLE REPRESENTATIONS FOR TEXT WITHOUT SUPERVISION
SP:eb5d45ef0112f93ade7aa89d9a5132062590f9e1
[ "The paper has two main messages: 1- Averaging over the explanation (saliency map in the case of image data) of different methods results in a smaller error than an expected error of a single explanation method. 2- Introducing a new saliency map evaluation method by seeking to mitigate the effect of high spatial correlation in image data through grouping pixels into coherent segments. The paper then reports experimental results of the methods introduced in the first message being superior to existing saliency map methods using the second message (and an additional saliency map evaluation method in the literature). They also seek to magnify the capability of the 2nd message's evaluation method by showing its better capability at distinguishing between a random explanation and an explanation method with a signal in it.", "This paper, inspired by the established technique of model ensembling, proposes two methods (AGG-Mean and AGG-Var) for aggregating different model explanations into a single unified explanation. The authors mathematically prove that the derived explanation is guaranteed to be more truthful than the average performance of the constituent explanations. In practice, the aggregation consistently outperforms *all* individual explanations, not just their aggregated performance. Additionally, the paper introduces a new quantitative evaluation metric for explanations, free of human intervention: IROF (Incremental Removal of Features) incrementally grays out the segments deemed as relevant by an explanation method and observes how quickly the end-task performance is degraded (good explanations will cause fast degradation). Solid validation confirms that the IROF metric is sound." ]
Despite a growing literature on explaining neural networks, no consensus has been reached on how to explain a neural network decision or how to evaluate an explanation. Our contributions in this paper are twofold. First, we investigate schemes to combine explanation methods and reduce model uncertainty to obtain a single aggregated explanation. The aggregation is more robust and aligns better with the neural network than any single explanation method. Second, we propose a new approach to evaluating explanation methods that circumvents the need for manual evaluation and is not reliant on the alignment of neural network and human decision processes.
[]
[ { "authors": [ "Radhakrishna Achanta", "Appu Shaji", "Kevin Smith", "Aurelien Lucchi", "Pascal Fua", "Sabine Süsstrunk" ], "title": "Slic superpixels compared to state-of-the-art superpixel methods", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2012 }, { "authors": [ "Julius Adebayo", "Justin Gilmer", "Michael Muelly", "Ian Goodfellow", "Moritz Hardt", "Been Kim" ], "title": "Sanity checks for saliency maps", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Marco Ancona", "Enea Ceolini", "Cengiz Oztireli", "Markus Gross" ], "title": "Towards better understanding of gradient-based attribution methods for deep neural networks", "venue": "In 6th International Conference on Learning Representations (ICLR", "year": 2018 }, { "authors": [ "Sebastian Bach", "Alexander Binder", "Grégoire Montavon", "Frederick Klauschen", "Klaus-Robert Müller", "Wojciech Samek" ], "title": "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation", "venue": "PloS one,", "year": 2015 }, { "authors": [ "David Bau", "Bolei Zhou", "Aditya Khosla", "Aude Oliva", "Antonio Torralba" ], "title": "Network dissection: Quantifying interpretability of deep visual representations", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Chun-Hao Chang", "Elliot Creager", "Anna Goldenberg", "David Duvenaud" ], "title": "Explaining image classifiers by counterfactual generation", "venue": null, "year": 2018 }, { "authors": [ "Francois Chollet" ], "title": "Xception: Deep learning with depthwise separable convolutions", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Ruth C Fong", "Andrea Vedaldi" ], "title": "Interpretable explanations of black boxes by meaningful perturbation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Stuart Geman", "Elie Bienenstock", "René Doursat" ], "title": "Neural networks and the bias/variance dilemma", "venue": "Neural computation,", "year": 1992 }, { "authors": [ "Lars Kai Hansen", "Finn Årup Nielsen", "Stephen C Strother", "Nicholas Lange" ], "title": "Consensus inference in neuroimaging", "venue": null, "year": 2001 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Sara Hooker", "Dumitru Erhan", "Pieter-Jan Kindermans", "Been Kim" ], "title": "Evaluating feature importance estimates", "venue": "arXiv preprint arXiv:1806.10758,", "year": 2018 }, { "authors": [ "Been Kim", "Martin Wattenberg", "Justin Gilmer", "Carrie Cai", "James Wexler", "Fernanda Viegas" ], "title": "Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav)", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Pieter-Jan Kindermans", "Sara Hooker", "Julius Adebayo", "Google Brain", "Maximilian Alber", "Kristof T Schütt", "Sven Dähne", "Dumitru Erhan", "Been Kim" ], "title": "The 
(un)reliability of saliency methods", "venue": "In Proceedings Workshop on Interpreting, Explaining and Visualizing Deep Learning (at NIPS),", "year": 2017 }, { "authors": [ "Scott M Lundberg", "Su-In Lee" ], "title": "A unified approach to interpreting model predictions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Sina Mohseni", "Eric D Ragan" ], "title": "A human-grounded evaluation benchmark for local explanations of machine learning", "venue": "arXiv preprint arXiv:1801.05075,", "year": 2018 }, { "authors": [ "Grégoire Montavon", "Sebastian Lapuschkin", "Alexander Binder", "Wojciech Samek", "Klaus-Robert Müller" ], "title": "Explaining nonlinear classification decisions with deep taylor decomposition", "venue": "Pattern Recognition,", "year": 2017 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "Why should i trust you?: Explaining the predictions of any classifier", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Laura Rieger", "Pattarawat Chormai", "Grégoire Montavon", "Lars Kai Hansen", "Klaus-Robert Müller" ], "title": "Structuring Neural Networks for More Explainable Predictions", "venue": null, "year": 2018 }, { "authors": [ "Wojciech Samek", "Alexander Binder", "Grégoire Montavon", "Sebastian Lapuschkin", "Klaus-Robert Müller" ], "title": "Evaluating the visualization of what a deep neural network has learned", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2016 }, { "authors": [ "Wojciech Samek", "Grégoire Montavon", "Andrea Vedaldi", "Lars Kai Hansen", "Klaus-Robert Muller" ], "title": "Explainable AI: Interpreting, Explaining and Visualizing Deep Learning", "venue": null, "year": 2019 }, { "authors": [ "Ramprasaath R Selvaraju", "Michael Cogswell", "Abhishek Das", "Ramakrishna Vedantam", "Devi Parikh", "Dhruv Batra" ], "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Avanti Shrikumar", "Peyton Greenside", "Anshul Kundaje" ], "title": "Learning important features through propagating activation differences", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Sigurdur Sigurdsson", "Peter Alshede Philipsen", "Lars Kai Hansen", "Jan Larsen", "Monika Gniadecka", "Hans-Christian Wulf" ], "title": "Detection of skin cancer by classification of raman spectra", "venue": "IEEE transactions on biomedical engineering,", "year": 2004 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps", "venue": "URL http://arxiv. 
org/abs/1312.6034", "year": 2013 }, { "authors": [ "Daniel Smilkov", "Nikhil Thorat", "Been Kim", "Fernanda Viégas", "Martin Wattenberg" ], "title": "Smoothgrad: removing noise by adding noise", "venue": null, "year": 2017 }, { "authors": [ "JT Springenberg", "A Dosovitskiy", "T Brox", "M Riedmiller" ], "title": "Striving for simplicity: The all convolutional net", "venue": "In ICLR (workshop track),", "year": 2014 }, { "authors": [ "Mukund Sundararajan", "Ankur Taly", "Qiqi Yan" ], "title": "Axiomatic attribution for deep networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Andrea Vedaldi", "Stefano Soatto" ], "title": "Quick shift and kernel methods for mode seeking", "venue": "In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics),", "year": 2008 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. aug 2017", "venue": "URL https://arxiv.org/abs/1708", "year": 2017 }, { "authors": [ "Matthew D Zeiler" ], "title": "Adadelta: an adaptive learning rate method", "venue": "arXiv preprint arXiv:1212.5701,", "year": 2012 }, { "authors": [ "Matthew D Zeiler", "Rob Fergus" ], "title": "Visualizing and understanding convolutional networks", "venue": "In European Conference on Computer Vision,", "year": 2014 }, { "authors": [ "Quanshi Zhang", "Ying Nian Wu", "Song-Chun Zhu" ], "title": "Interpretable convolutional neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Bolei Zhou", "Aditya Khosla", "Agata Lapedriza", "Aude Oliva", "Antonio Torralba" ], "title": "Learning deep features for discriminative localization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Luisa M Zintgraf", "Taco S Cohen", "Tameem Adel", "Max Welling" ], "title": "Visualizing deep neural network decisions: Prediction difference analysis", "venue": "In ICLR,", "year": 2017 } ]
[ { "heading": null, "text": "Despite a growing literature on explaining neural networks, no consensus has been reached on how to explain a neural network decision or how to evaluate an explanation. Our contributions in this paper are twofold. First, we investigate schemes to combine explanation methods and reduce model uncertainty to obtain a single aggregated explanation. The aggregation is more robust and aligns better with the neural network than any single explanation method. Secondly, we propose a new approach to evaluating explanation methods that circumvents the need for manual evaluation and is not reliant on the alignment of neural networks and humans decision processes." }, { "heading": "1 INTRODUCTION", "text": "Despite the great success of neural networks especially in classic visual recognition problems, explaining the networks’ decisions remains an open research problem Samek et al. (2019). This is due in part to the complexity of the visual recognition problem and in part to the basic ’ill-posedness’ of the explanation task. This challenge is amplified by the fact that there is no agreement on what a sufficient explanation is and how to evaluate an explanation method.\nMany different explanation strategies and methods have been proposed (Simonyan et al., 2013; Zeiler & Fergus, 2014; Bach et al., 2015; Selvaraju et al., 2017; Smilkov et al., 2017; Sundararajan et al., 2017). Focusing on visual explanations for individual decisions, most methods either use a backpropagation approach or aim to construct a simpler linear model with an intuitive explanation. The plethora of explanation approaches is a signature of the high-level epistemic uncertainty of the explanation task.\nThis paper is motivated by a key insight in machine learning: Ensemble models can reduce both bias and variance compared to applying a single model. A related approach was pursued for functional visualization in neuroimaging (Hansen et al., 2001). Here we for the first time explore the potential of aggregating explanations of individual visual decisions in reducing epistemic uncertainty for neural networks.\nWe test the hypothesis that ensembles of multiple explanation methods are more robust than any single method. This idea is analyzed theoretically and evaluated empirically. We discuss the properties of the aggregate explanations and provide visual evidence that they combine features, hence are more complete and less biased than individual schemes. Based on this insight, we propose two ways to aggregate explanation methods, AGG-Mean and AGG-Var. In experiments on Imagenet, MNIST, and FashionMNIST, the aggregates identify relevant parts of the image more accurately than any single method.\nSecond, we introduce IROF (Iterative Removal Of Features) as a new approach to quantitatively evaluate explanation methods without relying on human evaluation. We circumvent the problems of high correlation between neighbor pixels as well as the human bias that are present in current evaluation methods." 
}, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 EXPLANATION METHODS", "text": "The open problem of explainability is reflected in a lot of recent work (Kindermans et al., 2017; Selvaraju et al., 2017; Bach et al., 2015; Zhang et al., 2018; Zhou et al., 2016; Ancona et al., 2018; Ribeiro et al., 2016; Rieger et al., 2018; Kim et al., 2018; Lundberg & Lee, 2017; Zintgraf et al., 2017; Simonyan et al., 2013; Zeiler & Fergus, 2014; Selvaraju et al., 2017; Smilkov et al., 2017; Sundararajan et al., 2017; Shrikumar et al., 2017; Montavon et al., 2017; Chang et al., 2018). We focus on generating visual explanations for single samples. The first work in this direction was Simonyan et al. (2013) with Saliency Maps (SM) that proposed backpropagating the output onto the input to gain an understanding of a neural network decision. The relevance for each input dimension is extracted by taking the gradient of the output w. r. t. to the input. This idea was extended by Springenberg et al. (2014) into Guided Backpropagation (GM) by applying ReLU non-linearities after each layer during the backpropagation. Compared to Saliency, this removes visual noise in the explanation. Grad-CAM (GC) from Selvaraju et al. (2017) is an explanation method, developed for use with convolutional neural networks. By backpropagating relevance through the dense layers and up-sampling the evidence for the convolutional part of the network, the method obtains coarse heatmaps that highlight relevant parts of the input image. Integrated Gradients (IG) Sundararajan et al. (2017) sums up the gradients from linearly interpolated pictures between a baseline, e.g. a black image, and the actual image. SmoothGrad (SG) filters out noise from a basic saliency map by creating many samples of the original input with Gaussian noise Smilkov et al. (2017). The final saliency map is the average over all samples.\nFinally, we also consider LIME Ribeiro et al. (2016). In contrast to the other methods, LIME is not based on backpropagation. Instead, it approximates the neural network with a linear model locally around the input to be explained. The coefficients of the linear model for the respective input dimensions give the importance of each dimension. Compared to the other methods this is much more computationally expensive as it requires many passes through the neural network." }, { "heading": "2.2 EVALUATION OF EXPLANATION METHODS", "text": "The evaluation of explanation methods is a relatively recent topic with few systematic approaches (Bau et al., 2017; Ancona et al., 2018; Hooker et al., 2018; Adebayo et al., 2018; Fong & Vedaldi, 2017). To our knowledge, Bach et al. (2015) proposed the first quantitative approach to evaluate an explanation method by flipping pixels to their opposite and comparing the decrease in output with the relevance attributed to the pixel by the explanation method. As the authors note, this only works for low-dimensional input. This work was followed up upon in Samek et al. (2016). By dividing high-dimensional images into squares, they make the method feasible for high-dimensional inputs. Squares with high relevance (as measured by the explanation method) consecutively get replaced with noise sampled from the uniform distribution. The difference between the original output and the output for the degraded images indicates the quality of the explanation method.\nHooker et al. (2018) proposes another quantitative approach to evaluate explanation methods called ROAR. 
For each explanation method they extract the relevance maps over the entire training set. They degrade the training set by setting different percentages of the pixels with the highest relevance to the mean and retrain the network. Each retrained network is evaluated on the test set. The accuracy on the test set decreases dependent on the percentage of pixels set to the mean. This requires retraining the same architecture multiple times for each explanation method, at a high computational cost.
Ancona et al. (2018) proposed a different approach to evaluate explanation methods, called Sensitivity-n, based on the notion that the decrease in output when a number of inputs are canceled out should be equal to the sum of their relevances. For a range of n (between 1 and the total number of inputs) they sample a hundred subsets of the input. For each n, the Pearson Correlation Coefficient (PCC) between the decrease in output, when the subset of features is removed, and the sum of their relevances is reported. The result is a curve of the PCC dependent on the percentage of the input being removed. For a good explanation method, the PCC will decrease slowly." }, { "heading": "3 METHODS", "text": "" }, { "heading": "3.1 AGGREGATING EXPLANATION METHODS TO REDUCE VARIANCE", "text": "All currently available explanation methods have weaknesses that are inherent to the approach and include significant uncertainty in the resulting heatmap (Kindermans et al., 2017; Adebayo et al., 2018; Smilkov et al., 2017). A natural way to mitigate this issue and reduce noise is to combine multiple explanation methods. Ensemble methods have been used for a long time to reduce the variance and bias of machine learning models. We apply the same idea to explanation methods and build an ensemble of explanation methods.
We assume a neural network F : X ↦ y with X ∈ R^{m×m} and a set of explanation methods {e_j}_{j=1}^J with e_j : (X, y, F) ↦ E, where E ∈ R^{m×m}. We write E_{j,n} for the explanation obtained for X_n with method e_j and denote the mean aggregate explanation as Ē with Ē_n = (1/J) Σ_{j=1}^J E_{j,n}. While we assume the input to be an image in R^{m×m}, this method is generalizable to inputs of other dimensionalities as well.
To get a theoretical understanding of the benefit of aggregation, we hypothesize the existence of a 'true' explanation Ê_n. This allows us to quantify the error of an explanation method as the mean squared difference between the 'true' explanation and an explanation procured by an explanation method, i.e. the MSE.
For clarity we subsequently omit the notation for the neural network. We write the error of explanation method j on image X_n as err_{j,n} = ||E_{j,n} − Ê_n||² with

\[ \mathrm{MSE}(E_j) = \frac{1}{N} \sum_n \mathrm{err}_{j,n}, \]

and MSE(Ē) = (1/N) Σ_n ||Ē_n − Ê_n||² is the MSE of the aggregate. The typical error of an explanation method is the mean error over all explanation methods,

\[ \overline{\mathrm{MSE}} = \frac{1}{J} \sum_j \mathrm{MSE}(E_j). \]

With these definitions we can do a standard bias-variance decomposition (Geman et al., 1992). Accordingly, we can show that the error of the aggregate will be less than the typical error of the explanation methods:

\[ \overline{\mathrm{MSE}} = \frac{1}{N} \sum_n \frac{1}{J} \sum_j \|\hat{E}_n - E_{j,n}\|^2 = \frac{1}{N} \sum_n \|\hat{E}_n - \bar{E}_n\|^2 + \frac{1}{NJ} \sum_{n,j} \|\bar{E}_n - E_{j,n}\|^2, \tag{1} \]

hence,

\[ \overline{\mathrm{MSE}} = \underbrace{\frac{1}{J} \sum_j \frac{1}{N} \sum_n \|\bar{E}_n - E_{j,n}\|^2}_{\text{epistemic uncertainty}} + \mathrm{MSE}(\bar{E}) \;\geq\; \mathrm{MSE}(\bar{E}). \]

A detailed calculation is given in appendix A.1. The error of the aggregate, MSE(Ē), is less than the typical error of the participating methods. The difference, a 'variance' term, represents the epistemic uncertainty and only vanishes if all methods produce identical maps.
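As a quick numerical sanity check of this inequality on synthetic data (illustrative only; the random arrays stand in for hypothetical 'true' and estimated explanations, and are not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
E_true = rng.normal(size=(5, 8, 8))          # hypothetical 'true' explanations, N=5
E = E_true + rng.normal(size=(6, 5, 8, 8))   # J=6 noisy explanation methods

# typical (mean) error of single methods vs. error of the mean aggregate
typical_mse = ((E - E_true) ** 2).sum(axis=(2, 3)).mean()
agg_mse = ((E.mean(axis=0) - E_true) ** 2).sum(axis=(1, 2)).mean()
assert agg_mse <= typical_mse                # guaranteed by the decomposition above
```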
By taking the average over all available explanation methods, we reduce the variance of the explanation compared to using a single method. To obtain this average, we normalize all input heatmaps such that the relevance over all pixels sums up to one. This reflects our initial assumption that all individual explanation methods are equally good estimators. We refer to this approach as AGG-Mean:

\[ E_{\mathrm{AGG\text{-}Mean},n} = \frac{1}{J} \sum_{j=1}^{J} E_{j,n}. \]

This estimator, however, does not take into account the estimate of the local epistemic uncertainty, i.e. the disagreement between methods. A way to incorporate this information is to form an 'effect size' map by dividing the mean aggregate locally by its standard deviation (Sigurdsson et al., 2004). Intuitively, this will assign less relevance to segments with high disagreement between methods.
For stability, we do not divide directly by the local variance but add a constant ε to the estimate of the local variance. This can be interpreted as a smoothing regularizer or a priori information regarding epistemic and aleatoric uncertainties. We refer to this approach as AGG-Var:

\[ E_{\mathrm{AGG\text{-}Var},n} = \frac{1}{J} \sum_{j=1}^{J} \frac{E_{j,n}}{\sigma(E_{j \in J,n}) + \epsilon}, \]

where σ(E_{j∈J,n}) is the point-wise standard deviation over all explanations j ∈ J for X_n and ε is the smoothing constant described above (its value is given in Sec. 4.1). In section 4 we will evaluate and compare AGG-Mean and AGG-Var against basic explanation methods." }, { "heading": "3.2 EVALUATING EXPLANATION METHODS QUANTITATIVELY WITH IROF", "text": "Quantitative evaluation is a recurring problem with explainability methods. This is especially true for high-dimensional input, such as images, where important features are made up of locally highly correlated pixels. If the information in one pixel is lost, this will not change the overall feature and should therefore not result in a changed output score. The relevance values of single pixels are therefore not indicative of the feature's importance as a whole. We circumvent this problem by utilizing conventional image segmentation, a well-explored area. By first dividing the image into coherent segments, we avoid the interdependency between the inputs.
Methodology: We assume a neural network F : X ↦ y with X ∈ R^{m×m} and a set of explanation methods {e_j}_{j=1}^J with e_j : (X, y, F) ↦ E, where E ∈ R^{m×m}.
Furthermore, we partition each image X_n into a set of segments {S_n^l}_{l=1}^L using a given segmentation method, with s_{n,i,j}^l = 1 indicating that pixel x_{n,i,j} belongs to segment l. Computing the mean importance ||E_{j,n} S_n^l||_1 / ||S_n^l||_1 of each segment according to a given explanation method j, two segments can be directly compared against each other. By sorting the segments in decreasing order of importance according to the explanation method, we get a ranking of how relevant each segment of the image is.
We use X_n^{′l} to indicate X_n with the l segments of highest mean relevance replaced with the mean value. Taking F(X_n^{′l})_y repeatedly with increasing l ∈ {0, . . . , L} results in a curve of the class score dependent on how many segments of the image are removed. Dividing this curve by F(X_n^{′0})_y normalizes the scores to be within [0, 1] and makes curves comparable between input samples and networks.
If an explanation method works well, it will attribute high relevance to segments important for classification. As segments with high relevance are removed first, the score for the target class will go down faster. By computing the area over the curve (AOC) for the class score curve and averaging over a number of input samples, we can identify the method that identifies relevant areas of the image more reliably.
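The per-image computation just described can be sketched as follows. This is a minimal version, assuming a SLIC-style segmentation and a Keras-style classifier; all names are illustrative rather than the authors' code:

```python
import numpy as np
from skimage.segmentation import slic

def irof_curve(model, x, heatmap, target, n_segments=100):
    """Normalized class-score curve as the most relevant segments are removed.
    x: [H, W, 3] image, heatmap: [H, W] relevance map, target: class index."""
    segments = slic(x, n_segments=n_segments)
    labels = np.unique(segments)
    # mean relevance per segment, sorted in decreasing order of importance
    importance = np.array([heatmap[segments == l].mean() for l in labels])
    order = labels[np.argsort(importance)[::-1]]
    base = model.predict(x[None])[0, target]          # F(X'^0)_y, undegraded score
    mean_value = x.mean(axis=(0, 1))
    scores, x_degraded = [], x.copy()
    for l in order:
        scores.append(model.predict(x_degraded[None])[0, target] / base)
        x_degraded[segments == l] = mean_value        # replace segment with the mean
    aoc = 1.0 - np.mean(scores)   # simple rectangle estimate of the area over the curve
    return np.array(scores), aoc
```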
For a good explanation method, the AOC will be higher. We refer to this evaluation method as the iterative removal of features (IROF). The IROF for a given explanation method e_j is expressed as:

\[ \mathrm{IROF}(e_j) = \sum_{n=1}^{N} \mathrm{AOC}\!\left( \frac{F(X_n^{\prime l})_y}{F(X_n^{\prime 0})_y} \right)_{l=0}^{L}. \]

This approach is a quantitative comparison of two or more explainability methods that does not rely on human evaluation or on alignment between human and neural network reasoning. For each explanation method the workflow produces a single value, enabling convenient comparison between two or more explanation methods. If the AOC is higher, the explanation method captures more information about the neural network classification.
IROF is dependent on having meaningful segments in the input, as natural images do. Dividing up text or non-natural images such as EEG into meaningful and independent segments does not have a natural solution and is left for future research." }, { "heading": "4 EXPERIMENTS", "text": "We first present empirical validation of our proposed evaluation technique IROF in section 4.2. Subsequently we evaluate the aggregation of explanation techniques against the vanilla techniques with IROF, Sensitivity-n and qualitative evaluation. In appendix A.6.1 we compare aggregated methods on a dataset of human-annotated heatmaps." }, { "heading": "4.1 EXPERIMENTAL DETAILS", "text": "We tested our method on five neural network architectures that were pre-trained on ImageNet: VGG19, Xception, Inception, ResNet50 and ResNet101 (Deng et al., 2009; Simonyan & Zisserman, 2014; He et al., 2016; Chollet, 2017; Szegedy et al., 2016).1 Additionally, we ran experiments on a CNN trained on the MNIST and FashionMNIST datasets (LeCun & Cortes, 2010; Xiao et al., 2017).
We compared the aggregation methods against Saliency (SM), Guided Backpropagation (GB), SmoothGrad (SG), Grad-CAM (GC) and Integrated Gradients (IG) to have a selection of attribution-based methods. Additionally we compared against LIME as a method that is not based on attribution but rather on local approximation (Ribeiro et al., 2016). The aggregations are based on all attribution-based methods.
Some of the methods produce positive and negative evidence. We only considered positive evidence for the ImageNet tasks to compare methods against each other. To check that this does not corrupt the methods, we compared the methods that do contain negative evidence against their filtered versions and found negligible difference between the two versions of a method in the used metrics.
For AGG-Var we introduced an additional parameter ε in the divisor. In our experiments we set ε to be ten times the mean σ over the entire dataset.
1Models retrieved from https://github.com/keras-team/keras." }, { "heading": "4.2 EVALUATING IROF FOR VALIDITY AS AN EVALUATION METHOD", "text": "A good evaluation method should be able to reject the null hypothesis (that a given explanation method is no better than random choice) with high confidence. We use this as motivation to evaluate and compare IROF by calculating the paired t-test of an explanation method versus random guessing. This is done with multiple explanation methods and networks, to reduce the impact of the explanation method.
We compare IROF and pixel removal, each with the mean value and with black as the replacement value. Additionally we compare against Samek et al. (2016) as explained in section 2. For IROF and Samek et al. (2016) we set the 10% most relevant segments to the mean value over the dataset.
{ "heading": "4 EXPERIMENTS", "text": "We first present empirical validation of our proposed evaluation technique IROF in section 4.2. Subsequently, we evaluate the aggregation of explanation techniques against the vanilla techniques with IROF, Sensitivity-n and qualitative evaluation. In appendix A.6.1 we compare aggregated methods on a dataset of human-annotated heatmaps." }, { "heading": "4.1 EXPERIMENTAL DETAILS", "text": "We tested our method on five neural network architectures that were pre-trained on ImageNet: VGG19, Xception, Inception, ResNet50 and ResNet101 (Deng et al., 2009; Simonyan & Zisserman, 2014; He et al., 2016; Chollet, 2017; Szegedy et al., 2016).1 Additionally, we ran experiments on a CNN trained on the MNIST and FashionMNIST datasets (LeCun & Cortes, 2010; Xiao et al., 2017).

We compared the aggregation methods against Saliency (SM), Guided Backpropagation (GB), SmoothGrad (SG), Grad-CAM (GC) and Integrated Gradients (IG) to have a selection of attribution-based methods. Additionally, we compared against LIME (Ribeiro et al., 2016) as a method that is not based on attribution but rather on local approximation. The aggregations are based on all attribution-based methods.

Some of the methods produce both positive and negative evidence. We only considered positive evidence for the ImageNet tasks to compare methods against each other. To check that this does not corrupt the methods, we compared the methods that do produce negative evidence against their filtered versions and found negligible differences between the two versions of a method in the used metrics.

For AGG-Var we introduced an additional parameter, $\epsilon$, in the divisor. In our experiments we set $\epsilon$ to ten times the mean $\sigma$ over the entire dataset.

1Models retrieved from https://github.com/keras-team/keras." }, { "heading": "4.2 EVALUATING IROF FOR VALIDITY AS AN EVALUATION METHOD", "text": "A good evaluation method should be able to reject the null hypothesis (a given explanation method is no better than random choice) with high confidence. We use this as motivation to evaluate and compare IROF by calculating the paired t-test of an explanation method versus random guessing. This is done with multiple explanation methods and networks to reduce the impact of any single explanation method.

We compare IROF against pixel removal, with the mean value and black as replacement values respectively. Additionally, we compare against Samek et al. (2016) as explained in section 2. For IROF and Samek et al. (2016) we set the 10% most relevant segments to the mean value over the dataset. For pixel removal, we set the equivalent number of pixels to the mean value. The percentage of segments or pixels removed was chosen ad hoc. If the difference in degradation between random choice and the explanation method is high, the explanation method reports meaningful information. Since we compare the same explanation method on the same neural network with different evaluation methods, the p-values only contain information about how meaningful the evaluation method is.

We computed IROF and pixel removal with black or mean replacement values and compared how the p-values change with the number of samples. Results are shown in fig. 2 (extended in appendix A.6). In table 1 we provide results for forty images in tabular form for an easier overview (other methods in appendix A.6). On forty images, all evaluation methods produce p-values below 0.05. Thus, all evaluation methods can distinguish between random guessing and an explanation method.

However, IROF can reject the null hypothesis (the explanation method does not contain any information) with much higher confidence for the same number of samples in any configuration. We conclude that IROF is more sensitive to the explanation method than pixel removal or Samek et al. (2016), making it the better choice for quantitatively evaluating an explanation method." }, { "heading": "4.3 EVALUATING EXPLANATION METHODS WITH IROF", "text": "To quantitatively compare the quality of the explanation methods on a more challenging dataset, we use IROF on a number of neural network architectures trained on ImageNet. In table 2 we report the IROF as described in section 3.2. We include two non-informative baselines: Random, which randomly chooses segments to remove, and Sobel, which ranks segments with a Sobel edge detector. Neither of them contains information about the neural network.

All explanation methods have a lower IROF than the random baseline on all architectures tested, indicating that all methods contain information about the image classification. Except for LIME, all methods also surpass the stronger baseline, Sobel. The ranking of unaggregated methods varies considerably between architectures. This variance indicates that the accuracy of an explanation method depends on the complexity and structure of the neural network architecture. For all architectures, AGG-Mean and AGG-Var have a lower IROF score than any non-aggregated method. For ResNet101 the difference between the best unaggregated method and the aggregated methods is especially large. We hypothesize that the benefit of aggregating explanation methods increases for more complex neural networks with large epistemic uncertainty on the explanation.

We can empirically confirm that aggregating methods improves over unaggregated methods and more reliably identifies the parts of the image that are relevant for classification." }, { "heading": "4.4 QUALITATIVE VISUAL EVALUATION", "text": "We show heatmaps for the individual examination of each method in fig. 3 and compare them qualitatively (large version of fig. 3 in appendix A.6.2). While visual evaluation of explanations for neural networks can be misleading, there is no better way available of checking whether any given explanation method agrees with intuitive human understanding (Adebayo et al., 2018).
Additionally, we compute the alignment between human-annotated images and the explanation methods in appendix A.6.1, using the human benchmark for evaluation introduced in Mohseni & Ragan (2018).

AGG-Var combines features of the aggregated methods by attributing relevance to the classified object as a whole, while considering smaller details such as the face of an animal as more relevant. It is a combination of the detail-oriented and shape-oriented methods. Compared to SmoothGrad, which concentrates on one isolated feature, the relevance is more evenly distributed and aligned with our human intuition that classification takes context into account and does not rely on, e.g., only the beak of a bird. We can conclude that combining explainability methods provides a meaningful visual improvement over single methods.

4.5 EVALUATION WITH SENSITIVITY-n ON LOW-DIMENSIONAL INPUT

To quantitatively compare explanation methods on low-dimensional input we use Sensitivity-n (Ancona et al., 2018). The exact procedure is described in section 2. We compare on MNIST (LeCun & Cortes, 2010) and FashionMNIST (Xiao et al., 2017), two low-dimensional datasets, with a basic CNN2 (architecture in the appendix). We follow the procedure suggested in Ancona et al. (2018) and test on a hundred randomly sampled subsets for 1000 randomly sampled test images. The subset size n is chosen at fifteen points logarithmically spaced between 10 and 780 pixels.

As described in section 2, for a range of n (between 1 and the total number of inputs), a hundred subsets of the input features are removed. For each n, the average Pearson correlation coefficient (PCC) between the decrease in output and the relevance of the removed input features is reported. The result is a curve of the PCC dependent on the removed percentage.

We show the results in fig. 4. AGG-Mean and AGG-Var perform in the range of the best methods. For the CNN trained on FashionMNIST, AGG-Mean and AGG-Var perform better than the unaggregated methods. For the CNN trained on MNIST, Guided Backpropagation and AGG-Mean perform best. For both networks (trained on FashionMNIST and MNIST, respectively), SmoothGrad and Grad-CAM perform considerably worse than the other methods.

In summary, aggregation seems not to be as beneficial when applied to low-dimensional, \"easier\" tasks such as MNIST as it is for ImageNet, performing in the range of the best unaggregated method. We hypothesize that this is because there is less epistemic uncertainty in explanations for less complex tasks and network architectures.

2Model and code retrieved from https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py.

A rough sketch of the Sensitivity-n computation is given below." },
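Below is a minimal sketch (ours) of the Sensitivity-n correlation for a single image and a single n; model_fn is again a hypothetical callable, and removed pixels are set to zero.

```python
import numpy as np

def sensitivity_n(model_fn, image, heatmap, target, n, n_subsets=100, seed=0):
    """Sensitivity-n (Ancona et al., 2018): Pearson correlation between the
    summed relevance of n randomly removed pixels and the resulting drop
    in the target class score."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    base = model_fn(image[None])[0, target]
    rel_sums, drops = [], []
    for _ in range(n_subsets):
        idx = rng.choice(h * w, size=n, replace=False)
        rows, cols = np.unravel_index(idx, (h, w))
        x = image.copy()
        x[rows, cols] = 0.0                      # remove the sampled pixels
        drops.append(base - model_fn(x[None])[0, target])
        rel_sums.append(heatmap[rows, cols].sum())
    return np.corrcoef(rel_sums, drops)[0, 1]
```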
}, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 AGGREGATING EXPLANATION METHODS TO REDUCE VARIANCE - DETAILED DERIVATION", "text": "All currently available explanation methods have weaknesses that are inherent to the approach and include significant noise in the heatmap (Kindermans et al., 2017; Adebayo et al., 2018; Smilkov et al., 2017). A natural way to mitigate this issue and reduce noise is to combine multiple explanation methods. Ensemble methods have been used for a long time to reduce the variance and bias of machine learning models. We apply the same idea to explanation methods and build an ensemble of explanation methods.\nWe assume a neural network F : X 7→ y with X ∈ Rmxm and a set of explanation methods {ej}Jj=1 with ej : X, y, F 7→ E with E ∈ Rmxm. We write Ej,n for the explanation obtained for Xn with method ej and denote the mean aggregate explanation as ē with Ēn = 1J ∑J j=1Ej,n. While we assume the input to be an image ∈ Rmxm, this method is generalizable to inputs of other dimensions as well.\nWe define the error of an explanation method as the mean squared difference between a hypothetical ’true’ explanation and an explanation procured by the explanation method, i.e. the MSE. For this definition we assume the existence of the hypothetical ’true’ explanation Ên for image Xn.\nFor clarity we subsequently omit the notation for the neural network.\nWe write the error of explanation method j on image Xn as errj,n = ||Ej,n − Ên||2 with\nMSE(Ej) = 1\nN ∑ n errj,n\nand MSE(Ē) = 1N ∑\nn ||Ēn− Ên||2 is the MSE of the aggregate. The typical error of an explanation method is represented by the mean\nMSE = 1\nN ∑ n 1 J ∑ j ||Ên − Ej,n||2\n= 1\nNJ ∑ n,j ||Ên − Ej,n + Ēn − Ēn||2\n= 1\nNJ ∑ n,j ||(Ên − Ēn) + (Ēn − Ej,n)||2\n= 1\nNJ ∑ n,j ( ||Ên − Ēn||2 + ||Ēn − Ej,n||2 + 2Tr [ (Ên − Ēn)(Ēn − Ej,n) ])\n= 1\nN ∑ n ||Ên − Ēn||2 + 1 NJ ∑ n,j ||Ēn − Ej,n||2 + 2 1 N ∑ n Tr (Ên − Ēn) 1 J ∑ j (Ēn − Ej,n) \n= 1\nN ∑ n ||Ên − Ēn||2 + 1 NJ ∑ n,j ||Ēn − Ej,n||2 + 2 1 N ∑ n Tr (Ên − Ēn) 1J ∑ j\n(Ēn − Ej,n)︸ ︷︷ ︸ =0 = 1\nN ∑ n ||Ên − Ēn||2 + 1 NJ ∑ n,j ||Ēn − Ej,n||2,\nhence,\nMSE = MSE(Ē) + 1\nNJ ∑ n,j ||Ēn − Ej,n||2︸ ︷︷ ︸ epistemic uncertainty ≥ MSE(Ē)\nThe error of the aggregate MSE(Ē) is less than the typical error of the participating methods. The difference - a ‘variance’ term - represents the epistemic uncertainty and only vanishes if all methods produce identical maps." }, { "heading": "A.2 COMPARING AGGREGATE OF TWO METHODS", "text": "In section 3.1 we showed theoretically that the average MSE of two or more explanation methods will always be higher than the error of the averaged of those methods. Empirically, we test this for IROF with combinations of any two methods for ResNet101 and show the results in fig. 5. For any two methods, the matrix shows the ratio between the aggregate method IROF and the average IROF of the aggregated methods. The aggregate IROF is always lower, confirming our theoretical results.\nA.3 IROF EXTENDED" }, { "heading": "A.4 CHOOSING EXPLANATION METHODS", "text": "An important part of our method is the choice of explanation methods to be included in the aggregation. In our work we focus on backpropagation-based methods, since they tend to be computationally cheap. 
{ "heading": "A.2 COMPARING AGGREGATE OF TWO METHODS", "text": "In section 3.1 we showed theoretically that the average MSE of two or more explanation methods is always at least as large as the error of the average of those methods. Empirically, we test this for IROF with combinations of any two methods for ResNet101 and show the results in fig. 5. For any two methods, the matrix shows the ratio between the aggregate's IROF and the average IROF of the two aggregated methods. The aggregate IROF is always lower, confirming our theoretical results." }, { "heading": "A.3 IROF EXTENDED", "text": "" }, { "heading": "A.4 CHOOSING EXPLANATION METHODS", "text": "An important part of our method is the choice of explanation methods to be included in the aggregation. In our work we focus on backpropagation-based methods, since they tend to be computationally cheap. In our opinion this makes for a more realistic use case, not only for human but also for machine post-processing. In contrast, locality-based methods such as LIME or the method by Fong & Vedaldi (2017) require many forward passes, since these methods essentially \"learn\" which parts of the input are relevant. We included LIME in our experiments to have a method that is not backpropagation-based." }, { "heading": "A.5 EXPERIMENTAL SETUP", "text": "" }, { "heading": "A.5.1 GENERAL", "text": "We use SLIC for image segmentation due to its availability and quick run time (Achanta et al., 2012); preliminary experiments with Quickshift showed similar results (Vedaldi & Soatto, 2008), and SLIC was chosen for its quicker run time. The number of segments was set to 300 ad hoc; fig. 1 shows the same procedure with 100 segments for the sake of clarity. For AGG-Var, we add a constant $\epsilon$ to the denominator. We set this constant to ten times the mean standard deviation, a value chosen empirically after trying values of 1, 10 and 100 times the mean. Evaluations were run with a fixed random seed for reproducibility. Standard deviations are reported either for each individual result or, if they were non-significant, in the caption, to avoid cluttering the results. Since our work does not include computationally heavy training, we did not record the exact computing infrastructure." }, { "heading": "A.5.2 MNISTS", "text": "The training for both models was identical. The architecture was as follows: (input)-(conv(32,3,3))-(conv(64,3,3))-(maxPool(2,2))-(dropout(0.25))-(fully connected(128))-(dropout(0.5))-(output(10)). ReLU was used as the non-linearity in both. All networks were trained with Adadelta (Zeiler, 2012) and early stopping on the validation set (patience of three epochs). The final accuracy was 99.21% on MNIST and 92.46% on FashionMNIST." }, { "heading": "A.5.3 IMAGENET", "text": "We tested our method on five network architectures that were pre-trained on ImageNet: VGG19, Xception, Inception, ResNet50 and ResNet101 (Deng et al., 2009; Simonyan & Zisserman, 2014; He et al., 2016; Chollet, 2017; Szegedy et al., 2016). We used the pre-trained networks obtained from the keras library and did not change the networks in any way.

We downloaded the data from the ImageNet Large Scale Visual Recognition Challenge website and used the validation set only. No images were excluded. The images were preprocessed to be within [−1, 1] unless a custom range was used for training (indicated by the preprocess function of keras)." }, { "heading": "A.6 EVALUATING THE EVALUATION", "text": "We report p-values for evaluating with 50 images on ResNet101 in the manner described in section 4.2 in tabular form to provide a clear overview. We also provide an extended version of fig. 2.

[Figure 7: Sensitivity-n for explanation methods. Higher is better. The proposed methods, AGG-Mean and AGG-Var, perform better than or as well as all other methods. (Large version of fig. 4.)]" }, { "heading": "A.6.1 ALIGNMENT BETWEEN HUMAN ATTRIBUTION AND EXPLANATION METHODS", "text": "We want to quantify whether an explanation method agrees with human judgement on which parts of an image should be important. While human annotation is expensive, there exists a benchmark for human evaluation introduced in Mohseni & Ragan (2018).
The benchmark includes ninety images of categories in the ImageNet challenge (ten images were excluded because their category is not in the ImageNet challenge) and provides annotations of the relevant segments that ten human test subjects found important. Example images are shown in fig. 8.

While human evaluation is not a precise measure, we still expect some correlation between neural network and human judgement.

To test the alignment, we calculate the cosine similarity between the human annotation and the explanations produced by the respective explanation methods,

$$\mathrm{similarity}(e_j) = \frac{\sum_{n=1}^{N} A_n \cdot E_{j,n}}{\sqrt{\sum_{n=1}^{N} A_n^2}\,\sqrt{\sum_{n=1}^{N} E_{j,n}^2}},$$

where $A_n$ is the human annotation of what is important in image $X_n$.

Since the images in this dataset are 224×224 pixels, we only compute the cosine similarity for the network architectures for which pre-trained networks with this input size were available.

We see that AGG-Mean and AGG-Var perform on par with the best methods (SmoothGrad and Grad-CAM). While the aggregated methods perform better than the average explanation method, they do not surpass the best method.

When we combine the two best-performing single methods, SmoothGrad and Grad-CAM, we surpass each individual method. We hypothesize that this is because the epistemic uncertainty is reduced by the aggregate." }, { "heading": "A.6.2 EXAMPLE HEATMAPS", "text": "" } ]
2019
AGGREGATING EXPLANATION METHODS FOR NEURAL NETWORKS STABILIZES EXPLANATIONS
SP:6aab83c6e2805838fee314ae400ce5a8bb08f8f3
[ "This submission introduces a new concept, termed insideness, to study semantic segmentation in deep learning era. The authors raise many interesting questions, such as (1) Does deep neural networks (DNN) understand insideness? (2) What representations do DNNs use to address the long-range relationships of insideness? (3) How do architectural choices affect the learning of these representations? This work adopts two popular networks, dilated DNN and ConvLSTM, to implement solutions for insideness problem in isolation. The results can help future research in semantic segmentation for the models to generalize better. ", "This paper investigates the problem of modeling insideness using neural networks. To this end, the authors carefully designed both feedforward and recurrent neural networks, which are, in principle, able to learn the insideness in its global optima. For evaluation, these methods are trained to predict the insideness in synthetically generated Jordan curves and tested under various settings such as generalization to the different configuration of curves or even different types of curves. The experiment results showed that the tested models are able to learn insideness, but it is not generalizable due to the severe overfitting. Authors also demonstrated that injecting step-wise supervision in coloring routine in recurrent networks can help the model to learn generalizable insideness. " ]
Image segmentation aims at grouping pixels that belong to the same object or region. At the heart of image segmentation lies the problem of determining whether a pixel is inside or outside a region, which we denote as the “insideness” problem. Many Deep Neural Network (DNN) variants excel in segmentation benchmarks, but regarding insideness, they have not been well visualized or understood: What representations do DNNs use to address the long-range relationships of insideness? How do architectural choices affect the learning of these representations? In this paper, we take the reductionist approach by analyzing DNNs solving the insideness problem in isolation, ie. determining the inside of closed (Jordan) curves. We demonstrate analytically that state-of-the-art feed-forward and recurrent architectures can implement solutions of the insideness problem for any given curve. Yet, only recurrent networks could learn these general solutions when the training enforced a specific “routine” capable of breaking down the long-range relationships. Our results highlight the need for new training strategies that decompose the learning into appropriate stages, and that lead to the general class of solutions necessary for DNNs to understand insideness.
[]
[ { "authors": [ "Vijay Badrinarayanan", "Alex Kendall", "Roberto Cipolla" ], "title": "SegNet: A deep convolutional encoderdecoder architecture for image segmentation", "venue": null, "year": 2017 }, { "authors": [ "Liang-Chieh Chen", "Alexander Hermans", "George Papandreou", "Florian Schroff", "Peng Wang", "Hartwig Adam" ], "title": "MaskLab: Instance segmentation by refining object detection with semantic and direction features", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Iasonas Kokkinos", "Kevin Murphy", "Alan L Yuille" ], "title": "DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. TPAMI, 2018b", "venue": null, "year": 2018 }, { "authors": [ "Liang-Chieh Chen", "Yukun Zhu", "George Papandreou", "Florian Schroff", "Hartwig Adam" ], "title": "Encoder-decoder with atrous separable convolution for semantic image segmentation", "venue": "CoRR, abs/1802.02611,", "year": 2018 }, { "authors": [ "George Cybenko" ], "title": "Approximation by superpositions of a sigmoidal function", "venue": "Mathematics of control, signals and systems,", "year": 1989 }, { "authors": [ "M. Everingham", "L. Van Gool", "C.K.I. Williams", "J. Winn", "A. Zisserman" ], "title": "The PASCAL Visual Object Classes Challenge 2012", "venue": null, "year": 2012 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS10). Society for Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Eric Haines" ], "title": "Point in polygon strategies", "venue": "Graphics Gems IV,", "year": 1994 }, { "authors": [ "Ronghang Hu", "Piotr Dollár", "Kaiming He", "Trevor Darrell", "Ross Girshick" ], "title": "Learning to segment every thing", "venue": null, "year": 2018 }, { "authors": [ "Hiroaki Iwashita", "Yoshio Nakazawa", "Jun Kawahara", "Takeaki Uno", "Shinichi Minato" ], "title": "Fast computation of the number of paths in a grid graph", "venue": "In The 16th Japan Conference on Discrete and Computational Geometry and Graphs", "year": 2013 }, { "authors": [ "Fahad Lateef", "Yassine Ruichek" ], "title": "Survey on semantic segmentation using deep learning techniques", "venue": null, "year": 2019 }, { "authors": [ "Ke Li", "Bharath Hariharan", "Jitendra Malik" ], "title": "Iterative instance segmentation", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Yi Li", "Haozhi Qi", "Jifeng Dai", "Xiangyang Ji", "Yichen Wei" ], "title": "Fully convolutional instance-aware semantic segmentation", "venue": null, "year": 2017 }, { "authors": [ "Drew Linsley", "Junkyung Kim", "Vijay Veerabadran", "Charles Windolf", "Thomas Serre" ], "title": "Learning long-range spatial dependencies with horizontal gated recurrent units", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Rosanne Liu", "Joel Lehman", "Piero Molino", "Felipe Petroski Such", "Eric Frank", "Alex Sergeev", "Jason Yosinski" ], "title": "An intriguing failing of convolutional neural networks and the coordconv solution", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Shu Liu", "Lu Qi", "Haifang Qin", "Jianping Shi", "Jiaya Jia" ], "title": "Path aggregation network for instance segmentation", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Jonathan Long", "Evan Shelhamer", "Trevor Darrell" ], "title": "Fully convolutional networks for semantic 
segmentation", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Kevis-Kokitsi Maninis", "Sergi Caelles", "Jordi Pont-Tuset", "Luc Van Gool" ], "title": "Deep extreme cut: From extreme points to object segmentation", "venue": null, "year": 2018 }, { "authors": [ "Marvin L. Minsky", "Seymour A. Papert" ], "title": "Perceptrons: An Introduction to Computational Geometry", "venue": null, "year": 1969 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation", "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention,", "year": 2015 }, { "authors": [ "Azriel Rosenfeld" ], "title": "Connectivity in digital pictures", "venue": "J. ACM,", "year": 1970 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein", "Alexander C. Berg", "Li FeiFei" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Shai Shalev-Shwartz", "Ohad Shamir", "Shaked Shammah" ], "title": "Failures of gradient-based deep learning", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Gwangmo Song", "Heesoo Myeong", "Kyoung Mu Lee" ], "title": "SeedNet: Automatic seed generation with deep reinforcement learning for robust interactive segmentation", "venue": null, "year": 2018 }, { "authors": [ "Daniel Soudry", "Elad Hoffer", "Mor Shpigel Nacson", "Suriya Gunasekar", "Nathan Srebro" ], "title": "The implicit bias of gradient descent on separable data", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Shimon Ullman" ], "title": "High-Level Vision: Object Recognition and Visual Cognition", "venue": "MIT Press,", "year": 1996 }, { "authors": [ "Francesco Visin", "Marco Ciccone", "Adriana Romero", "Kyle Kastner", "Kyunghyun Cho", "Yoshua Bengio", "Matteo Matteucci", "Aaron Courville" ], "title": "ReSeg: A recurrent neural network-based model for semantic segmentation", "venue": "In CVPR Workshops,", "year": 2016 }, { "authors": [ "Xiaolin Wu", "Xi Zhang", "Xiao Shu" ], "title": "Cognitive deficit of deep learning in numerosity", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "SHI Xingjian", "Zhourong Chen", "Hao Wang", "Dit-Yan Yeung", "Wai-Kin Wong", "Wang-chun Woo" ], "title": "Convolutional LSTM network: A machine learning approach for precipitation nowcasting", "venue": "In NIPS,", "year": 2015 }, { "authors": [ "Fisher Yu", "Vladlen Koltun" ], "title": "Multi-scale context aggregation by dilated convolutions", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Matthew D Zeiler", "Rob Fergus" ], "title": "Visualizing and understanding convolutional networks", "venue": "In ECCV,", "year": 2014 }, { "authors": [ "Shalev-Shwartz" ], "title": "2017), the input of the network is a string", "venue": null, "year": 2017 }, { "authors": [ "Shalev-Shwartz" ], "title": "2017) after the sum of bits is done, ie. after the first dot", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Image segmentation is necessary for complete image understanding. A key component of image segmentation is to determine whether a pixel is inside or outside a region, ie. the “insideness” problem (Ullman, 1984; 1996). Deep Neural Networks (DNNs) have been tremendously successful in image segmentation benchmarks, but it is not well understood whether DNNs represent insideness or how.\nInsideness has been overlooked in DNNs for segmentation since they have been mainly applied to the modality of “semantic segmentation”, ie. labelling each pixel with its object category (Ronneberger et al., 2015; Yu & Koltun, 2016; Visin et al., 2016; Badrinarayanan et al., 2017; Chen et al., 2018b; Long et al., 2015; Lateef & Ruichek, 2019). In such cases, insideness is not necessary since a solution can rely only on object recognition. Yet, the recent need to solve more sophisticated visual tasks has fueled the development of DNNs with the ability to segment individual object instances, rather than object categories (Li et al., 2016; 2017; Song et al., 2018; Chen et al., 2018a; Hu et al., 2018; Maninis et al., 2018; Liu et al., 2018b; He et al., 2017). In these segmentation modalities, insideness plays a central role, especially when there are few cues besides the boundaries of the objects, e.g. when there is lack of texture and color, and objects are unfamiliar. Thus, insideness is necessary to achieve true generalization in image segmentation.\nIn this paper, we investigate derived and learned insideness-related representations in DNNs for segmentation. We take the reductionist approach by isolating insideness from other components in image segmentation. We analyze the segmentation of closed curves, similar to the methodology in Minsky & Papert’s historic book Perceptrons (Minsky & Papert, 1969). In this way, we distill insideness to a minimum representation by eliminating other components.\nWe analytically demonstrate that two state-of-the-art network architectures, namely, DNNs with dilated convolutions (Yu & Koltun, 2016; Chen et al., 2018b) and convolutional LSTMs (ConvLSTMs) (Xingjian et al., 2015), among other networks, can exactly solve the insideness problem for any given curve with network sizes that are easily implemented in practice. The proofs draw on\nalgorithmic ideas from classical work on visual routines (Ullman, 1984; 1996), namely, the rayintersection method and the coloring method, to derive equivalent neural networks that implement these algorithms. Then, in a series of experiments with synthetically generated closed curves, we evaluate the capabilities of these DNNs to learn the insideness problem. The experiments show that when using standard training strategies, the DNNs do not learn general solutions for insideness, even though these DNNs are sufficiently complex to capture the long-range relationships. The only network that achieves almost full generalization in all tested cases is a recurrent network with a training strategy designed to encourage a specific mechanism for dealing with long-range relationships.\nThese results add to the growing body of works that show that DNNs have problems in learning to solve some elemental visual tasks (Linsley et al., 2018; Liu et al., 2018a; Wu et al., 2018; ShalevShwartz et al., 2017). Shalev-Shwartz et al. 
(2017) introduced several tasks that DNNs can in theory solve, as was shown mathematically, but that the networks were unable to learn, not even for the given dataset, due to difficulties in the optimization with gradient descent. In contrast, the challenges we report for insideness are related to poor generalization rather than optimization, as our experiments show the networks succeed in solving insideness for the given dataset. Linsley et al. (2018) introduced new architectures that better capture the long-range dependencies in images. Here, we show that the training strategy has a big impact on capturing the long-range dependencies. Even though the DNNs we tested had the capacity to capture such long-range dependencies, they do not learn a general solution with the standard training strategies." }, { "heading": "2 THE REDUCTIONIST APPROACH TO INSIDENESS", "text": "We now introduce the paradigm that will serve to analyze insideness-related representations in DNNs. Rather than using natural images, we use synthetic stimuli that solely contain a closed curve. In this way, we do not mix the insideness problem with other components of image segmentation found in natural images, e.g. self-similarity of segments at the level of object categories or parts, representation of the hierarchy of segments, etc. These components will be studied separately in future works, and finally put together to improve and understand how DNNs segment images.

Let $X \in \{0,1\}^{N\times N}$ be an image or a matrix of size $N \times N$ pixels. We use $X_{i,j}$ and $(X)_{i,j}$ interchangeably to denote the value of the image at position $(i,j)$. We use this notation for indexing elements in any of the images and matrices that appear in the rest of the paper. Also, in the figures we use white and black to represent 0 and 1, respectively.

Insideness refers to finding which pixels are inside and which ones are outside a closed curve. We assume without loss of generality that there is only one closed curve in the image and that it is a digital version of a Jordan curve (Kong, 2001), ie. a closed curve without self-crosses or self-touches, containing only horizontal and vertical turns, as shown in Fig. 1. We further assume that the curve does not contain the border of the image. The curve is the set of pixels equal to 1 and is denoted by $F_X = \{(i,j) \mid X_{i,j} = 1\}$. The pixels in $X$ that are not in $F_X$ can be classified into two categories: the inside and the outside of the curve (Kong, 2001). We define the segmentation of $X$ as $S(X) \in \{0,1\}^{N\times N}$, where

$$S(X)_{i,j} = \begin{cases} 0 & \text{if } X_{i,j} \text{ is inside} \\ 1 & \text{if } X_{i,j} \text{ is outside,} \end{cases} \qquad (1)$$

and for the pixels in $F_X$, the value of $S(X)_{i,j}$ can be either 0 or 1. Note that unlike object recognition, the definition of insideness is rigorously and uniquely determined by the input image itself.

The number of all digital Jordan curves is enormous even if the image size is relatively small, e.g. it is more than $10^{47}$ for the size 32×32 (App. A). In addition, insideness is a global problem; whether a pixel is inside or outside depends on the entire image, and not just on some local area around the pixel. Therefore, simple pattern matching, ie. memorization, is impossible in practice." }, { "heading": "3 CAN DNNS FOR SEGMENTATION SOLVE INSIDENESS?", "text": "The universal approximation theorem (Cybenko, 1989) tells us that even a shallow neural network is able to solve the insideness problem. Yet, it could be that the number of units is too large to
In this Section, we introduce two DNN architectures that are able to solve the insideness problem at perfection and they are easily implementable in practice. One architecture is feed-forward with dilated convolutions (Yu & Koltun, 2016; Chen et al., 2018b) and the other is recurrent: a ConvLSTM (Xingjian et al., 2015)." }, { "heading": "3.1 FEED-FORWARD ARCHITECTURE WITH DILATED CONVOLUTIONS", "text": "Dilated convolutions facilitate capturing long-range dependencies which are key for segmentation (Yu & Koltun, 2016; Chen et al., 2018b). To demonstrate that there are architectures with dilated convolutions that can solve the insideness problem, we borrow insights from the ray-intersection method. The ray-intersection method (Ullman, 1984; 1996), also known as the crossings test or the even-odd test (Haines, 1994), is built on the following fact: Any ray that goes from a pixel to the border of the image alternates between inside and outside regions every time it crosses the curve. Therefore, the parity of the total number of such crossings determines the region to which the pixel belongs. If the parity is odd then the pixel is inside, otherwise it is outside (see Fig. 1a).\nThe definition of a crossing should take into account cases like the one depicted in Fig. 1b, in which the ray intersects the curve, but does not change region after the intersection. To address these cases, we enumerate all possible intersections of a ray and a curve, and analyze which cases should count as crossings and which ones should not. Without loss of generality, we consider only horizontal rays. As we can see in Fig. 1c, there are only five cases for how a horizontal ray can intersect the curve. The three cases at the top of Fig. 1c, are crosses because the ray goes from one region to the opposite one, while the two cases at the bottom (like in Fig. 1b) are not considered crosses because the ray remains in the same region.\nLet ~X(i, j) ∈ {0, 1}1×N be a horizontal ray starting from pixel (i, j), which we define as ~X(i, j) = [Xi,j , Xi,j+1, Xi,j+2, . . . , Xi,N , 0, . . . , 0], (2)\nwhere zeros are padded to the vector if the ray goes outside the image, such that ~X(i, j) is always of dimension N . Let ~X(i, j) · ~X(i+1, j) be the inner product of the ray starting from (i, j) and the ray starting from the pixel below, (i+1, j). Note that the contribution to this inner product from the three cases at the top of Fig. 1c (the crossings) is odd, whereas the contribution from the other two intersections is even. Thus, the parity of ~X(i, j) · ~X(i + 1, j) is the same as the parity of the total number of crosses and determines the insideness of the pixel (i, j), ie.\nS(X)i,j = parity ( ~X(i, j) · ~X(i+ 1, j) ) . (3)\nDilated convolutions, also called atrous convolutions, are convolutions with upsampled kernels, which enlarge the receptive fields of the units but preserve the number of parameters of the kernel (Yu & Koltun, 2016; Chen et al., 2018b). In App. B we prove that equation 3 can be easily implemented with a neural network with dilated convolutions. The demonstration is based on implementing the dot product in equation 3 with multiple layers of dilated convolutions, as they facilitate capturing the information across the ray. The number of dilated convolutional layers is equal to the logarithm in base-2 of the image size, N . The dot product can also be implemented with two convolutional\nlayers, but with the drawback of using a long kernel of size 1×N . 
Note that the proof introduces the smallest network we could find that solves the insideness problem with dilated convolutions. Larger networks than the one we introduced can also solve the insideness problem, as the network size can be reduced by setting kernels to zero and layers to implement the identity operation." }, { "heading": "3.2 RECURRENT ARCHITECTURE: CONVOLUTIONAL LSTMS", "text": "The convolutional LSTM (ConvLSTM) (Xingjian et al., 2015) is another architecture designed to handle long-range dependencies. We now show that a ConvLSTM with just one kernel of size 3×3 is sufficient to solve the insideness problem. This is achieved by exploiting the internal back-projection of the LSTM, ie. the flow of information from a posterior layer to an anterior layer.

Our demonstration is inspired by the coloring method (Ullman, 1984; 1996), which is another algorithm for the insideness problem. This algorithm is based on the fact that neighboring pixels not separated by the curve are in the same region. We present a version of this method that will allow us to introduce the network with an LSTM. This method consists of multiple iterations of two steps: (i) expand the outside region from the borders of the image (which by assumption are in the outside region), and (ii) block the expansion when the curve is reached. The blocking operation prevents the outside region from expanding to the inside of the curve, yielding the solution of the insideness problem, as depicted in Fig. 2a. We call one iteration of expanding and blocking the coloring routine.

We use $E^t \in \{0,1\}^{N\times N}$ (expansion) and $B^t \in \{0,1\}^{N\times N}$ (blocking) to represent the result of the two operations after iteration $t$. The coloring routine can then be written as (i) $E^t = \mathrm{Expand}(B^{t-1})$ and (ii) $B^t = \mathrm{Block}(E^t, F_X)$. Let $B^{t-1}$ maintain a value of 1 for all pixels that are known to be outside and 0 for all pixels whose region is not yet determined or that belong to the curve. Thus, we initialize $B^0$ to have value 1 (outside) for all border pixels of the image and 0 for the rest. In step (i), the outside region of $B^{t-1}$ is expanded by also setting its neighboring pixels to 1 (outside), and the result is assigned to $E^t$. Next, in step (ii), the pixels in $E^t$ that were labeled with a 1 (outside) but belong to the curve, $F_X$, are reverted to 0 (inside), and the result is assigned to $B^t$. This algorithm ends when the outside region cannot expand anymore, which happens after at most $N^2$ iterations (the worst case, in which each iteration expands the outside region by only one pixel). Therefore, we have $E^{N^2} = S(X)$.

In App. D we demonstrate that a ConvLSTM with one kernel applied to an image $X$ can implement the coloring algorithm. In the following we provide a summary of the proof. Let $I^t$, $F^t$, $O^t$, $C^t$, and $H^t \in \mathbb{R}^{N\times N}$ be the activations of the input, forget, and output gates, and the cell and hidden states of a ConvLSTM at step $t$, respectively. By analyzing the equations of the ConvLSTM (equation 11 and equation 12 in App. D) we can see that the output layer, $O^t$, back-projects to the hidden layer, $H^t$. In the coloring algorithm, $E^t$ and $B^t$ are related in a similar manner. Thus, we define $O^t = E^t$ (expansion) and $H^t = \frac{1}{2}B^t$ (blocking).
The 12 factor is a technicality due to non-linearities, which is compensated in the output gate and has no relevance in this discussion.\nWe initialize H0 = 12B 0 (recall B0 is 1 for all pixels in the border of the image and 0 for the rest). The output gate expands the hidden representations using one 3 × 3 kernel. To stop the outside region from expanding to the inside of the curve, Ht takes the expansion output Ot and sets the pixels at the curve’s location to 0 (inside). This is the same as the element-wise product of Ot and the “Boolean not” of X , which is denoted as ¬X . Thus, the blocking operation can be implemented as Ht = 12 (O\nt ¬X), and can be achieved if Ct is equal to ¬X . In Fig. 2b we depict these computations.\nIn App. D we show that the weights of a ConvLSTM with just one kernel of size 3 × 3 can be configured to reproduce these computations. A key component is that many of the weights use a value that tends to infinity. This value is denoted as q and it is used to saturate the non-linearities of the ConvLSTM, which are hyperbolic tangents and sigmoids. Note that it is common in practice to have weights that asymptotically tend to infinity, e.g. when using the cross-entropy loss to train a network (Soudry et al., 2018). In practice, we found that saturating non-linear units using q = 100 is enough to solve the insideness problem for all curves in our datasets. Note that only one kernel is sufficient for ConvLSTM to solve the insideness problem, regardless of image size. Furthermore, networks with multiple stacked ConvLSTM and more than one kernel can implement the coloring method by setting unnecessary ConvLSTMs to implement the identity operation (App. D) and the unnecessary kernels to 0.\nFinally, we point out that there are networks with a much lower complexity than LSTMs that can solve the insideness problem, although these networks rarely find applications in practice. In App. E, we show that a convolutional recurrent network as small as having one sigmoidal hidden unit per pixel, with a 3× 3 kernel, can also solve the insideness problem for any given curve." }, { "heading": "4 CAN DNNS FOR SEGMENTATION LEARN INSIDENESS?", "text": "After having identified DNNs that have sufficient complexity to solve the insideness problem, we focus on analyzing whether these solutions can be learnt from examples. We report experiments on synthetically generated Jordan curves. The goal of the network is to learn to predict for each pixel in the image whether it is inside or outside of the curve. In the following, we first describe the experimental setup, then analyze the generalization capabilities of the DNNs trained in standard manner and finally, analyse the advantages of the recurrent networks." }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "Datasets. Given that the number of Jordan curves explodes exponentially with the image size, a procedure that could provide curves without introducing a bias for learning is unknown. We introduce three algorithms to generate different types of Jordan curves. For each dataset, we generate 95K images for training, 5K for validation and 10K for testing. All the datasets are constructed to fulfill the constraints introduced in Sec. 2. In addition, for testing and validation sets, we only use images that are dissimilar to all images from the training set. Two images are considered dissimilar if at least 25% of the pixels of the curve are in different locations. In the following, we briefly introduce each dataset (see App. F for details). Fig. 
{ "heading": "4 CAN DNNS FOR SEGMENTATION LEARN INSIDENESS?", "text": "After having identified DNNs that have sufficient complexity to solve the insideness problem, we focus on analyzing whether these solutions can be learnt from examples. We report experiments on synthetically generated Jordan curves. The goal of the network is to learn to predict, for each pixel in the image, whether it is inside or outside the curve. In the following, we first describe the experimental setup, then analyze the generalization capabilities of the DNNs trained in the standard manner, and finally analyse the advantages of the recurrent networks." }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "Datasets. Given that the number of Jordan curves explodes exponentially with the image size, a procedure that could provide curves without introducing a bias for learning is unknown. We introduce three algorithms to generate different types of Jordan curves. For each dataset, we generate 95K images for training, 5K for validation and 10K for testing. All the datasets are constructed to fulfill the constraints introduced in Sec. 2. In addition, for the testing and validation sets, we only use images that are dissimilar to all images from the training set. Two images are considered dissimilar if at least 25% of the pixels of the curve are in different locations. In the following, we briefly introduce each dataset (see App. F for details); Fig. 3a shows example curves for each dataset.
- Polar Dataset (32×32 pixels): We use polar coordinates to generate this dataset. We randomly select the center of the figure and a random number of vertices that are connected with straight lines. The vertices are determined by their angles and distances with respect to the center of the figure. We generate 5 datasets with different maximum numbers of vertices, namely 4, 9, 14, 19 and 24, and refer to each dataset by this number, e.g. 24-Polar.
- Spiral Dataset (42×42 pixels): The curves are generated by growing intervals of a spiral in random directions from a random starting point. The spiral has a random thickness at the different intervals.
- Digs Dataset (42×42 pixels): We generate a rectangle of random size and then create “digs” of random thicknesses in the rectangle. The digs are created sequentially a random number of times.
Evaluation metrics. From the definition of the problem in Sec. 2, the pixels on the Jordan curve $F_X$ are not evaluated. For the rest of the pixels, we use the following metrics:
- Per pixel accuracy (%): the average of the accuracies for inside and outside, evaluated separately. In this way, the metric weights the two categories equally, as there is an imbalance between inside and outside pixels.
- Per image accuracy (%): a second, more stringent metric. An image is considered correctly classified if all the pixels in the image are correctly classified.
Architectures. We evaluate the network architectures that we analyzed theoretically, as well as other relevant baselines:
- Feed-forward architectures: We use the dilated convolutional DNN (Dilated) introduced in Sec. 3.1. We also evaluate two variants of Dilated: the ray-intersection network (Ray-int.), which uses a receptive field of 1×N instead of the dilated convolutions, and a convolutional network (CNN), which has all the dilation factors set to d = 1. Finally, we also evaluate UNet, a popular architecture with skip connections and de-convolutions (Ronneberger et al., 2015).
- Recurrent architectures: We test the ConvLSTM (1-LSTM) corresponding to the architecture introduced in Sec. 3.2. We initialize the hidden and cell states to 0 (inside) everywhere except the border of the image, which is initialized to 1 (outside), such that the network can learn to color by expanding the outside region. We also evaluate a 2-layer ConvLSTM (2-LSTM), built by stacking one 1-LSTM after another, both with the initialization of the hidden and cell states of the 1-LSTM. Finally, to evaluate the effect of this initialization, we test the 2-LSTM without it (2-LSTM w/o init.), ie. with the hidden and cell states all initialized to 0. We use backpropagation through time by unrolling 50 time steps, for both training and testing.
Learning. The parameters are initialized using Xavier initialization (Glorot & Bengio, 2010). The derived parameters we obtained in the theoretical demonstrations achieve 100% accuracy, but we do not use them in this analysis as they are not learned from examples. The ground truth consists of the insideness of each pixel in the image, as in equation 1. For all experiments, we use the cross-entropy with softmax as the loss function, averaged across pixels. Thus, the networks have two outputs per pixel (note that this does not affect the result that the networks are sufficiently complex to solve insideness, as the second output can be set to a constant threshold of 0.5).
We found that the cross-entropy loss leads to better accuracy than other losses. Moreover, we found that using a weighted loss improves the accuracy of the networks. The weight, which we denote as α, multiplies the loss relative to inside, and (1 − α) multiplies the loss relative to outside. This α is a hyperparameter that we tune, taking the values 0.1, 0.2 or 0.4. We try batch sizes of 32, 256 and 2048 when they fit in the GPUs' memory (12GB), and we try learning rates from 1 to 10−5 (dividing by 10). We train the networks for all the hyperparameters for at least 50 epochs, and until there is no further improvement of the validation-set loss. In the following, we report the testing accuracy for the hyperparameters that achieved the highest per image accuracy on the validation set. We test a large set of hyperparameters (we trained several thousand networks per dataset), which we report in detail in App. G. A sketch of the weighted per-pixel loss is given below." },
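As an illustration, the following numpy sketch (ours, framework-agnostic) shows the weighted per-pixel cross-entropy described above; the actual training code is not reproduced here.

```python
import numpy as np

def weighted_pixel_loss(logits, target, alpha=0.2):
    """Weighted per-pixel cross-entropy with softmax.

    logits: (N, N, 2) per-pixel network outputs (class 0 = inside, 1 = outside).
    target: (N, N) integer insideness ground truth as in equation 1."""
    z = logits - logits.max(axis=-1, keepdims=True)             # numerically stable softmax
    log_p = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    nll = -np.take_along_axis(log_p, target[..., None], axis=-1)[..., 0]
    w = np.where(target == 0, alpha, 1.0 - alpha)               # alpha weights the inside pixels
    return (w * nll).mean()

# Toy usage on a random 32x32 prediction.
rng = np.random.default_rng(0)
print(weighted_pixel_loss(rng.normal(size=(32, 32, 2)), rng.integers(0, 2, (32, 32))))
```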
{ "heading": "4.2 RESULTS", "text": "Intra-dataset evaluation. In Fig. 3b and c we show the per pixel and per image accuracy of the networks trained on the same Polar dataset on which they are tested. Dilated, 2-LSTM and UNet achieve a testing accuracy very close to 100%, but Ray-int. and 1-LSTM perform much worse. The training accuracy of Ray-int. and 1-LSTM is the same as their testing accuracy (Fig. I.6a and b). This indicates an optimization problem similar to the cases reported by Shalev-Shwartz et al. (2017). Note that for the network with ConvLSTMs, we need two LSTMs to achieve an accuracy very close to 100%, even though one LSTM is sufficient to generalize, as we have previously shown. Similarly, both Dilated and Ray-int. are able to generalize, but only Dilated does so. It is an open question why stochastic gradient descent performs so differently on each of these architectures when all of them can generalize in theory. Finally, note that the per pixel accuracy is in most cases very high, and from now on we only report the per image accuracy.

Cross-dataset evaluation. We now evaluate whether the networks that have achieved very high accuracies (Dilated, 2-LSTM and UNet) have learnt the general solution of insideness that we introduced in Sec. 3. To do so, we train on one dataset and test on a different one. In Fig. 3d and e, we observe that Dilated and 2-LSTM do not generalize to Polar datasets with larger numbers of vertices than the Polar dataset on which they were trained. Only if the networks are trained on 24-Polar do they generalize to all the Polar datasets. The same conclusions can be drawn for UNet (Fig. I.6c).

We further test the generalization capabilities of these networks beyond the Polar dataset. In this broader analysis, we also include the CNN and 2-LSTM w/o init., training them on 24-Polar, Spiral, and both 24-Polar and Spiral, and testing them on 24-Polar, Spiral and Digs separately. We can see in Fig. 4 that all tested networks generalize to new curves of the same family as the training set. Yet, the networks do not generalize to curves of other families. In Fig. I.12, we show qualitative examples of failed segmentations produced by networks trained on 24-Polar and Spiral and tested on the Digs dataset.

Furthermore, note that using a more varied training set (“Both”) does not necessarily lead to better cross-dataset accuracy in all cases. For example, for UNet and 2-LSTM w/o init., training on Polar achieves better accuracy on Digs than training on “Both”. Also, for Dilated, training on “Both” harms its accuracy: the accuracy drops by more than 6% on 24-Polar and Spiral. In this case, the training accuracy is close to 100%, which indicates an overfitting problem. We tried to address this problem by regularizing with weight decay, but it did not improve the accuracy (App. H).

Visualization. We now visualize the networks to study the learnt representations. In Fig. I.7, we analyze different units of Dilated trained on 24-Polar and Spiral. We display three units of the same kernel from the second and sixth layers, by showing the nine images in the testing set that most activate each unit across all images (Zeiler & Fergus, 2014). For each image, we indicate the unit location by a gray dot. The visualizations suggest that units of the second layer are tuned to local features (e.g. Unit 19 is tuned to close parallel lines), while those in layer 6 are tuned to global ones (e.g. Unit 27 captures the space left in the center of a spiral). These features seem to capture characteristics of the curves in the training set. This is quite different from the representations that we derived theoretically, which accumulate the number of crossings along a ray from each pixel. This is further supported by visualizing the feature maps in Fig. I.9.

In Fig. I.11, we display the feature maps of 2-LSTM trained on 24-Polar and Spiral at different time steps. We can see that the network expands the borders of the image, which have been initialized to outside. Yet, it also expands the curve, which is not what our analytical solution does (Fig. I.10). This explains why this representation does not generalize to new datasets: it is not possible to know the direction in which to expand the curve without a priori knowledge of the curve." }, { "heading": "4.3 LEARNING THE COLORING ROUTINE IN ISOLATION", "text": "We now analyse a property of the coloring method that is relevant for learning: the coloring routine does not contain long-range relationships, because it only takes into account 3×3 neighbourhoods. The long-range relationships are captured by applying the coloring routine multiple times. The standard training strategy enforces the ground truth after the last step, and hence requires learning the full long-range relationships at once. Yet, if we decompose the learning of insideness into learning the coloring routine in isolation, the problem becomes much simpler, as it only requires learning an operation on a 3×3 neighbourhood. The coloring routine can be learned by enforcing at each step the ground truth produced by the routine, rather than waiting until the last step. The inputs of a step are the image and the hidden state of the previous step. Recall that the coloring routine determines that a pixel is outside if at least one of its neighbors is outside and the pixel itself is not on the curve. All input cases (64) are depicted in Fig. 5a, leaving the inputs irrelevant for the coloring routine at 0. During learning, such irrelevant pixels are randomly assigned a value of 0 or 1.

We have done an architecture search to learn the coloring routine. We could not make any of the previously introduced LSTM networks fit a step of the coloring routine, due to optimization problems. Yet, we found a simple network that succeeded: a convolutional recurrent neural network with a sigmoidal hidden layer and an output layer that backprojects to the hidden layer.
The kernel sizes are 3×3 and 1×1 for the hidden and output layers, respectively, and we use 5 kernels. We call this network the Coloring Net. Observe that this network is sufficiently complex to solve the insideness problem, because it is the network introduced in App. E with an additional layer and connections.

The Coloring Net reaches 0 training error in about 40% of the random initializations of the parameters. After training the Coloring Net on one step, we unroll it and apply it to images of Jordan curves. In Fig. 5b we report the accuracy of the Coloring Net on the 24-Polar, Spiral and Digs datasets, for different amounts of training examples (generated by adding more variations of the irrelevant inputs). We compare the results with the 2-LSTM and Dilated networks previously introduced, trained on 24-Polar. We can see that with fewer than 1000 examples the Coloring Net is able to generalize to any of the datasets, while the other networks do not. This demonstrates the great potential of decomposing the learning to facilitate the emergence of the routine." }, { "heading": "5 CONCLUSIONS AND FUTURE WORK", "text": "We have shown that DNNs with dilated convolutions and convolutional LSTMs that are implementable in practice are sufficiently complex to solve the insideness problem for any given curve. When using the standard training strategies, the units in these networks become specialized to detect characteristics of the curves in the training set and only generalize to curves of the same family as the training set, even when using large numbers of training examples. Yet, we found that when simple recurrent networks are supervised to learn the coloring routine, which does not contain long-range relationships, the general solution to the insideness problem emerged using orders of magnitude less data.

This raises the question of whether these findings can be translated into improvements of segmentation methods for natural images. The following experiment suggests that state-of-the-art methods for image segmentation suffer from not learning general solutions to the insideness problem. We evaluate two off-the-shelf methods, namely DEXTR (Maninis et al., 2018) for instance segmentation and DeepLabv3+ (Chen et al., 2018c) for semantic segmentation, which have been trained on PASCAL VOC 2012 (Everingham et al.) and ImageNet (Russakovsky et al., 2015). These methods fail to determine the insideness of a vast majority of curves, even after fine-tuning on the Both dataset (DeepLabv3+ achieved 36.58% per image accuracy on the Both dataset and 2.18% on Digs; see implementation details and qualitative examples in App. J). Thus, extending these methods with recurrent connections and a training strategy that could capture the coloring routine could help increase their segmentation accuracy, especially under conditions different from those on which they have been trained." }, { "heading": "A NUMBER OF DIGITAL JORDAN CURVES", "text": "We now introduce a procedure to derive a lower bound on the number of Jordan curves in an image. We represent an image of size N × N pixels by a grid graph (square lattice) with N × N vertices. We employ 4-adjacency for black pixels and the corresponding grid points, and 8-adjacency for white pixels and their counterparts. Then, a curve (the set of black pixels) corresponds to a subgraph of the base grid graph (Fig. A.1a).

In this representation, a digital Jordan curve is defined as a subgraph specified by a sequence of vertices
(v0, v1, . . . , vL) satisfying the following conditions (Rosenfeld, 1970; Kong, 2001):

1. L ≥ 4,

2. vr = vs if and only if r = s, and

3. vr is 4-adjacent to vs if and only if r ≡ s ± 1 (mod L + 1).

Note that conditions 1 and 2 define a cycle (Harary, 1969) in a grid graph. Therefore, any digital Jordan curve is a cycle, but not vice versa. Figure A.1b shows examples of cycles that are not digital Jordan curves.

The numbers of all cycles in grid graphs of different sizes have been computed for up to 27 × 27 vertices (Iwashita et al., 2013; A14), and we utilize this result to get lower bounds for the number of digital Jordan curves with the following considerations.

Although a cycle in a grid graph is not necessarily a digital Jordan curve, as shown above, we can obtain a digital Jordan curve in a larger image from any cycle by “upsampling”, as shown in Fig. A.2a. Note that there are other digital Jordan curves than the ones obtained in this manner (therefore we get a lower bound with this technique); see Fig. A.2b for examples.

We also consider the “padding” shown in Fig. A.3 to ensure that a digital Jordan curve does not contain the border of the image (which is what we assume in the main body of the paper).

Taking everything into consideration, we can obtain a lower bound on the number of digital Jordan curves in an N × N image that do not contain border pixels by combining the above-mentioned result (Iwashita et al., 2013; A14) with upsampling and padding. Table 1 shows lower bounds obtained in this way. For example, starting with row 2 of (Karavaev & Iwashita) in (A14) (the number of all cycles in the grid graph with 3 × 3 vertices), we get a lower bound of 13 on the number of digital Jordan curves in 5 × 5 images by considering the upsampling, and the same number as a lower bound on the number of digital Jordan curves that do not contain border pixels in 7 × 7 images by considering the padding.

B INSIDENESS WITH DILATED CONVOLUTIONAL NETWORKS

We first introduce a feed-forward convolutional DNN for which there exist parameters that reproduce equation 3. Then, we show that one of the layers in this network can be better expressed with multiple dilated convolutions.

B.1 CONVOLUTIONAL DNN TO IMPLEMENT THE RAY-INTERSECTION METHOD

The smallest CNN that we found that implements the ray-intersection method has 4 layers. As we show in the following, the first two layers compute $\vec{X}(i,j) \cdot \vec{X}(i+1,j)$, and the last two layers compute the parity. We use $H^{(k)} \in \mathbb{R}^{N\times N}$ to denote the activations of the units at the $k$-th layer, and $[\cdot]_+$ to denote the ReLU activation function. Fig. B.4a depicts the architecture.

First and second layers: inner product. For the sake of simplicity, we only use horizontal rays, but the network that we introduce can be easily adapted to any ray direction. The first layer implements all the products needed for the inner products across all rays in the image, ie. $X_{i,j} \cdot X_{i+1,j}$ for all $(i,j)$. Note that there is exactly one product per pixel, and each product can be reused for multiple rays. For convenience, $H^{(1)}_{i,j}$ represents the product at pixel $(i,j)$, ie. $H^{(1)}_{i,j} = X_{i,j} \cdot X_{i+1,j}$. Since the input consists of binary images, each product can be reformulated as

$$H^{(1)}_{i,j} = \begin{cases} 1 & \text{if } X_{i,j} = X_{i+1,j} = 1 \\ 0 & \text{otherwise.} \end{cases} \qquad (4)$$

This equality can be implemented with a ReLU: $H^{(1)}_{i,j} = [1 \cdot X_{i,j} + 1 \cdot X_{i+1,j} - 1]_+$. Thus, $H^{(1)}$ is a convolutional layer with a 2 × 1 kernel that detects the intersections shown in Fig. 1c.
Table 1: Lower bounds (LBs) of the number of digital Jordan curves in N × N images that do not contain border pixels.\nN: 5 | 7 | 9 | · · · | 31 | 33 | 35 | · · · | 55\nLB: 1 | 13 | 213 | · · · | 1.203 × 10^{47} | 1.157 × 10^{54} | 3.395 × 10^{61} | · · · | 6.71 × 10^{162}\nFigure B.4: The Ray-Intersection Network. (a) The receptive field colored in green has size 1 × N, and it can be substituted by an equivalent network composed of multiple dilated convolutions. (b) The 1 × N kernel of the ray-intersection network is equivalent to multiple dilated convolutional layers. The figure shows a horizontal ray of the activations of several layers, starting from the first layer H^{(1)}. The green arrows indicate the locations in the ray that lead to the desired sum of the activations, ie. the sum of the ray.\nThis layer can also be implemented with a standard convolutional layer with a 3 × 3 kernel, by setting the unnecessary elements of the kernel to 0.\nThe second layer sums over the products of each ray. To do so, we use a kernel of dimension 1 × N with weights equal to 1 and bias equal to 0, ie. H^{(2)}_{i,j} = \mathbf{1}_{1×N} · \vec{H}^{(1)}(i, j) = \vec{X}(i, j) · \vec{X}(i + 1, j), in which \mathbf{1}_{I×J} denotes the matrix of size I × J with all entries equal to 1. Zero-padding is used to keep the kernel size constant across the image.\nNote that the shape of the kernel, 1 × N, is not common in the DNN literature. Here, it is necessary to capture the long-range dependencies of insideness. We show in the next subsection that it can be substituted by multiple layers of dilated convolutions.\nThird and Fourth Layers: Parity. To calculate the parity of each unit’s value in H^{(2)}, we borrow the DNN introduced by Shalev-Shwartz et al. (2017) (namely, Lemma 3 in the supplemental material of the paper). This network obtains the parity of any integer bounded by a constant C. The network has 3C/2 hidden ReLUs and one output unit, which is 1 if the input is even, 0 otherwise (see App. C for details).\nWe apply this parity network to all units in H^{(2)} via convolutions, reproducing the network for each unit. Since a ray through a closed curve in an N × N image cannot have more than N crossings, C is upper bounded by N. Thus, the third layer has 3N/2 kernels, and both the third and output layer are convolutions with a 1 × 1 kernel. At this point we have shown that the DNN explained above is feasible in practice, as the number of kernels is O(N), and it requires no more than 4 convolutional layers with ReLUs. The network has a layer with a kernel of size 1 × N, and next we show that this layer is equivalent to several layers of dilated convolutions of kernel size 3 × 3.\nB.2 DILATED CONVOLUTIONS TO IMPLEMENT THE 1 × N KERNEL\nWe use ∗_d to denote a dilated convolution, in which d is the dilation factor. Let H ∈ R^{N×N} be the units of a layer and let K ∈ R^{k×k} be a kernel of size k × k. A dilated convolution is defined as follows: (H ∗_d K)_{i,j} = \sum_{−⌊k/2⌋ ≤ v,w ≤ ⌊k/2⌋} H_{i+dv, j+dw} · K_{v,w}, in which H_{i+dv, j+dw} is 0 if i + dv or j + dw is smaller than 0 or larger than N, ie. we abuse notation for the zero-padding. Note that in the dilated convolution the kernel is applied in a sparse manner, every d units, rather than in consecutive units. See Yu & Koltun (2016); Chen et al. (2018b) for more details on dilated convolutions.
Recall that the kernel of size 1 × N is set to \mathbf{1}_{1×N} so as to perform the sum of the units corresponding to the ray in the first layer, ie. \sum_{0 ≤ v < N} H^{(1)}_{i,j+v}. We can obtain this long-range sum with a series of dilated convolutions using the following 3 × 3 kernel:\nK = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix}. (5)\nFirst, we apply this K to H^{(1)} through ∗_1 in order to accumulate the first two entries in the ray, which yields: H^{(2)}_{i,j} = (H^{(1)} ∗_1 K)_{i,j} = \sum_{0 ≤ v ≤ 1} H^{(1)}_{i,j+v}. As shown in Fig. B.4b, to accumulate the next entries of the ray, we can apply K with a dilated convolution of dilation factor d = 2, which leads to H^{(3)}_{i,j} = \sum_{0 ≤ v < 4} H^{(1)}_{i,j+v}. To further accumulate more entries of the ray, we need larger dilation factors. It can be seen in Fig. B.4b that these dilation factors are powers of 2, which yield the following expression:\nH^{(l)}_{i,j} = (H^{(l−1)} ∗_{2^{l−2}} K)_{i,j} = \sum_{0 ≤ v < 2^{l−1}} H^{(1)}_{i,j+v}. (6)\nObserve that when we reach layer l = log_2(N) + 1, the units accumulate the entire ray of length N, ie. \sum_{0 ≤ v < N} H^{(1)}_{i,j+v}. Networks with dilation factors d = 2^l are common in practice, e.g. Yu & Koltun (2016) use these exact dilation factors.\nIn summary, DNNs with dilated convolutions can solve the insideness problem and are implementable in practice, since the number of layers and the number of kernels grow logarithmically and linearly with the image size, respectively." }, { "heading": "C PARITY NETWORK BY SHALEV-SHWARTZ ET AL. (2017)", "text": "To calculate the parity of each unit’s value in H^{(2)}, we borrow the DNN introduced by Shalev-Shwartz et al. (namely, Lemma 3 in the supplemental material of Shalev-Shwartz et al. (2017)). This network obtains the parity of any integer bounded by a constant C. The network has 3C/2 hidden units with ReLUs and one output unit, which is 1 if the input is even, 0 otherwise. Since such a network requires an upper bound on the number whose parity is being found, we define C as the maximum number of times that a horizontal ray can cross F_X. This number can be regarded as an index to express the complexity of the shape.\nThere is a subtle difference between the network introduced by Shalev-Shwartz et al. (2017) and the network we use in the paper. In Shalev-Shwartz et al. (2017), the input of the network is a string of bits, but in our case, the sum is done in the previous layer, through the dilated convolutions. Thus, we use the network in Shalev-Shwartz et al. (2017) after the sum of bits is done, ie. after the first dot product in the first layer in Shalev-Shwartz et al. (2017).\nTo calculate the parity, for each even number between 0 and C (0 included), {2i | 0 ≤ i ≤ ⌊C/2⌋}, the network has three hidden units that threshold at (2i − 1/2), 2i and (2i + 1/2), ie. −0.5, 0, 0.5, 1.5, 2, 2.5, 3.5, 4, 4.5, . . . The output layer linearly combines all the hidden units and weights each triplet of units by 2, −4 and 2. Observe that when the input is an odd number, the three units in a triplet are either all below or all above their thresholds. The triplets that are all below the threshold contribute 0 to the output because the units are inactive, and the triplets that are all above the threshold also contribute 0 because the linear combination is 2(2i − 1/2) − 4(2i) + 2(2i + 1/2) = 0. For even numbers, the triplet corresponding to that even number has one unit below, one at, and one above its threshold. The unit that is above the threshold contributes 1 to the output, yielding the parity function.
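To make the two constructions above concrete, the following NumPy sketch re-implements the full ray-intersection computation that the network emulates: the first-layer products, the ray sums (written here with a cumulative sum, which plays the role of the log_2(N) dilated convolutions of App. B.2), and the triplet-ReLU parity network of this appendix. The function names and the use of NumPy are our own illustration, not part of the original implementation.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def parity_net(n, C):
    """Parity network of Shalev-Shwartz et al. (2017): for every even
    number 2i in [0, C], three ReLUs threshold at 2i - 1/2, 2i, 2i + 1/2
    and are combined with weights (2, -4, 2). Returns 1.0 iff n is even."""
    out = 0.0
    for i in range(C // 2 + 1):
        t = 2 * i
        out += 2 * relu(n - (t - 0.5)) - 4 * relu(n - t) + 2 * relu(n - (t + 0.5))
    return out

def insideness_by_rays(X):
    """Ray-intersection method on a binary curve image X (1 = curve pixel).
    For each pixel, a horizontal ray to the right accumulates the products
    X[i, v] * X[i + 1, v] (the crossing detectors of Eq. 4); a pixel off
    the curve is inside iff the number of crossings is odd."""
    N = X.shape[0]
    P = np.zeros_like(X, dtype=float)
    P[:-1, :] = X[:-1, :] * X[1:, :]              # H(1): one product per pixel
    S = np.cumsum(P[:, ::-1], axis=1)[:, ::-1]    # ray sums, i.e. the 1 x N kernel
    parity = np.vectorize(lambda n: parity_net(int(n), N))(S)
    return ((1 - parity) * (1 - X)).astype(int)   # odd crossings, not on the curve
```

On a toy image containing a single rectangular digital Jordan curve, this sketch marks exactly the enclosed pixels, mirroring the behavior the 4-layer CNN is constructed to reproduce.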
}, { "heading": "D COLORING ROUTINE WITH A CONVOLUTIONAL LSTM", "text": "Here we prove that a ConvLSTM can implement the coloring routine, namely, the iteration of the expansion and the blocking operations. A ConvLSTM applied on an image X is defined as the fol-\nlowing set of layers (see Xingjian et al. (2015) for a comprehensive introduction to the ConvLSTM): It = σ ( W xi ∗X +W hi ∗Ht−1 + bi ) , (7)\nF t = σ ( W xf ∗X +W hf ∗Ht−1 + bf ) , (8)\nC̃t = tanh ( W xc ∗X +W hc ∗Ht−1 + bc ) , (9)\nCt = F t Ct−1 + It C̃t, (10) Ot = σ ( W xo ∗X +W ho ∗Ht−1 + bo ) , (11)\nHt = Ot tanh ( Ct ) , (12)\nwhere It, F t, Ct, Ot and Ht ∈ RN×N are the activation of the units of the input, forget, cell state, output and hidden layers at t, respectively. Note that Ct has been decomposed with the help of the auxiliary equation defining C̃t. Note also that each of these layers use a different set of weights that are applied to X and to Ht denoted as W ∈ RN×N with superindices that indicate the connections between layers, e.g. W xi are the weights that connect X to I . Similarly, the biases are denoted as b ∈ RN×N with the superindices indicating the layers. The symbols ∗ and denote the (usual, not dilated) convolution and the element-wise product, respectively. Finally, σ and tanh are the sigmoid and the hyperbolic tangent, which are used as non-linearities.\nWe can see by analyzing equation 11 and equation 12 that the output layer, Ot, back-projects to the hidden layer, Ht. In the coloring algorithm, Et and Bt are related in a similar manner. Thus, we define Ot = Et (expansion) and Ht = 12B\nt (blocking), as depicted in Fig. 2b. The 12 factor will become clear below, and it does not affect the correctness. We initialize H0 = 12B\n0 (recall B0 is 1 for all pixels in the border of the image and 0 for the rest). We now show how to implement the iteration of the expansion and the blocking operations with the ConvLSTM:\n(i) Expansion, Ot: We set the output layer in equation 11 in the following way:\nOt = σ ( 2q1 3×3 ∗Ht−1 − q\n2 1N×N\n) . (13)\nNote that this layer does not use the input, and sets the convolutional layer W ho to use a 3 × 3 kernel that is equal to 2q1 3×3, in which q is a scalar constant, and the bias equal to − q21N×N . For very large values of q, this layer expands the outside region. This can be seen by noticing that for a unit in Ht−1, if at least one neighbor has value 1/2, then Oti,j = limq→∞ σ(q) = 1. Also, when all neighbouring elements of the unit are 0, then no expansion occurs because Oti,j = limq→∞ σ(− q2 ) = 0. (ii) Blocking, Ht: To stop the outside region from expanding to the inside of the curve, Ht takes the expansion output Ot and sets the pixels at the curve’s location to 0 (inside). This is the same as the element-wise product between Ot and the element-wise “Boolean not” of X , which is denoted as ¬X . Thus, the blocking operation can be implemented as Ht = 12 (Ot ¬X). Observe that if Ct = ¬X , this is equal to equation 12 of the LSTM, because tanh(0) = 0 and tanh(1) = 1/2, ie.\nHt = Ot tanh ( Ct ) = 1\n2 Ot ¬X. (14)\nWe can obtain Ct = ¬X , by imposing It = ¬X and Ct = It, as shown in Fig. 2b. To do so, let W xi = −q1 1×1, W hi = 0N×N , and bi = q21N×N , and equation 8 becomes the following expression:\nIt = lim q→∞\nσ ( −q1 1×1 ∗X + q\n2 1N×N\n) . (15)\nObserve that when q tends to infinity, we have Iti,j = limq→∞ σ( q 2 ) = 1 when Xi,j = 0 and Iti,j = limq→∞ σ(− q2 ) = 0 when Xi,j = 1, which means It = ¬X . 
Next, to obtain C^t = I^t, we set W^{xf} = W^{hf} = W^{xc} = W^{hc} = \mathbf{0}_{N×N}, b^f = −q \mathbf{1}_{N×N} and b^c = q \mathbf{1}_{N×N}. This leads to the desired result:\nF^t = lim_{q→∞} σ(−q \mathbf{1}_{N×N}) = \mathbf{0}_{N×N}, (16)\nC̃^t = lim_{q→∞} tanh(q \mathbf{1}_{N×N}) = \mathbf{1}_{N×N},\nC^t = \mathbf{0}_{N×N} ⊙ C^{t−1} + I^t ⊙ \mathbf{1}_{N×N} = I^t = ¬X. (17)\nThus, the coloring method can be implemented with a network as small as one ConvLSTM with one kernel. A network with more than one kernel and multiple stacked ConvLSTMs can also solve the insideness problem for any given curve. The kernels that are not needed to implement the coloring method can just be set to 0, and the ConvLSTMs that are not needed should implement the identity operation, ie. the output layer is equal to the input. To implement the identity operator, equation 11 can be rewritten in the following way:\nO^t = lim_{q→∞} σ(q \mathbf{1}_{1×1} ∗ X − (q/2) \mathbf{1}_{N×N}), (18)\nwhere W^{ho} = \mathbf{0}_{1×1} is to remove the connections with the hidden units, and q is the constant that tends to infinity. Observe that if X_{i,j} = 1, then O^t = lim_{q→∞} σ(q/2) = 1. If X_{i,j} = 0, then O^t = lim_{q→∞} σ(−q/2) = 0. Thus, the ConvLSTM implements the identity operation." }, { "heading": "E COLORING ROUTINE WITH A SIGMOIDAL CONVOLUTIONAL RNN", "text": "There are other recurrent networks simpler than a ConvLSTM that can also implement the coloring algorithm. We introduce here a convolutional recurrent network that uses sigmoids as non-linearities. Since it is a convolutional network, for the sake of simplicity we just describe the operations done to obtain an output pixel in a step. The network has only one hidden layer, which also corresponds to the output of the network. Let {h^t_k}_{k∈N_{i,j}} be the hidden states of the output pixel indexed by (i, j) and its 4-neighbourhood, at step t. Let X_{i,j} be the only relevant input image pixel. A necessary condition is that the outputs of the sigmoid should asymptotically be close to 0 or 1, otherwise the coloring routine would fade after many steps. It is easy to check that\nh^{t+1}_{i,j} = σ(q(\sum_{k∈N_{i,j}} h^t_k − 5X_{i,j} − 1/2))\nimplements the coloring routine, where q is the factor that ensures saturation of the sigmoid.
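The routine is simple enough to state in a few lines of NumPy. The sketch below runs the update in the saturated (large-q) limit, where the sigmoid acts as a hard threshold; the helper name and the early-stopping test are our additions.

```python
import numpy as np

def coloring_routine(X, max_steps=None):
    """Coloring routine of App. E on a binary curve image X (1 = curve).
    h marks pixels known to be outside. In the large-q limit the update
    h <- sigma(q * (sum of h over the pixel and its 4-neighbours
    - 5 * X - 1/2)) turns a pixel on iff it is off the curve and itself
    or one of its neighbours is already marked outside."""
    N = X.shape[0]
    h = np.zeros((N, N))
    h[0, :] = h[-1, :] = h[:, 0] = h[:, -1] = 1.0   # border initialized as outside
    for _ in range(max_steps or N * N):
        s = h.copy()
        s[1:, :] += h[:-1, :]; s[:-1, :] += h[1:, :]
        s[:, 1:] += h[:, :-1]; s[:, :-1] += h[:, 1:]
        h_new = (s - 5.0 * X - 0.5 > 0).astype(float)
        if np.array_equal(h_new, h):                # outside region stopped growing
            break
        h = h_new
    return ((1 - h) * (1 - X)).astype(int)          # inside: not outside, not curve
```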
" }, { "heading": "F DATASET GENERATION", "text": "In Fig. F.5, we show more examples of curves in the datasets. In the following we provide a more detailed description of the algorithms to generate the curves:\n- Polar Dataset (32 × 32 pixels): We use polar coordinates to generate this dataset. We randomly select the center of the figure and a random number of vertices that are connected with straight lines. These lines are constrained to follow the definition of a digital Jordan curve in Sec. 2 in the main paper (and App. A in this supplementary material). The vertices are determined by their angles, which are randomly generated. The distances with respect to the center of the figure are also randomly generated to be between 3 and 14 pixels away from the center.\nWe generate 5 datasets with different maximum numbers of vertices, namely, 4, 9, 14, 19 and 24. We refer to each of these datasets as Polar with a prefix indicating the number of vertices.\n- Spiral Dataset (42 × 42 pixels): The curves in this data set are generated from a random walk. First, a starting position is chosen uniformly at random from [10, 20] × [10, 20]. Then, a segment of the spiral is built in the following way: a random direction (up, down, left, right) and a random length r from 3 to 10 are chosen so that the walk is extended by turning r pixels in the given direction. However, such an extension can only happen if adding a random thickness t ∈ {1, 2, 3, 4} to both sides of this segment does not cause self-intersections. These segments are added repeatedly until there is no space to add a new segment without violating the definition of a Jordan curve.\n- Digs Dataset (42 × 42 pixels): We generate a rectangle of random size and then we create “digs” of random thicknesses in the rectangle. The number of “digs” is a random number between 1 and 10. The digs are created sequentially and they are of random depth (between 1 pixel and the length of the rectangle minus 2 pixels). For each new “dig”, we made sure not to cross previous digs by adjusting the depth of the “dig”." }, { "heading": "G HYPERPARAMETERS", "text": "In this section we report all the tried hyperparameters for all architectures. In all cases, the convolutional layers use zero-padding.\nFigure F.5: Datasets. Images of the curves used to train and test the DNNs. Each row corresponds to a different dataset.\n- Dilated Convolution DNN (Dilated): This network was introduced in Sec. 3.1. We use the same hyperparameters as in Yu & Koltun (2016): 3 × 3 kernels, a number of kernels equal to 2^l × {2, 4, 8}, where l is the number of layers and ranges between 8 and 11, with d = 2^l (the first layer and the last two layers use d = 1). The number of kernels in the layer that calculates the parity can be {5, 10, 20, 40, 80}.\n- Ray-intersection network (Ray-int.): This is the architecture introduced in Sec. 3.1, which uses a receptive field of 1 × N instead of the dilated convolutions. The rest of the hyperparameters are as in Dilated.\n- Convolutional DNN (CNN): To analyze the usefulness of the dilated convolutions, we use the Dilated architecture with all dilation factors d = 1. Also, we try adding more layers than in Dilated, up to 25.\n- UNet: This is a popular architecture with skip connections and de-convolutions. We use similar hyperparameters as in Ronneberger et al. (2015): starting with 64 kernels (3 × 3) at the first layer and doubling this number after each max-pooling; a total of 1 to 3 max-pooling layers in the whole network, placed after sequences of 1 or 2 convolutional layers.\n- Convolutional LSTM (1-LSTM): This is the architecture with just one ConvLSTM, introduced in Sec. 3.2. The number of time steps is fixed to 50. We initialize the hidden and cell states to 0 (inside) everywhere except the border of the image, which is initialized to 1 (outside).\n- 2-layers Convolutional LSTM (2-LSTM): We stack one convolutional LSTM after another. The first LSTM has 64 kernels, and the hidden and cell states are initialized as in the 1-LSTM.\n- 2-layers Convolutional LSTM without initialization (2-LSTM w/o init.): This is the same as the 2-LSTM architecture except that the hidden and cell states are initialized to 0 (outside)." }, { "heading": "H ADDITIONAL EXPERIMENTS OF FEED-FORWARD NETWORKS", "text": "In Fig. 4, we have observed that Dilated trained on both the 24-Polar and Spiral datasets obtains a test accuracy of less than 95% on these datasets while the accuracy on the training set is very close to 100%. We added weight decay in all the layers in order to regularize the network. We tried values between 10^{-5} and 1, scaling by a factor of 10.
In all these experiments we have observed overfitting, except for a weight decay of 1, for which the training never converged.\nAlso, note that the CNN does not have this overfitting problem. Yet, the number of layers needed is 25, which is more than double that of Dilated, which has 9 layers. We added more layers to Dilated but the accuracy did not improve." }, { "heading": "I ADDITIONAL FIGURES AND VISUALIZATIONS", "text": "Figure I.6: Training Accuracy in the Polar Dataset. Intra-dataset evaluation using (a) per pixel accuracy and (b) per image accuracy on the training set, which are very similar to the test accuracy reported in Fig. 3b and c. (c) Intra-dataset evaluation of UNet.\nFigure I.7: Visualization of the Units Learnt by Dilation. Each block shows the 9 images that produce the maximum activation of a unit in a convolutional layer across the test set. The gray dot indicates the location of the unit. Fig. I.8 shows more examples.\nFigure I.8: More examples of Visualization of the Units Learnt by Dilation.\nFigure I.9: Visualization of the Feature Maps of the DNN with Dilated Convolutions. We display several feature maps at different layers (each row in each block is a different layer). We can see that the first layers detect lower-level features such as edges with specific orientations, while the later layers capture long-range dependencies of the curve relative to insideness.\nFigure I.10: Visualization of the Convolutional LSTM with the Mathematically Derived Parameters. We can see that only the border of the image (outside) is propagated, and not the curve, as in the learnt solution.\nFigure I.11: Activation Maps of the Learnt Representations by 2-LSTM. Each row corresponds to a different layer and each column to a different time step.\nFigure I.12: Qualitative Examples from the Digs dataset. Networks trained on the 24-Polar and Spiral datasets fail to segment curves in the Digs dataset." }, { "heading": "J NETWORKS PRE-TRAINED ON NATURAL IMAGES", "text": "We chose two state-of-the-art networks for instance and semantic segmentation, DEXTR (Maninis et al., 2018) and DeepLabv3+ (Chen et al., 2018c), to investigate their ability to solve the insideness problem.\nDEXTR. Deep Extreme Cut (DEXTR) is a neural network used for interactive instance segmentation. We use the model pre-trained on PASCAL VOC 2012 (Everingham et al.)
and show some of the qualitative results in Fig. J.13.\nDeepLabv3+. This architecture extends DeepLabv3 (Chen et al., 2018b) by utilizing it as an encoder network and adding a decoder network to refine segmentation boundaries. The encoder employs dilated convolution and the Atrous Spatial Pyramid Pooling module for feature extraction. We use DeepLabv3+ with an Xception backbone pretrained on PASCAL VOC 2012, and fine-tune its last layer on the Polar and Spiral datasets. The ratio of the input image spatial resolution to that of the encoder output is referred to as the output stride and varies according to the dilation rates. We use output strides of 8 and 16 as suggested in the paper; loss weights (α) of 0.1, 0.2 and 0.4; and initial learning rates from 0.1 to 10^{-5} (dividing by 10). We train the network on the Polar and Spiral datasets until there is no improvement of the accuracy on the validation set, and we then reduce the learning rate by a factor of 10 and stop at the next plateau of the validation set accuracy.\nFigure J.13: Qualitative Results with DEXTR on the Polar Dataset. We use the publicly available pre-trained DEXTR model (Maninis et al., 2018). DEXTR uses 4 points marked by the user (indicated with crosses). We report the best found points, two examples of them per image.\nFigure J.14: Results of DeepLabv3+ on the Polar, Spiral and Digs Datasets. The network is fine-tuned on Polar and Spiral. The results show that the network predicts most of the pixels well, except at the borders. For the cross-dataset evaluation on the Digs dataset, the network is not able to generalize." } ]
2,019
null
SP:92370bd193d2a808b6803cb3a22ab3d690f1e13d
[ "In this paper, the authors investigate non-autoregressive translation (NAT). They specifically look into how using different auto-regressive translation (AT) models for knowledge distillation impacts the quality of NAT models. The paper is well organised and the experiments are sound and interesting, shedding light on an aspect of NAT models that's been rather dismissed as secondary until now: the impact of varying AT knowledge distillation.", "The paper analyses recent distillation techniques for non-autoregressive machine translation models (NAT). These models use a autoregressive teacher (AT), which typically perform better. However, AT models can not be parallelized that easily such as the NAT models. The distillation has the effect of removing modes from the dataset which helps the NAT models as they suffer from the averaging effect of maximum likelihood solutions. The authors analyze why such distillation is needed and what the effect of the complexity of the training set is and further propose 3 methods to adjust the complexity of the teacher to the complexity of the NAT model." ]
Non-autoregressive machine translation (NAT) systems predict a sequence of output tokens in parallel, achieving substantial improvements in generation speed compared to autoregressive models. Existing NAT models usually rely on the technique of knowledge distillation, which creates the training data from a pretrained autoregressive model for better performance. Knowledge distillation is empirically useful, leading to large gains in accuracy for NAT models, but the reason for this success has, as of yet, been unclear. In this paper, we first design systematic experiments to investigate why knowledge distillation is crucial in NAT training. We find that knowledge distillation can reduce the complexity of data sets and help NAT to model the variations in the output data. Furthermore, a strong correlation is observed between the capacity of an NAT model and the complexity of the distilled data that provides the best translation quality. Based on these findings, we further propose several approaches that can alter the complexity of data sets to improve the performance of NAT models. We achieve state-of-the-art performance for NAT-based models, and close the gap with the autoregressive baseline on the WMT14 En-De benchmark.
[ { "affiliations": [], "name": "Chunting Zhou" }, { "affiliations": [], "name": "Jiatao Gu" }, { "affiliations": [], "name": "Graham Neubig" } ]
[ { "authors": [ "Nader Akoury", "Kalpesh Krishna", "Mohit Iyyer" ], "title": "Syntactically supervised transformers for faster neural machine translation", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Satanjeev Banerjee", "Alon Lavie" ], "title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization", "venue": null, "year": 2005 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "venue": "CoRR, abs/1810.04805,", "year": 2018 }, { "authors": [ "Chris Dyer", "Victor Chahuneau", "Noah Smith" ], "title": "A simple, fast, and effective reparameterization of IBM Model 2", "venue": "In NAACL,", "year": 2013 }, { "authors": [ "Tommaso Furlanello", "Zachary Lipton", "Michael Tschannen", "Laurent Itti", "Anima Anandkumar" ], "title": "Born-again neural networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jonas Gehring", "Michael Auli", "David Grangier", "Denis Yarats", "Yann N Dauphin" ], "title": "Convolutional sequence to sequence learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Marjan Ghazvininejad", "Omer Levy", "Yinhan Liu", "Luke Zettlemoyer" ], "title": "Constant-time machine translation with conditional masked language models", "venue": "arXiv preprint arXiv:1904.09324,", "year": 2019 }, { "authors": [ "Jiatao Gu", "James Bradbury", "Caiming Xiong", "Victor O.K. Li", "Richard Socher" ], "title": "Non-autoregressive neural machine translation", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Hideki Isozaki", "Tsutomu Hirao", "Kevin Duh", "Katsuhito Sudoh", "Hajime Tsukada" ], "title": "Automatic evaluation of translation quality for distant language pairs", "venue": "In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing,", "year": 2010 }, { "authors": [ "Melvin Johnson", "Mike Schuster", "Quoc V. 
Le", "Maxim Krikun", "Yonghui Wu", "Zhifeng Chen", "Nikhil Thorat", "Fernanda Viégas", "Martin Wattenberg", "Greg Corrado", "Macduff Hughes", "Jeffrey Dean" ], "title": "Google’s multilingual neural machine translation system: Enabling zero-shot translation", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Lukasz Kaiser", "Samy Bengio", "Aurko Roy", "Ashish Vaswani", "Niki Parmar", "Jakob Uszkoreit", "Noam Shazeer" ], "title": "Fast decoding in sequence models using discrete latent variables", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yoon Kim", "Alexander M Rush" ], "title": "Sequence-level knowledge distillation", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jason Lee", "Elman Mansimov", "Kyunghyun Cho" ], "title": "Deterministic non-autoregressive neural sequence modeling by iterative refinement", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Percy Liang", "Hal Daumé III", "Dan Klein" ], "title": "Structure compilation: trading structure for features", "venue": "In ICML, pp", "year": 2008 }, { "authors": [ "Xuezhe Ma", "Pengcheng Yin", "Jingzhou Liu", "Graham Neubig", "Eduard Hovy" ], "title": "Softmax qdistribution estimation for structured prediction: A theoretical interpretation for raml", "venue": "arXiv preprint arXiv:1705.07136,", "year": 2017 }, { "authors": [ "Xuezhe Ma", "Chunting Zhou", "Xian Li", "Graham Neubig", "Eduard Hovy" ], "title": "Flowseq: Nonautoregressive conditional sequence generation with generative flow", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing,", "year": 2019 }, { "authors": [ "Aaron Oord", "Yazhe Li", "Igor Babuschkin", "Karen Simonyan", "Oriol Vinyals", "Koray Kavukcuoglu", "George Driessche", "Edward Lockhart", "Luis Cobo", "Florian Stimberg" ], "title": "Parallel wavenet: Fast high-fidelity speech synthesis", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Myle Ott", "Michael Auli", "David Grangier", "Marc’Aurelio Ranzato" ], "title": "Analyzing uncertainty in neural machine translation", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Myle Ott", "Sergey Edunov", "Alexei Baevski", "Angela Fan", "Sam Gross", "Nathan Ng", "David Grangier", "Michael Auli" ], "title": "fairseq: A fast, extensible toolkit for sequence modeling", "venue": "In Proceedings of NAACL-HLT 2019: Demonstrations,", "year": 2019 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Xi Wu", "Somesh Jha", "Ananthram Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "venue": "In 2016 IEEE Symposium on Security and Privacy (SP),", "year": 2016 }, { "authors": [ "Maja Popović" ], "title": "chrf: character n-gram f-score for automatic mt evaluation", "venue": "In Proceedings of the Tenth Workshop on Statistical Machine 
Translation,", "year": 2015 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715–1725, Berlin, Germany, August 2016", "venue": "Association for Computational Linguistics. doi: 10.18653/v1/P16-1162. URL https://www.aclweb. org/anthology/P16-1162", "year": 2016 }, { "authors": [ "Chenze Shao", "Yang Feng", "Jinchao Zhang", "Fandong Meng", "Xilin Chen", "Jie Zhou" ], "title": "Retrieving sequential information for non-autoregressive neural machine translation", "venue": null, "year": 1906 }, { "authors": [ "Tianxiao Shen", "Myle Ott", "Michael Auli" ], "title": "Mixture models for diverse machine translation: Tricks of the trade", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Raphael Shu", "Jason Lee", "Hideki Nakayama", "Kyunghyun Cho" ], "title": "Latent-variable nonautoregressive neural machine translation with deterministic inference using a delta posterior", "venue": null, "year": 1908 }, { "authors": [ "Matthew Snover", "Bonnie Dorr", "Richard Schwartz", "Linnea Micciulla", "John Makhoul" ], "title": "A study of translation edit rate with targeted human annotation", "venue": "Proceedings of Association for Machine Translation in the Americas,", "year": 2006 }, { "authors": [ "Milos Stanojevic", "Khalil Simaan. Beer" ], "title": "Better evaluation as ranking", "venue": "In Proceedings of the Ninth Workshop on Statistical Machine Translation,", "year": 2014 }, { "authors": [ "Mitchell Stern", "Noam Shazeer", "Jakob Uszkoreit" ], "title": "Blockwise parallel decoding for deep autoregressive models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Mitchell Stern", "William Chan", "Jamie Kiros", "Jakob Uszkoreit" ], "title": "Insertion transformer: Flexible sequence generation via insertion operations", "venue": "arXiv preprint arXiv:1902.03249,", "year": 2019 }, { "authors": [ "David Talbot", "Hideto Kazawa", "Hiroshi Ichikawa", "Jason Katz-Brown", "Masakazu Seno", "Franz J Och" ], "title": "A lightweight evaluation framework for machine translation reordering", "venue": "In Proceedings of the Sixth Workshop on Statistical Machine Translation,", "year": 2011 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Chunqi Wang", "Ji Zhang", "Haiqing Chen" ], "title": "Semi-autoregressive neural machine translation", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Yiren Wang", "Fei Tian", "Di He", "Tao Qin", "ChengXiang Zhai", "Tie-Yan Liu" ], "title": "Non-autoregressive machine translation with auxiliary regularization", "venue": null, "year": 1902 }, { "authors": [ "Bingzhen Wei", "Mingxuan Wang", "Hao Zhou", "Junyang Lin", "Xu Sun" ], "title": "Imitation learning for nonautoregressive neural machine translation", "venue": null, "year": 1906 }, { "authors": [ "Vaswani" ], "title": "2017), we list the basic parameters of all the AT model we used: Models tiny small base big dmodel", "venue": "dhidden", "year": 2048 }, { "authors": [ "• iNAT (Lee" ], "title": "2018): 
following the original paper, we train the iNAT model jointly with 4 iterations of refinement during training. For each iteration, the model has the 50% probability to learn as a denoising autoencoder, and the rest of the probability to learn from the model’s own prediction", "venue": null, "year": 2018 }, { "authors": [ "• InsT (Stern" ], "title": "2019): in this work, we only consider training the Insertion Transformer (InsT) using the slot-loss based on the uniform loss function (Stern et al., 2019)", "venue": null, "year": 2019 }, { "authors": [ "• MaskT (Ghazvininejad" ], "title": "2019): following the original paper, we train the model as a typical masked language model where the ratio of masked tokens is sampled from 0 ∼ 100%. 6https://github.com/pytorch/fairseq/blob/master/examples/translation", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Traditional neural machine translation (NMT) systems (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017) generate sequences in an autoregressive fashion; each target token is predicted step-by-step by conditioning on the previous generated tokens in a monotonic (e.g. left-to-right) order. While such autoregressive translation (AT) models have proven successful, the sequential dependence of decisions precludes taking full advantage of parallelism afforded by modern hardware (e.g. GPUs) at inference time. In contrast, non-autoregressive translation (NAT) models (Gu et al., 2018; Lee et al., 2018) predict the whole sequence or multi-token chunks of the sequence simultaneously, alleviating this problem by trading the model’s capacity for decoding efficiency. Such a non-autoregressive factorization assumes that the output tokens are independent from each other. However, this assumption obviously does not hold in reality and as a result NAT models generally perform worse than standard AT models.\nOne key ingredient in the training recipe for NAT models that is used in almost all existing works (Gu et al. (2018); Lee et al. (2018); Stern et al. (2019), inter alia) is creation of training data through knowledge distillation (Hinton et al., 2015). More precisely, sequence-level knowledge distillation (Kim & Rush, 2016) – a special variant of the original approach – is applied during NAT model training by replacing the target side of training samples with the outputs from a pre-trained AT model trained on the same corpus with a roughly equal number of parameters. It is usually assumed (Gu et al., 2018) that knowledge distillation’s reduction of the “modes” (alternative translations for an input) in the training data is the key reason why distillation benefits NAT training. However, this intuition has not been rigorously tested, leading to three important open questions:\n∗Equal Contribution. Most work was done during Chunting’s internship at FAIR. 1Code is released at https://github.com/pytorch/fairseq/tree/master/examples/\nnonautoregressive_translation.\n• Exactly how does distillation reduce the “modes”, and how we could we measure this reduction quantitatively? Why does this reduction consistently improve NAT models?\n• What is the relationship between the NAT model (student) and the AT model (teacher)? Are different varieties of distilled data better for different NAT models?\n• Due to distillation, the performance of NAT models is largely bounded by the choice of AT teacher. Is there a way to further close the performance gap with standard AT models?\nIn this paper, we aim to answer the three questions above, improving understanding of knowledge distillation through empirical analysis over a variety of AT and NAT models. Specifically, our contributions are as follows:\n• We first visualize explicitly on a synthetic dataset how modes are reduced by distillation (§3.1). Inspired by the synthetic experiments, we further propose metrics for measuring complexity and faithfulness for a given training set. Specifically, our metrics are the conditional entropy and KL-divergence of word translation based on an external alignment tool, and we show that these metrics are correlated with NAT model performance (§3.2).\n• We conduct a systematic analysis (§4) over four AT teacher models and six NAT student models with various architectures on the standard WMT14 English-German translation benchmark. 
These experiments find a strong correlation between the capacity of an NAT model and the optimal dataset complexity that results in the best translation quality.\n• Inspired by these observations, we propose approaches to further adjust the complexity of the distilled data in order to match the model’s capacity (§5). We also show that we can achieve state-of-the-art performance for NAT models and largely match the performance of the AT model." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 NON-AUTOREGRESSIVE NEURAL MACHINE TRANSLATION", "text": "In order to model the joint probability of the output sequence y, NMT models usually generate each output token conditioned on the previously generated ones, p(y|x) = \prod_{t=1}^{T} p(y_t|y_{<t}, x). This is known as the autoregressive factorization. To generate a translation from this model, one could predict one token at a time from left to right and greedily take the arg max over each output probability distribution, or use beam search to consider a fixed number of hypotheses. In this work, we study non-autoregressive translation (NAT), a special subset of NMT models with an additional restriction (the zeroth-order Markov assumption) upon the output predictions or a subset thereof. The simplest formulation of an NAT model independently factors the conditional distribution: p(y|x) = \prod_{t=1}^{T} p(y_t|x).\nStandard NAT models (Gu et al., 2018) adopt an architecture similar to the Transformer (Vaswani et al., 2017) and make non-autoregressive predictions for the entire sequence with one forward pass of the decoder. However, because multiple translations are possible for a single input sentence (the so-called multi-modality problem; Gu et al. (2018)), vanilla NAT models can fail to capture the dependencies between output tokens. As a result, they tend to make egregious mistakes such as outputting tokens repeatedly. To improve the model’s ability to handle multi-modality, recent works have incorporated approaches including (1) relaxing the fully non-autoregressive restriction and adopting K decoding passes (instead of just one) to iteratively refine the generated outputs (Lee et al., 2018; Ghazvininejad et al., 2019; Wang et al., 2018; Stern et al., 2018; 2019; Gu et al., 2019); (2) using latent variables (Kaiser et al., 2018; Ma et al., 2019; Shu et al., 2019) or structured information such as syntax trees (Akoury et al., 2019) to capture translation variation; and (3) training NAT models with objectives other than maximum likelihood (Wang et al., 2019; Wei et al., 2019; Shao et al., 2019), which ameliorates the effects of multi-modality. However, to achieve competitive performance with the autoregressive model, almost all existing NAT models rely on training using data distilled from a pre-trained AT model instead of the real parallel training set, as described below." }, { "heading": "2.2 SEQUENCE-LEVEL KNOWLEDGE DISTILLATION", "text": "Knowledge distillation (Liang et al., 2008; Hinton et al., 2015) was originally proposed for training a weaker student classifier on the targets predicted from a stronger teacher model.
Prior work has shown the effectiveness of adopting knowledge distillation in adversarial defense (Papernot et al., 2016), neural network compression (Howard et al., 2017), and fast inference for speech synthesis (Oord et al., 2018).\nIn the context of sequence generation, Kim & Rush (2016) extend knowledge distillation to the sentence level using “hard targets” from a pretrained large teacher model to train a small sequence generation model. More precisely, the teacher distribution q(t|x) is approximated by its mode: q(t|x) ≈ 1{t = arg maxt∈T q(t|x)} with the following objectives:\nLseq-KD = −Ex∼data ∑ t∈T q(t|x) log p(t|x) ≈ −Ex∼data,ŷ=argmax t∈T q(t|x) [log p(t = ŷ|x)] , (1)\nwhere t ∈ T is the space of possible target sequences. This can also be seen as a special case of standard distillation over the sentence space when the temperature τ approaches 0, which is equivalent to taking the arg max over all feasible translations. While the “hard target” ŷ is the most likely translation predicted by the teacher, in practice we use beam search as an approximation. As mentioned earlier, almost all the existing literature trains NAT models using sequence-level knowledge distillation from a pre-trained AT model to achieve competitive performance. Particularly, it is common to train the teacher model as a standard autoregressive Transformer (Vaswani et al., 2017) with a roughly equal number of trainable parameters as the desired NAT model on the real data. Next, we will first study how this knowledge distillation process affects the behavior of NAT models." }, { "heading": "3 HOW DOES DISTILLATION IMPROVE NAT?", "text": "In this section, we start from an introductory example to illustrate how NAT models fail to capture the multi-modality of data. Then we propose a metric to assess the multi-modality of a data set and use it to test our hypothesis about how knowledge distillation affects NAT models." }, { "heading": "3.1 SYNTHETIC EXPERIMENT FOR MULTI-MODALITY", "text": "Dataset. We start by investigating NAT’s difficulties in modeling multi-modality in output data using a synthetic setup where we explicitly include multiple modes in the training data. More specifically, we utilize three language pairs – English-German (En-De), English-French (En-Fr), and English-Spanish (En-Es) – from the Europarl parallel corpus.2 We extract sentences that have aligned sentences for all languages, and create a multi-target En-De/Es/Fr corpus. In this case every English input sentence always corresponds to target sentences in three different languages, which forms three explicit output modes. Notably, this is similar to the one-to-many translation setting in Johnson et al. (2017) but in our case we do not have an explicit signal (e.g. target language tag) to tell the NMT model which target language to translate to.\nModels. We train both the AT and NAT models on this concatenated data set, then compare the distributions of translations with each other. We use the standard Transformer(base) model (Vaswani et al., 2017) as the AT model, and a simplified version of Gu et al. (2018) as the NAT model where the decoder’s inputs are monotonically copied from the encoder embeddings and a length predictor is learned to predict the target sentence length. Both models are trained for 300, 000 steps using maximum likelihood. After training, we use both models to translate the English sentences in the validation and test sets.\nVisualization of AT Outputs. 
Visualization of AT Outputs. The synthetic setup enables us to better understand and visualize the modes in the outputs more easily. First, we visualize the outputs from the AT model. For every translated sentence, we visualize the estimated probability distribution of language classes as a point in Fig. 1 (a). This probability is calculated as the average of the posterior probabilities of the tokens, and it is estimated based on Bayes’ law:\np(l_i|y) ≈ (1/T) \sum_{t=1}^{T} p(l_i|y_t) = (1/T) \sum_{t=1}^{T} \frac{p(y_t|l_i) p(l_i)}{\sum_k p(y_t|l_k) p(l_k)}, (2)\n2https://www.statmt.org/europarl/\nwhere l_i denotes the language class i, and p(y_t|l_i) is the token frequency of y_t in language l_i. We assume p(l_i) follows a uniform distribution. As shown in Fig. 1 (a), points of the AT outputs are clustered closely to each vertex of the simplex, indicating that the AT model prefers to generate the whole sequence in one language. This phenomenon verifies our assumption that decoding with the AT model (distillation) is essentially selecting “modes” over the real data.\nVisualization of NAT Outputs. We visualize outputs for the NAT model trained on the same data in Fig. 1 (b). In contrast to the AT results, the NAT points are scattered broadly inside the simplex, indicating that the NAT model fails to capture the mode of language types. Instead, it predicts tokens mixed with multiple languages, which corroborates our hypothesis that the NAT model has trouble consistently selecting a single mode when multiple modes exist.\nNext, we create two datasets that have fewer modes than the original dataset. First, we randomly select a single target sentence from one of the three languages for each source sentence. Second, we perform distillation, decoding from the AT model trained on the combined training set. As noted in the AT results, distillation will also roughly be selecting a language mode, but we conjecture that this selection may be more systematic, selecting a particular language for a particular type of training sentence. As shown in Fig. 1 (c) and (d), NAT models trained on both of these datasets are more likely to choose one mode (language) when generating translations, showing that training with reduced modes is essential for the NAT model. Furthermore, points in Fig. 1 (d) are clustered more clearly than those in (c), indicating that modes selected by AT models are indeed likely more systematic and easier to capture than those generated by randomly assigning a language for each sentence." }, { "heading": "3.2 QUANTITATIVE MEASURES FOR PARALLEL DATA", "text": "To better study why distillation is crucial for NAT models, in this section we propose quantitative measures for analyzing the complexity and faithfulness of parallel data, two properties that we hypothesize are important for NAT training.\nMeasure of Complexity. Inspired by the observations in the synthetic experiments, we propose to use a measure of translation uncertainty, specifically operationalized as conditional entropy, as the measurement of complexity C(d) for any given dataset d = {(x_1, y_1), ..., (x_N, y_N)}, where (x, y) is a sentence-pair instantiation of (X, Y) and X ∈ \mathcal{X}, Y ∈ \mathcal{Y}:\nH(Y|X = x) = −\sum_{y∈\mathcal{Y}} p(y|x) \log p(y|x)\n≈ −\sum_{y∈\mathcal{Y}} (\prod_{t=1}^{T_y} p(y_t|x)) (\sum_{t=1}^{T_y} \log p(y_t|x)) (asm. 1: conditional independence)\n≈ −\sum_{t=1}^{T_y} \sum_{y_t∈A(x)} p(y_t|Align(y_t)) \log p(y_t|Align(y_t)) (asm. 2: alignment model)\n= \sum_{t=1}^{T_x} H(y|x = x_t), (3)\nwhere we use x and y to denote a word in the source and target vocabulary respectively, and T_x and T_y denote the lengths of the source and target sentences.
To make the computation tractable, we make two additional assumptions on the conditional distribution p(y|x):\n• Assumption 1: We assume the target tokens are independent given the source sentence. Then the conditional entropy of a sentence can be converted into the sum of entropies of target words conditioned on the source sentence x.\n• Assumption 2: We assume the distribution of p(y_t|x) follows an alignment model (Dyer et al., 2013)3 where y_t is generated from the word alignment distribution p(y_t|Align(y_t)). This makes it possible to simplify the conditional entropy to the sum of entropies of target words conditioned on the aligned source words, denoted H(y|x = x_t).\nThe corpus-level complexity C(d) is then calculated by adding up the conditional entropy H(Y|X = x) of all sentences. To prevent C(d) from being dominated by frequent words, we calculate C(d) by averaging the entropy of target words conditioned on a source word, denoted C(d) = (1/|V_x|) \sum_{x∈V_x} H(y|x).\nTo illustrate that the proposed metric is a reasonable measure of the complexity of a parallel corpus, in Tab. 1 we compute C(d) for parallel data from different language pairs, the concatenated data set, and the data distilled from the AT model described in §3.1. We observe that the conditional entropy of the distilled data is much smaller than that of the concatenated or randomly selected data mentioned above. Additionally, we find that the conditional entropies of En-Es and En-Fr are similar but that of En-De is relatively larger, which can also explain why the student NAT model prefers to predict the modes of Es or Fr more often than De, as shown in Fig. 1 (d).\nMeasure of Faithfulness. C(d) reflects the level of multi-modality of a parallel corpus, and we have shown that a simpler data set is favorable to an NAT model. However, it is not fair to assess a data set only by its complexity; we can trivially construct a simple data set with no variations in the output, which obviously won’t be useful for training. The other important measurement of a data set is its faithfulness to the real data distribution. To measure the faithfulness of a parallel corpus d, we use the KL-divergence of the alignment distributions between the real parallel data set r and an altered parallel data set d, denoted F(d):\nF(d) = (1/|V_x|) \sum_{x∈V_x} \sum_{y∈V_y} p_r(y|x) \log \frac{p_r(y|x)}{p_d(y|x)}. (4)
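For concreteness, the two metrics can be computed from word-aligned data as in the sketch below, assuming the alignment links (e.g. produced with the alignment model of Dyer et al. (2013)) have already been extracted into (source word, target word) pairs; the epsilon smoothing of unseen links is our assumption and is not specified above.

```python
import math
from collections import Counter, defaultdict

def cond_probs(aligned_pairs):
    """p(y|x) estimated from a list of aligned (source word, target word) links."""
    counts = defaultdict(Counter)
    for x, y in aligned_pairs:
        counts[x][y] += 1
    probs = {}
    for x, cy in counts.items():
        tot = sum(cy.values())
        probs[x] = {y: c / tot for y, c in cy.items()}
    return probs

def complexity(aligned_pairs):
    """C(d): word-level conditional entropy averaged over the source vocabulary (Eq. 3)."""
    p = cond_probs(aligned_pairs)
    ent = [-sum(q * math.log(q) for q in py.values()) for py in p.values()]
    return sum(ent) / len(ent)

def faithfulness(real_pairs, altered_pairs, eps=1e-10):
    """F(d): KL divergence of the alignment distributions, averaged over
    the source vocabulary (Eq. 4)."""
    pr, pd = cond_probs(real_pairs), cond_probs(altered_pairs)
    kl = 0.0
    for x, py in pr.items():
        qx = pd.get(x, {})
        kl += sum(q * math.log(q / max(qx.get(y, 0.0), eps)) for y, q in py.items())
    return kl / len(pr)
```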
Details of the model architectures can be found in Appendix A.\nAll the models are trained using the Adam optimizer (Kingma & Ba, 2014) with the maximum number of steps set to 300, 000. After training, we use the resulting AT models to decode the whole training set with beam size 5 and replace the real target sentences to create a new parallel corpus.\nNAT Models. We consider the following NAT models, from vanilla to state-of-the-art. All the models are using the Transformer as the basic backbone and are (re-)implemented based on Fairseq5 except for FlowSeq. We briefly outline the methods and parameters here, and describe detailed settings in the Appendix A.\n• Vanilla NAT (Gu et al., 2018): Similarly to §3.1, we use a simplified version where the decoder’s inputs are directly copied from the encoder without considering latent variables.\n• FlowSeq (Ma et al., 2019): FlowSeq adopts normalizing flows (Kingma & Dhariwal, 2018) as the latent variables to model the mappings from source sentences to a latent space.\n• NAT with Iterative Refinement (iNAT, Lee et al., 2018): iNAT extends the vanilla NAT by iteratively reading and refining the translation. The number of iterations is set to 10 for decoding.\n• Insertion Transformer (InsT, Stern et al., 2019): InsT adopts a similar architecture as iNAT while generating the sequence by parallel insertion operations. Here, we only consider InsT trained with uniform loss as described in the original paper.\n• MaskPredict (MaskT, Ghazvininejad et al., 2019): MaskT adopts a masked language model (Devlin et al., 2018) to progressively generate the sequence from an entirely masked input. The number of iterations is set to be 10.\n• Levenshtein Transformer (LevT, Gu et al., 2019): LevT uses similar architectures as in InsT and MaskT while generating based on both insertion and deletion operations. We experiment with a base and big LevT model (LevT and LevT-big in Tab. 2).\nWe also summarize the parameter size, performance and relative decoding speed of the NAT models introduced in Tab. 2. We use the decoding time of vanilla NAT to represent one unit of time, and Iters × Pass represents the relative time units used for each model.\nAs mentioned earlier, we analyze each model by training from both the real and 4 distilled targets. We train the NAT models for the same number of steps as the AT models. For a fair comparison of the actual ability of each NAT-based model, we test all the models based on greedy decoding without any advanced search algorithms (e.g. length beam (Ghazvininejad et al., 2019), noisy parallel decoding (Ma et al., 2019), or re-ranking from the teacher model (Gu et al., 2018)). Notably, the vanilla NAT and FlowSeq output translations with single forward pass, while the remaining models are based on the iterative refinement." }, { "heading": "4.2 ANALYSIS OF THE DISTILLED DATA", "text": "We compare different dimensions of the data generated by the four AT models and the real data set in Fig. 3. First, Fig. 3 (a) shows that as the capacity of the AT model increases, the\ncomplexity C(d) of the distilled data increases, which indicates that the multi-modality increases as well. 
" }, { "heading": "4.2 ANALYSIS OF THE DISTILLED DATA", "text": "We compare different dimensions of the data generated by the four AT models and the real data set in Fig. 3. First, Fig. 3 (a) shows that as the capacity of the AT model increases, the complexity C(d) of the distilled data increases, which indicates that the multi-modality increases as well. At the same time, we observe that F(d) defined in §3.2 also decreases, showing that the distilled data more faithfully represents the word-level translation distribution of the original data.\n5https://github.com/pytorch/fairseq\nFigure 2: A sampled pair together with its real target from the distilled data of the base-AT model. Chunks annotated in the same colors are approximately aligned with each other. Source: “For more than 30 years, Josef Winkler has been writing from the heart, telling of the hardships of his childhood and youth.” Distilled Target: “Josef Winkler schreibt sich seit mehr als 30 Jahren die Nöte seiner Kindheit und Jugend von der Seele.” Real Target: “Seit mehr als 30 Jahren schreibt Josef Winkler aus dem Herzen und erzählt von der Not seiner Kindheit und Jugend.”\nFigure 3: Complexity C(d) (↑ more complex), faithfulness F(d) (↓ more faithful), training BLEU, and reordering score (↑ more monotonic alignment) of different distilled sets of WMT14 En-De.\nSecond, we plot the BLEU score of the distilled data w.r.t. the real data set in (b) and we observe that the BLEU score of the distilled data from a higher-capacity teacher model is higher, which is both intuitive and in agreement with the results on KL divergence.\nWe also investigate how the relative ordering of words in the source and target sentences is changed during distillation. We use the fuzzy reordering score proposed in Talbot et al. (2011). A larger fuzzy reordering score indicates more monotonic alignments. As shown in Fig. 3 (c), the distilled data has significantly less reordering compared to the real parallel sentences, and the distilled data from a weaker AT teacher is more monotonic than that from a stronger AT teacher. We also show a randomly sampled example in Fig. 2 where, compared to the real translation, the AT-distilled target is much more monotonically aligned to the source sentence. This has potential benefits in that these simpler reordering patterns may be easier to learn for NAT models, but also disadvantages in that it may prevent NAT models from learning complex reordering patterns." }, { "heading": "4.3 ANALYSIS OF DISTILLATION STRATEGIES", "text": "In §4.2, we have shown that decoding with an AT model reduces the conditional entropy of the parallel data set, which mitigates multi-modality in the output data. But does the decoding method of the AT model affect this change in the data set? We also investigate different decoding strategies when creating distilled data, using the base Transformer model as the teacher and the vanilla NAT model as the student. In Tab. 3, four decoding methods are presented: sampling, sampling within the top-10 candidates, beam search, and greedy decoding. With the same AT model, the performance of the NAT model differs widely depending on the decoding approach, where distillation with beam search results in the best performance.\nWe can see that beam search or greedy decoding can reduce the complexity of the real data the most while maintaining high faithfulness. In contrast, sampling-based decoding methods less aggressively reduce the modes in the output sequence. This finding is in concert with Ott et al. (2018), who demonstrate that because beam search approximately selects the most probable translation, it effectively reduces diversity in the output translations compared to sampling or the true distribution.
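A toy illustration of this effect, with an assumed teacher distribution over three alternative translations of a single source sentence: mode-seeking decoding collapses the empirical conditional entropy of the constructed data set, while sampling approximately preserves it.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])    # assumed teacher distribution over 3 translations

def empirical_entropy(targets):
    _, counts = np.unique(targets, return_counts=True)
    q = counts / counts.sum()
    return float(-(q * np.log(q)).sum())

greedy = np.zeros(10000, dtype=int)        # beam/greedy: always pick the mode
sampled = rng.choice(3, size=10000, p=p)   # ancestral sampling from the teacher

print(empirical_entropy(greedy))    # ~0.00 nats: modes collapsed
print(empirical_entropy(sampled))   # ~1.03 nats: close to H(p), modes preserved
```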
, { "heading": "4.4 DISTILLED DATA VS. NAT MODELS", "text": "We next examine the relationship between the NAT students and the distilled training data from different AT models. In Fig. 4, we show results for the NAT models listed in §4.1. We use the test set performance on real data as a simple metric of the capacity of each NAT model and arrange the subfigures in increasing order of this performance (left-to-right, top-to-bottom). The results in the figure demonstrate that, interestingly, weaker NAT students prefer distilled data of smaller complexity, as measured in §4.2. The best performance of the NAT models – from lower-capacity ones to higher-capacity ones – is achieved with distilled data of lower to higher complexity, i.e. the vanilla NAT model performs best when using the distilled data from a small Transformer, whereas LevT achieves the best performance when trained with the distilled data from a big Transformer. Notably, by simply changing the distilled data set upon which the models are trained, we are able to significantly improve the state-of-the-art results for models in a particular class. For example, the BLEU score of FlowSeq increased to 22 simply by switching from the distilled data of the Transformer (base) to that of the Transformer (small). Finally, we find that by distilling from a big AT model, LevT is able to close the gap with the Transformer (base) with a similar number of parameters. Both LevT and LevT-big achieve state-of-the-art performance for NAT-based models." }, { "heading": "5 IMPROVEMENTS TO KNOWLEDGE DISTILLATION", "text": "The previous section shows that the optimal complexity of the dataset is highly correlated with the capacity of the NAT model. In this section, we introduce three techniques that can be used to alter the distilled data to match the capacity of the NAT model. Specifically, these techniques can be used to simplify the data further (BANs, MoE) for a lower-capacity student model, or to increase the faithfulness of the data set (Interpolation) for a higher-capacity student model.
Born-Again Networks. We apply Born-Again networks (BANs) to create a simplified dataset for NAT models. BANs were originally proposed as a self-distillation technique (Furlanello et al., 2018) that uses the output distribution of a trained model to train the original model. Starting from the real data, we repeatedly train new AT models on the decoded sentences of the AT model from the previous iteration. This process is repeated k times and yields k distilled data sets, upon which we perform NAT training and examine how the k born-again teachers affect the performance of the NAT students.
We conduct experiments using the vanilla NAT model (Gu et al., 2018) (which achieved the best performance with distilled data from a small Transformer in §4.4) and the base Transformer as the AT model. As shown in Fig. 5, we can make the following observations: (i) the performance of the base AT model remains almost unchanged during the reborn iterations; (ii) the performance of the vanilla NAT model improves by 2 BLEU when using the distilled data from reborn iteration 6; (iii) as the reborn iterations continue, the complexity of the distilled data decreases and eventually becomes constant, while, at the same time, the faithfulness of the distilled data to the real data decreases.
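The reborn-teacher loop can be sketched as follows; `train_at` is an assumed helper that trains an AT model on a parallel corpus, and `build_distilled_corpus` is the distillation routine sketched in §4.1.

```python
# Born-Again distillation: each reborn AT teacher is trained on the decoded
# output of its predecessor, progressively simplifying the data; the k
# resulting corpora are then used as candidate training sets for the NAT.
def born_again_datasets(real_corpus, k):
    datasets, corpus = [], real_corpus
    for _ in range(k):
        teacher = train_at(corpus)
        corpus = build_distilled_corpus(teacher, corpus)
        datasets.append(corpus)
    return datasets
```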
Mixture-of-Experts. The mixture-of-experts model (MoE; Shen et al. (2019)) learns different experts for diverse machine translation, and its different mixture components were shown to capture consistent translation styles across examples. Inspired by this, we use one expert from the mixture model to translate the training data, which is supposed to generate a single style of translation and reduce the diversity in the original data set. We then use the best single-expert translations as the distilled data to train the vanilla NAT model. Specifically, we follow Shen et al. (2019)'s setup, using the base Transformer model and a uniform hard mixture model, and varying the number of experts.
In Fig. 6, we observe that the performance of the best expert of the MoE tends to decrease as the number of experts increases. However, the complexity C(d) and faithfulness F(d) of the distilled data from the different MoE models have a relatively large variance. Compared to using the distilled data from a plain base AT model, the performance of the NAT model improves by 1.21 BLEU when using the distilled data from the MoE model with 3 experts, which produces the distilled data with the least complexity.
Sequence-Level Interpolation. §4.4 shows that stronger NAT models (e.g. MaskT, LevT) have the ability to learn from a dataset that is closer to the real data, and achieve better performance. We adopt the sequence-level interpolation proposed in Kim & Rush (2016) as a natural way to create a better dataset. Different from distillation, interpolation picks the sentence with the highest sentence-level BLEU score w.r.t. the ground truth from the K-best beam search hypotheses. In our experiments, we first run beam search using the base Transformer model with a beam size of 5 and then select the sentence with the highest BLEU score from the top-3 candidates.
Tab. 4 compares the performance of LevT trained with distilled data from the AT model under standard distillation or interpolation. We observe that selection by BLEU score from the base AT model (base-inter) improves the performance of LevT by ∼0.4 BLEU, while the dataset complexity C(d) does not increase much." }
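The selection step can be sketched as follows, assuming a helper `beam_hypotheses` that returns the teacher's K-best translations of a source sentence as strings, and using sacrebleu's sentence-level BLEU:

```python
from sacrebleu import sentence_bleu

# Sequence-level interpolation (Kim & Rush, 2016): among the top-K beam
# hypotheses of the AT teacher, keep the one closest to the ground truth
# as measured by sentence-level BLEU.
def interpolated_target(src, real_tgt, K=3):
    hyps = beam_hypotheses(src, beam_size=5)[:K]
    return max(hyps, key=lambda h: sentence_bleu(h, [real_tgt]).score)
```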
, { "heading": "6 CONCLUSION", "text": "In this paper, we first systematically examine why knowledge distillation improves the performance of NAT models. We conducted extensive experiments with autoregressive teacher models of different capacities and a wide range of NAT models. Furthermore, we defined metrics that can quantitatively measure the complexity of a parallel data set. Empirically, we find that a higher-capacity NAT model requires more complex distilled data to achieve better performance. Accordingly, we propose several techniques that can adjust the complexity of a data set to match the capacity of an NAT model for better performance." }, { "heading": "A EXPERIMENTAL DETAILS", "text": "A.1 AT MODELS
Model. All the AT models are implemented based on the Transformer model using fairseq (Ott et al., 2019), and we largely follow the fairseq translation examples (https://github.com/pytorch/fairseq/blob/master/examples/translation) to train the Transformers. Following the notation of Vaswani et al. (2017), we list the basic parameters of all the AT models we used (see Tab. 5).
Training. For all experiments, we adopt the Adam optimizer (Kingma & Ba, 2014) with β1 = 0.9, β2 = 0.98 and ε = 1e−8. The learning rate is scheduled using the inverse-sqrt schedule with a maximum learning rate of 0.0005 and 4,000 warmup steps. We set the label smoothing to 0.1. All the models are run on 8 GPUs for 300,000 updates with an effective batch size of 32,000 tokens. The best model is selected based on the validation loss, except for FlowSeq, which uses the validation BLEU score.
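For reference, a minimal sketch of this learning-rate schedule (linear warmup to the peak rate, followed by decay proportional to the inverse square root of the step count):

```python
def inverse_sqrt_lr(step, peak_lr=0.0005, warmup=4000):
    if step < warmup:
        return peak_lr * step / warmup      # linear warmup
    return peak_lr * (warmup ** 0.5) / (step ** 0.5)
```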
Decoding. After training, we use beam search with a fixed beam size of 5 for all AT models to create the distilled dataset. We use length normalization without length penalty.
A.2 NAT MODELS
Model. Tab. 2 also lists all the NAT models we test in this work. In general, all the NAT models except FlowSeq and LevT-big adopt a similar architecture and hyper-parameters to Transformer-base (see Tab. 5). LevT-big is a naive extension of the original LevT model with a parameter setting comparable to Transformer-big (Tab. 5). For FlowSeq, we use the base model (FlowSeq-base) described in Ma et al. (2019). We re-implemented the vanilla NAT as a simplified version of Gu et al. (2018): instead of modeling fertility as described in the original paper, we monotonically copy the encoder embeddings to the input of the decoder. All the models except InsT require an additional module to predict the length of the output sequence, or the number of placeholders to be inserted, which is implemented as a standard softmax classifier over the lengths [0, 256). For LevT, we also have a binary classifier to predict the deletion of incorrect tokens.
Training. Similar to the AT models, all the NAT models are trained using the Adam optimizer with the same learning rate scheduler, in which the warmup steps are set to 10,000. We train the FlowSeq model on 32 GPUs with a batch size of 2,048 sentences, while all the other models are trained on 8 GPUs with an effective batch size of 64,000 tokens. Note that the batch sizes used for training the NAT models are typically larger than those for the AT models, which improves the final results. There are also specialized training settings for each model:
• iNAT (Lee et al., 2018): following the original paper, we train the iNAT model jointly with 4 iterations of refinement during training. In each iteration, the model has a 50% probability of learning as a denoising autoencoder, and otherwise learns from its own predictions.
• InsT (Stern et al., 2019): in this work, we only consider training the Insertion Transformer (InsT) using the slot-loss based on the uniform loss function (Stern et al., 2019); that is, we assign equal probabilities to all the insertable tokens inside each slot.
• MaskT (Ghazvininejad et al., 2019): following the original paper, we train the model as a typical masked language model where the ratio of masked tokens is sampled from 0 ∼ 100%.
• LevT (Gu et al., 2019): in this work, we only consider sequence generation tasks, which means the training of LevT is very similar to that of InsT. We use sentences with randomly deleted tokens to learn insertion, and learn deletion based on the model's own predictions.
Decoding. For a fair comparison over all the NAT models, we use greedy decoding for all the models, without considering any advanced decoding methods such as searching or re-ranking with a teacher model. For the vanilla NAT and FlowSeq, decoding is straightforward and simply picks the argmax at every position. For iNAT and MaskT, we fix the number of decoding steps to 10. Both InsT and LevT decode in an adaptive number of iterations, and we set the maximum number of iterations for both models to 10. A special EOS penalty that penalizes generating overly short sequences is tuned based on the validation set for both InsT and LevT.
For all models, final results are calculated using tokenized BLEU scores." }, { "heading": "B REAL DATA STATISTICS", "text": "The detailed dataset split for WMT14 En-De is shown in Tab. 6. In Fig. 7, we also plot the histogram of the sentence-level conditional entropy H(y|x) of each pair of sentences in the real parallel data and in the different distilled data sets from the big-AT, base-AT, small-AT and tiny-AT, respectively. It shows that the distributions of the sentence-level conditional entropy differ widely: the mode of H(y|x) on the real data is the highest, followed by the distilled data from the big-AT, base-AT, small-AT and tiny-AT. This observation aligns with the complexity value C(d) proposed in §3.2." }, { "heading": "C ADDITIONAL METRICS", "text": "In Figure 8, we also show results with different metrics together with the BLEU scores, considering that BLEU scores sometimes cannot fully capture the changes in a system. We considered 5 additional metrics in our experiments: METEOR (Banerjee & Lavie, 2005), RIBES (Isozaki et al., 2010), ChrF (Popović, 2015), TER (Snover et al., 2006), and BEER (Stanojevic & Simaan, 2014). Not surprisingly, we find that all the metrics correlate well with the original BLEU scores, showing a similar trend to that discussed earlier." }, { "heading": "D SYNTHETIC DATA WITH ACCESS TO THE TRUE DISTRIBUTION", "text": "D.1 BACKGROUND: BAYESIAN DECISION THEORY
Bayesian decision theory is a fundamental statistical approach to the problem of pattern classification, which provides a principled rule for finding the optimal classification decision using probabilities and the losses that accompany such decisions.
In the problem of structured prediction (Ma et al., 2017), let x denote the input sequence and y denote the output label sequence. Let H denote the set of all possible hypothesis functions from the input to the output space: H = {h : X → Y}. Let r(y|x) denote the conditional risk on the input x, which is the expected loss of predicting y based on the posterior probabilities:
r(y|x) = E_{P(y'|x)}[L(y, y')],   (5)
where L(y, y') is the loss function that penalizes predicting the true target y' as y. The classification task aims to find a hypothesis function h that minimizes the overall risk R given by
R(h) = E_{P(x)}[r(h(x)|x)].   (6)
This is known as the Bayes risk. To minimize the overall risk, we clearly need to minimize the conditional risk for each input x. The Bayesian decision rule states that the global minimum of R(h) is achieved when the classifier makes predictions that minimize each conditional risk given x, and this gives the Bayes optimal classifier:
h*(x) = argmin_{y ∈ Y} r(y|x).   (7)
Let us consider two loss functions for Eq. 5. The first is the sequence-level loss L_seq(y, y') = 1 − I(y = y'), in which case the Bayes classifier is
h*_seq(x) = argmax_{y ∈ Y} P(y|x),   (8)
which is the most probable output label sequence given the input sequence x.
Second, let us consider the token-level loss L_tok(y, y') = Σ_{t=1}^{T} (1 − I(y_t = y'_t)), i.e. the sum of the zero-one losses at each time step. We have:
h*_tok(x) = argmin_{y ∈ Y} E_{P(y'|x)}[L_tok(y, y')]
          = argmax_{y ∈ Y} E_{P(y'|x)}[Σ_{t=1}^{T} I(y_t = y'_t)]
          = argmax_{y ∈ Y} Σ_{t=1}^{T} E_{P(y'_t|x)}[I(y_t = y'_t)]
          = argmax_{y ∈ Y} Σ_{t=1}^{T} P(y_t|x) = argmax_{y ∈ Y} Π_{t=1}^{T} P(y_t|x),   (9)
where the last equality holds because each position can be maximized independently. This suggests that the Bayes classifier finds the most probable label at each time step given the input sequence.
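The gap between the two decision rules is easy to see on a toy multi-modal distribution; the following minimal sketch (with an arbitrary illustrative P(y|x) over two-token sequences) makes both decisions explicit:

```python
# The sequence-level rule picks the joint mode, while the token-level rule
# picks the per-position marginal modes; for this distribution they differ.
P = {("A", "B"): 0.35, ("B", "A"): 0.35, ("A", "A"): 0.30}
h_seq = max(P, key=P.get)          # ("A", "B"): the joint mode (tie broken
                                   # by insertion order)
marg = [{}, {}]
for y, p in P.items():
    for t in range(2):
        marg[t][y[t]] = marg[t].get(y[t], 0.0) + p
h_tok = tuple(max(m, key=m.get) for m in marg)  # ("A", "A"): marginal modes
```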
D.2 EXPERIMENTAL SETUPS AND ANALYSIS
To study how the training data affects the performance of a weaker classifier, we construct a Hidden Markov Model (HMM) by sampling the parameters of its transition and emission probabilities uniformly within (0, a] and (0, b], respectively. Higher values of a and b indicate an HMM with higher uncertainty. We refer to this HMM as the "true HMM" and use it as our real data generator. Next, we consider a weaker classifier that uses a low-dimensional bidirectional LSTM (Bi-LSTM) to encode the input sequence and individual softmax functions at each time step to predict labels independently, which we refer to as the "Bi-LSTM" classifier. Obviously, the Bi-LSTM classifier is not able to model the dependencies between output labels embedded in the HMM, and it is thus equivalent to a simplified non-autoregressive generation model.
We generate the real training data D_real = {(x_1, y_1), ..., (x_N, y_N)} of size N by sampling from the joint probability of the true HMM. Similarly, we sample N_test data points as the test data and N_valid data points as the validation data. We evaluate the classifier's token-level accuracy t_acc and sequence-level accuracy s_acc on the test data, where
t_acc = (Σ_{i=1}^{N_test} Σ_{t=1}^{T} I(h(x_i)_t = y_i^t)) / (T × N_test)  and  s_acc = (Σ_{i=1}^{N_test} I(h(x_i) = y_i)) / N_test.
These two metrics correspond to the token-level loss L_tok and the sequence-level loss L_seq on each data point of the test data.
First, we use h*_seq(x) to generate the distillation labels y' from the true HMM, which corresponds to applying Viterbi decoding to each x_i in D_real; the training data set D_seq is created from the pairs (x_i, y'_i). Next, we use h*_tok(x) to generate the distillation labels ŷ and create the training data D_tok from the pairs (x_i, ŷ_i). To generate ŷ, we apply the forward-backward algorithm to each x_i in D_real to obtain the posteriors P(y_i^t | x_i), and take the argmax over the label space L: ŷ_i^t = argmax_{y_i^t ∈ L} P(y_i^t | x_i).
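A minimal numpy sketch of these two label-generation rules is given below; A and B denote the true HMM's transition and emission matrices, the initial state distribution is assumed uniform, and probability scaling is omitted for brevity (adequate for short toy sequences):

```python
import numpy as np

def viterbi(x, A, B):                      # joint mode, used to build D_seq
    S = A.shape[0]
    d, back = np.log(B[:, x[0]] / S), []
    for o in x[1:]:
        score = d[:, None] + np.log(A) + np.log(B[:, o])[None, :]
        back.append(score.argmax(0))       # best previous state per state
        d = score.max(0)
    y = [int(d.argmax())]
    for bp in reversed(back):              # backtrace the best path
        y.append(int(bp[y[-1]]))
    return y[::-1]

def posterior_argmax(x, A, B):             # per-position modes, builds D_tok
    S, T = A.shape[0], len(x)
    fwd, bwd = np.zeros((T, S)), np.zeros((T, S))
    fwd[0] = B[:, x[0]] / S
    for t in range(1, T):
        fwd[t] = (fwd[t - 1] @ A) * B[:, x[t]]
    bwd[-1] = 1.0
    for t in range(T - 2, -1, -1):
        bwd[t] = A @ (B[:, x[t + 1]] * bwd[t + 1])
    return list((fwd * bwd).argmax(1))     # argmax of P(y_t | x) per position
```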
We use these three training sets (D_real, D_tok, D_seq) to train the Bi-LSTM classifier. We repeat the experiment 50 times, constructing 50 HMMs with different random seeds as data generators. We find that when evaluating with the token-level accuracy t_acc, the models trained with D_tok yield the best performance (the Bi-LSTM trained with D_tok wins in 97.6% of the runs); when evaluating with the sequence-level accuracy s_acc, the models trained with D_seq yield the best performance (the Bi-LSTM trained with D_seq wins in 98.5% of the runs). This is because the Bi-LSTM classifier has difficulty modeling the true data distribution defined by the HMM, while it is easier for it to model the distributions of D_seq and D_tok: both data sets define deterministic conditional distributions over the input data, which are much simpler than the real data distribution. By definition, D_tok is created by the optimal Bayes classifier h*_tok(x), which means that the Bi-LSTM classifier trained with D_tok can better capture the per-position marginal modes argmax_{y_t} P(y_t|x) and can therefore generalize better to the test data when evaluated with the token-level accuracy. Similarly, the Bi-LSTM trained with D_seq performs better on the test data under the sequence-level metric.
This corroborates our observation in the machine translation task that the NAT model has difficulty modeling the real conditional distribution of the true sentence pairs; however, when using the distilled data translated from a pretrained autoregressive model with beam-search decoding, it performs better on the test set when evaluated with the BLEU score metric." } ]
2020
NON-AUTOREGRESSIVE MACHINE TRANSLATION
SP:3ed52251f2462bc5bb61c384ecafc1cce376ef26
[ "In this paper, the authors propose to use the *same* convolutional layer in every layer of a DNN. The network effectively is converted into repeatedly applying the same convolutional filter at multiple scales. The idea is motivated by wavelet decompositions and related work. The authors show that by repeatedly applying the same filter, the number of parameters that need to be stored for a model reduces proportionally to the depth of the network. At the same time, experimental evidence is provided that the performance of these models is not affected, when compared to the baseline (full) model. ", "This paper proposes to modify a standard CNN by requiring all of its layers to share the same filter set, essentially allowing it to be expressed as an iterative (or recurrent) network. This also has the effect of forcing the same number of feature channels to be used throughout the network. For ResNet-like architectures with bottleneck blocks, sharing occurs at the level of the block (3 conv layers in series that are repeated). Another variant of the sharing pattern inserts unshared 1x1 convolutional layers after shared layers or blocks; this adds some flexibility while still reducing parameters compared to standard CNNs." ]
Deep CNNs have achieved state-of-the-art performance on numerous machine learning and computer vision tasks in recent years, but as they have become increasingly deep, the number of parameters they use has also increased, making them hard to deploy in memory-constrained environments and difficult to interpret. Machine learning theory implies that such networks are highly overparameterised and that it should be possible to reduce their size without sacrificing accuracy, and indeed many recent studies have begun to highlight specific redundancies that can be exploited to achieve this. In this paper, we take a further step in this direction by proposing a filter-sharing approach that reformulates deep, complex CNNs as an iterative application of shallower modules (a single convolutional mapping in the simplest case). We show, via experiments on CIFAR-10, CIFAR-100, Tiny ImageNet and ImageNet that this allows us to reduce the parameter counts of networks based on common designs such as VGGNet and ResNet by a factor proportional to their depth, whilst leaving their accuracy largely unaffected. At a broader level, our approach represents a way of rethinking neural network architectures so as to leverage the scale-space regularities found in visual signals, resulting in models that are both parsimonious and easier to interpret.
[]
[ { "authors": [ "Alireza Aghasi", "Afshin Abdi", "Nam Nguyen", "Justin Romberg" ], "title": "Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee", "venue": null, "year": 2017 }, { "authors": [ "Arash Ardakani", "Carlo Condo", "Warren J Gross" ], "title": "Activation Pruning of Deep Convolutional Neural Networks", "venue": "In IEEE Global Conference on Signal and Information Processing (GlobalSIP),", "year": 2017 }, { "authors": [ "Lei Jimmy Ba", "Rich Caruana" ], "title": "Do Deep Nets Really Need to be Deep", "venue": "In NIPS, pp", "year": 2014 }, { "authors": [ "S. Bianco", "R. Cadene", "L. Celona", "P. Napoletano" ], "title": "Benchmark analysis of representative deep neural network architectures", "venue": "IEEE Access,", "year": 2018 }, { "authors": [ "Alexandre Boulch" ], "title": "Reducing parameter number in residual networks by sharing weights", "venue": "Pattern Recognition Letters,", "year": 2018 }, { "authors": [ "Wenlin Chen", "James T Wilson", "Stephen Tyree", "Kilian Q Weinberger", "Yixin Chen" ], "title": "Compressing Neural Networks with the Hashing Trick", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Yu Cheng", "Duo Wang", "Pan Zhou", "Tao Zhang" ], "title": "A survey of model compression and acceleration for deep neural networks", "venue": "arXiv preprint arXiv:1710.09282,", "year": 2017 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Timur Garipov", "Dmitry Podoprikhin", "Alexander Novikov", "Dmitry Vetrov" ], "title": "Ultimate tensorization: compressing convolutional and FC layers alike", "venue": null, "year": 2016 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In NIPS,", "year": 2015 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the Knowledge in a Neural Network", "venue": "In NIPS-W,", "year": 2014 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry" ], "title": "Kalenichenko. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications", "venue": null, "year": 2017 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens van der Maaten", "Kilian Q. Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Itay Hubara", "Matthieu Courbariaux", "Daniel Soudry", "Ran El-Yaniv", "Yoshua Bengio" ], "title": "Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations", "venue": null, "year": 2018 }, { "authors": [ "Forrest N Iandola", "Song Han", "Matthew W Moskewicz", "Khalid Ashraf", "William J Dally", "Kurt Keutzer" ], "title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and < 0.5MB model size", "venue": null, "year": 2016 }, { "authors": [ "Saumya Jetley", "Michael Sapienza", "Stuart Golodetz", "Philip H.S. 
Torr" ], "title": "Straight to shapes: Realtime detection of encoded shapes", "venue": null, "year": 2017 }, { "authors": [ "Saumya Jetley", "Nicholas Lord", "Philip Torr" ], "title": "With friends like these, who needs adversaries", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Seong Tae Kim", "Jae-Hyeok Lee", "Hakmin Lee", "Yong Man Ro" ], "title": "Visually interpretable deep network for diagnosis of breast masses on mammograms", "venue": "Physics in Medicine & Biology,", "year": 2018 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning Multiple Layers of Features from Tiny Images", "venue": "Technical report, University of Toronto,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In NIPS,", "year": 2012 }, { "authors": [ "Yann LeCun", "John S. Denker", "Sara A. Solla" ], "title": "Optimal brain damage", "venue": "In NIPS, pp", "year": 1990 }, { "authors": [ "Namhoon Lee", "Thalaiyasingam Ajanthan", "Philip H S Torr" ], "title": "SNIP: Single-shot Network Pruning based on Connection Sensitivity", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Ming Liang", "Xiaolin Hu" ], "title": "Recurrent convolutional neural network for object recognition", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Qianli Liao", "Tomaso Poggio" ], "title": "Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex", "venue": null, "year": 2016 }, { "authors": [ "Shaohui Lin", "Rongrong Ji", "Xiaowei Guo", "Xuelong Li" ], "title": "Towards convolutional neural networks compression via global error reconstruction", "venue": "In IJCAI,", "year": 2016 }, { "authors": [ "Zhenhua Liu", "Jizheng Xu", "Xiulian Peng", "Ruiqin Xiong" ], "title": "Frequency-domain dynamic pruning for convolutional neural networks", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Raphael Gontijo Lopes", "Stefano Fenu", "Thad Starner" ], "title": "Data-Free Knowledge Distillation for Deep", "venue": "Neural Networks", "year": 2017 }, { "authors": [ "Jian-Hao Luo", "Jianxin Wu", "Weiyao Lin" ], "title": "ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Stephane G Mallat" ], "title": "A theory for multiresolution signal decomposition: the wavelet representation", "venue": "TPAMI, 7:674–693,", "year": 1989 }, { "authors": [ "Daniela Massiceti", "N. Siddharth", "Puneet K. Dokania", "Philip H.S. 
Torr" ], "title": "Flipdial: A generative model for two-way visual dialogue", "venue": null, "year": 2018 }, { "authors": [ "Alexander Novikov", "Dmitry Podoprikhin", "Anton Osokin", "Dmitry Vetrov" ], "title": "Tensorizing Neural Networks", "venue": "In NIPS, pp", "year": 2015 }, { "authors": [ "Ozan Oktay", "Jo Schlemper", "Loic Le Folgoc", "Matthew Lee", "Mattias Heinrich", "Kazunari Misawa", "Kensaku Mori", "Steven McDonagh", "Nils Y Hammerla", "Bernhard Kainz" ], "title": "Attention u-net: Learning where to look for the pancreas", "venue": null, "year": 2018 }, { "authors": [ "Aaron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": null, "year": 2016 }, { "authors": [ "Guillermo Valle Pérez", "Ard A Louis", "Chico Q Camargo" ], "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", "venue": "arXiv preprint arXiv:1805.08522,", "year": 2018 }, { "authors": [ "Pedro HO Pinheiro", "Ronan Collobert" ], "title": "Recurrent convolutional neural networks for scene labeling", "venue": "In 31st International Conference on Machine Learning (ICML),", "year": 2014 }, { "authors": [ "Joseph Redmon", "Ali Farhadi" ], "title": "Yolo9000: Better, faster, stronger", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Adriana Romero", "Nicolas Ballas", "Samira Ebrahimi Kahou", "Antoine Chassang", "Carlo Gatta", "Yoshua Bengio" ], "title": "FitNets: Hints for Thin Deep Nets", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein", "Alexander C. Berg", "Li Fei-Fei" ], "title": "ImageNet Large Scale Visual Recognition Challenge", "venue": null, "year": 2015 }, { "authors": [ "Bharat Bhusan Sau", "Vineeth N Balasubramanian" ], "title": "Deep Model Compression: Distilling Knowledge from Noisy Teachers", "venue": null, "year": 2016 }, { "authors": [ "Pedro Savarese", "Michael Maire" ], "title": "Learning Implicitly Recurrent CNNs Through Parameter Sharing", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Suraj Srinivas", "R Venkatesh Babu" ], "title": "Data-free Parameter Pruning for Deep Neural Networks", "venue": "In BMVC, pp", "year": 2015 }, { "authors": [ "Paul Viola", "Michael J Jones" ], "title": "Robust real-time face detection", "venue": "IJCV, 57(2):137–154,", "year": 2004 }, { "authors": [ "Jiaxiang Wu", "Cong Leng", "Yuhang Wang", "Qinghao Hu", "Jian Cheng" ], "title": "Quantized Convolutional Neural Networks for Mobile Devices", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Tien-Ju Yang", "Yu-Hsin Chen", "Vivienne Sze" ], "title": "Designing Energy-Efficient Convolutional Neural", "venue": "Deep Convolutions. In PMLR,", "year": 2018 }, { "authors": [ "Yingzhen Yang", "Nebojsa Jojic", "Jun Huan" ], "title": "Networks using Energy-Aware Pruning", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "2019b. 
Chen Yunpeng", "Jin Xiaojie", "Kang Bingyi", "Feng Jiashi", "Yan Shuicheng" ], "title": "Sharing Residual Units", "venue": null, "year": 2017 } ]
[ { "heading": null, "text": "Deep CNNs have achieved state-of-the-art performance on numerous machine learning and computer vision tasks in recent years, but as they have become increasingly deep, the number of parameters they use has also increased, making them hard to deploy in memory-constrained environments and difficult to interpret. Machine learning theory implies that such networks are highly overparameterised and that it should be possible to reduce their size without sacrificing accuracy, and indeed many recent studies have begun to highlight specific redundancies that can be exploited to achieve this. In this paper, we take a further step in this direction by proposing a filter-sharing approach that reformulates deep, complex CNNs as an iterative application of shallower modules (a single convolutional mapping in the simplest case). We show, via experiments on CIFAR-10, CIFAR-100, Tiny ImageNet and ImageNet that this allows us to reduce the parameter counts of networks based on common designs such as VGGNet and ResNet by a factor proportional to their depth, whilst leaving their accuracy largely unaffected. At a broader level, our approach represents a way of rethinking neural network architectures so as to leverage the scale-space regularities found in visual signals, resulting in models that are both parsimonious and easier to interpret." }, { "heading": "1 INTRODUCTION", "text": "Deep CNNs have achieved state-of-the-art results on a wide range of tasks, from image understanding (Redmon & Farhadi, 2017; Jetley et al., 2017; Kim et al., 2018; Oktay et al., 2018) to natural language processing (Oord et al., 2016; Massiceti et al., 2018). However, these network architectures are often highly overparameterised (Zhang et al., 2016), and thus require the supervision of a large number of input-output mappings and significant training time to adapt their parameters to any given task. Recent studies have discovered several different redundancies in these network architectures (Garipov et al., 2016; Hubara* et al., 2018; Wu et al., 2018; Frankle & Carbin, 2019; Yang et al., 2019a;b) and certain simplicities (Pérez et al., 2018; Jetley et al., 2018) in the functions that they implement. For instance, Frankle & Carbin (2019) showed that a large classification network can be distilled down to a small sub-network that, owing to its lucky initialisation, is trainable in isolation without compromising the original classification accuracy. Jetley et al. (2018) observed that deep classification networks learn simplistic non-linearities for class identification, a fact that might well underlie their adversarial vulnerability, whilst challenging the need for complex architectures. Attempts at knowledge distillation have regularly demonstrated that it is possible to train small student architectures to mimic larger teacher networks by using ancillary information extracted from the latter, such as their attention patterns (Zagoruyko & Komodakis, 2017), predicted soft-target distributions (Hinton et al., 2014) or other kinds of meta-data (Lopes et al., 2017). These works and others continue to expose the high level of parameter redundancy in deep CNNs, and comprise a foundational body of work towards studying and simplifying networks for safe and practical use.\nOur paper experiments with yet another scheme for simplifying CNNs, in the hope that it will not only shrink the effective footprint of these networks, but also open up new pathways for network understanding and redesign. 
In particular, we propose the use of a common set of convolutional filters at different levels of a convolutional hierarchy to achieve class disentanglement. Mathematically, we formulate a classification CNN as an iterative function in which a small set of learned convolutional mappings are applied repeatedly as different layers of a CNN pipeline (see Figure 1). In doing so, we are able to reduce the parameter count of the network by a factor proportional to its depth, whilst leaving its accuracy largely unaffected. We also investigate the introduction of non-shared linear layers before certain shared convolutional layers to enhance the flexibility of the model by allowing it to linearly combine shared filter maps for the disentanglement task." }, { "heading": "2 RELATED WORK", "text": "This work is partly inspired by the classic literature on image processing that has long sought to characterise natural images by collating their responses, at different image scales, to a small, canonical set of hand-crafted visual operators (Mallat, 1989; Viola & Jones, 2004). Modern CNN architectures (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015) effectively still implement hierarchical feature extraction, but with the difference that there are thousands of such operators (i.e. convolutional filters) at each scale level, all of which are individually adaptable and learned via backpropagation. Our work can thus be seen as an effort to reconcile the above two non-contemporaneous approaches to image processing, in which we aim to identify a common set of visual operators for all the different scales by learning them in an end-to-end manner.
Our approach bears some high-level resemblance to previous approaches (e.g. Pinheiro & Collobert (2014); Liang & Hu (2015); Liao & Poggio (2016); Savarese & Maire (2019)) that have attempted to implement, interpret and potentially improve convolutional neural networks through an iterative use of simpler modules. For example, Liao & Poggio (2016) share convolutional mappings in ResNets in an attempt to approximate biological visual systems using feedback loops and recurrence, although their experimental analysis is limited to the CIFAR dataset. By contrast, our work applies the convolution-sharing paradigm to both plain feed-forward and residual constructs, and investigates the effectiveness of using only a single shared convolutional mapping for the entire network pipeline. An additional contribution of our approach is the flexibility we add to the model by coupling learned linear layers with shared convolutions while still limiting the total parameter count. Experimentally, we evaluate the accuracy vs. model size tradeoff induced by our approach on a realistic set of datasets that include Tiny ImageNet and ImageNet.
A steady increase in the size of datasets and the availability of computational resources has enabled neural networks to grow deeper (Simonyan & Zisserman, 2015; He et al., 2016), denser (Huang et al., 2017) and wider (Zagoruyko & Komodakis, 2016). In doing so, concerns regarding their over-parameterisation have often been ignored in favour of better test set generalisation.¹
More recently, as their performance on some benchmarks (He et al., 2016; Oord et al., 2016) has reached near-human levels, real-world deployment of these models is being considered. This deployment has been hugely impeded by the memory requirements, latency and energy demands of their heavy computational machinery (Bianco et al., 2018).
¹ "Despite previous arguments that depth gives regularization effects and width causes network to overfit, we successfully train networks with 5 times more parameters than ResNet-1001, ... and outperform ResNet-1001 by a significant margin." (Excerpt from Zagoruyko & Komodakis (2016).)
Our approach contributes to the (extensive) literature on network compression that is focused on making these machine learning models more usable in practical scenarios. Existing compression methods can be divided into seven categories – pruning, quantisation, tensorization/tensor decomposition, knowledge distillation, custom architectures, sharing-based and hybrid methods. Many of these works are beyond the scope of this paper, but for completeness, we present a brief review in §A.1 (a more exhaustive survey can be found in Cheng et al. (2017)). Our own work falls within the realm of sharing-based methods that seek to equate some of a network's weights or filters to reduce the number of independent parameters in the network. There are various ways of deciding which weights/filters to share, from somewhat arbitrary (if effective) approaches such as the hashing trick (Chen et al., 2015; Liu et al., 2018), to more principled approaches such as k-means clustering (Wu et al., 2018). A few recent works have turned their attention to sharing convolutional weight matrices in a more structured manner. Of these, LegoNet (Yang et al., 2019b) shares filter groups across sets of channels, whilst FSNet (Yang et al., 2019a) shares filter weights across spatial locations. In both cases, sharing is restricted to a single layer at a time. ShaResNet (Boulch, 2018) reuses convolutional mappings, but within the same scale level (i.e. between two max-pooling steps). The novelty of our work lies in extending this filter-sharing paradigm to an entire convolutional pipeline. We instantiate a single convolutional layer that is applied iteratively to mimic a deep convolutional feature extractor, and analyse the accuracy vs. memory tradeoff for different widths of this layer." }, { "heading": "3 METHOD", "text": "A standard feed-forward classification CNN can be formulated as
F = C ∘ F_conv = C ∘ (R_L ∘ f_L ∘ ··· ∘ R_1 ∘ f_1),   (1)
where the overall function F is a composition of the convolutional feature extractor F_conv followed by a fully-connected classifier C. The convolutional sub-model F_conv consists of a sequence of convolutional layers [f_i : 1 ≤ i ≤ L], interspersed with non-linearities (ReLUs, max-pooling) or regularisers (dropout, BatchNorm) or some combination thereof, denoted by R_i. The function performed by each convolutional layer f_i is completely specified by a set of weights and biases that we denote using W_i. Crucially, the weights and biases for each different layer are independent. The number of parameters in layer f_i is then simply the size of W_i, calculated as
|W_i| = n_i^in × n_i^out × k_i² + n_i^out = v_i × k_i² + n_i^out ≈ v_i × k_i²,   (2)
where n_i^in is the number of input channels to f_i, n_i^out is the number of output channels, v_i = n_i^in × n_i^out is the volume of f_i, and k_i is the size of its (square) convolutional filters. In practice, the n_i^out term for the biases is dominated by that for the weights, and so we disregard it in what follows.
Letting W_conv = ∪_{i=1}^{L} W_i denote all the parameters in F_conv (i.e. disregarding the comparatively small contributions from the non-convolutional layers), the total parameter count is given by
|W_conv| = Σ_{i=1}^{L} |W_i| ≈ Σ_{i=1}^{L} v_i × k_i².   (3)
Note that for many common architectures, there exists some k such that ∀i, k_i = k (e.g. for VGGNet, k = 3). For such architectures, Equation 3 can then be further simplified to |W_conv| ≈ L × v̄ × k², in which v̄ = L⁻¹ Σ_{i=1}^{L} v_i is the mean volume per network layer.
Our method proposes a crude simplification to such architectures, namely to instantiate a single convolutional layer f, and apply it L successive times in order to implement a convolutional pipeline of equivalent depth to the original model. In particular, we enforce the following constraint:
W_1 = W_2 = ··· = W_L = W ⇔ f_1 = f_2 = ··· = f_L = f.   (4)
This simplifies the CNN architecture in Equation 1 to
F̃ = C ∘ F̃_conv = C ∘ (R_L ∘ f ∘ ··· ∘ R_1 ∘ f).   (5)
Whilst our analysis focuses purely on the convolutional layers, it is interesting to note that when the R_i layers are all the same, the CNN architecture simplifies further to the following iterative form:
F̃ = C ∘ F̃_conv = C ∘ (R ∘ f)^L.   (6)
The convolutional layer f in our architecture expects an input tensor with a predetermined number of channels, which we will call n. Meanwhile, the R_i layers between the convolutional layers leave the number of channels unchanged. Thus, given the iterative application of f, the layer f must also output a tensor with n channels. (In practice, f is called for the first time on the input image itself, which for colour images would normally only have 3 channels. To avoid artificially limiting n to 3, we pad the input image with empty channels to produce a tensor with n channels.) We deduce that |W|, the number of parameters for f, must satisfy |W| ≈ n² × k² = v × k², where v = n² is the volume of f. Furthermore, since W is shared between all L convolutional layers, the total number of independent parameters in F̃_conv must also just be |W|. The compression factor between the original architecture and its shared counterpart can thus be quantified as
C = |W_conv| / |W| = (Σ_{i=1}^{L} |W_i|) / |W| ≈ (L × v̄ × k²) / (v × k²) = L / (v / v̄).   (7)
This is proportional to the depth L of the original network, and is down-weighted by any (multiplicative) increase in the average per-layer volume in going from the original to the shared architecture.
We now turn to examine the convolutional operation in our architecture. Each layer f, the operation of which is completely specified by the weights and biases in W, takes an input tensor X of size n × h × w, where n, h and w denote the number of channels, height and width respectively. Based on X and W, we can conceptually define 2D matrices Φ(X) and Γ(W) as follows:
Φ(X) = [x_11^T ··· x_1n^T 1; ··· ; x_m1^T ··· x_mn^T 1],  Γ(W) = [w_11 w_12 ··· w_1n; ··· ; w_n1 w_n2 ··· w_nn; b_1 b_2 ··· b_n].   (8)
In this, m = h × w, and each x_ij is a rasterisation of the k × k patch of the input tensor centred at spatial location i in channel j. Each w_ij is a similar rasterisation of the k × k convolutional kernel that maps the input channel i ∈ {1, 2, ..., n} to the output channel j ∈ {1, 2, ..., n}, and each b_j is the bias for output channel j. Then f can be defined concisely as f(X) = Ψ(Φ(X) × Γ(W)), in which Ψ reshapes the m × n tensor Φ(X) × Γ(W) back to one of size n × h × w.
In practice, this simple formulation could be seen as being too restrictive, in the sense that irrespective of the convolutional iteration, each filter w_ij in Γ(W) only ever operates on patches from input channel i (for example, the w_1j filters only ever operate on patches from channel 1). For this reason, we decided to investigate whether adding a way of allowing the input channels to be reorganised at various points in the overall pipeline would improve performance. In principle, one way of achieving this would be to add n × n permutation matrices at appropriate points in the pipeline, e.g. just before each pooling operation. In practice, however, to make the operations differentiable, we implement them using linear layers (i.e. 1 × 1 convolutions), thus implementing blending of the input channels rather than simply permuting them. The weights of these layers are separate for each instantiation and are learned as part of the end-to-end pipeline.
It would be reasonable to expect this added flexibility to yield a significant increase in performance, and indeed our results in §5 show this to be the case. Nevertheless, it is notable that even without this added flexibility, our shared architectures already achieve extremely good performance on the datasets on which we tested, demonstrating that our underlying approach of sharing filters between layers makes sense even in the absence of permutation/blending." }
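A minimal PyTorch sketch of this iterative formulation is given below. The block sizes, the channel width n and the global-average-pooled classifier head are illustrative choices rather than the exact configurations used in our experiments; with blend=True it corresponds to the SL variants described in §4.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedConvNet(nn.Module):
    def __init__(self, n=256, blocks=(2, 2, 3, 3, 3), num_classes=10, blend=True):
        super().__init__()
        self.n = n
        self.blocks = blocks
        self.shared = nn.Conv2d(n, n, kernel_size=3, padding=1)  # the single f
        self.blenders = nn.ModuleList(
            [nn.Conv2d(n, n, kernel_size=1) if blend else nn.Identity()
             for _ in blocks])                                   # unshared 1x1 layers
        self.classifier = nn.Linear(n, num_classes)

    def forward(self, x):
        # pad the 3-channel input image with empty channels up to n channels
        x = F.pad(x, (0, 0, 0, 0, 0, self.n - x.size(1)))
        for blender, depth in zip(self.blenders, self.blocks):
            for _ in range(depth):              # iterate the shared mapping f
                x = F.relu(self.shared(x))
            x = F.max_pool2d(blender(x), 2)     # blend channels, then downsample
        return self.classifier(x.mean(dim=(2, 3)))
```

Note that the shared layer contributes ≈ n² × k² weights however many times it is applied, matching the |W| ≈ v × k² count used in Equation 7.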
, { "heading": "4 DATASETS AND ARCHITECTURES", "text": "We evaluate our filter-sharing approach on four well-known image classification benchmarks: CIFAR-10, CIFAR-100, Tiny ImageNet and ImageNet. Details of these datasets can be found in §A.2. For this study, we work with two different architectures, one closely inspired by VGGNet (Simonyan & Zisserman, 2015), and the other by ResNet (He et al., 2016).
VGGNet-like Architectures. We base our VGGNet-like architectures on VGG-16, which consists of 5 convolutional blocks followed by 3 linear layers. Each block is followed by a max-pooling step and contains several convolutional layers with different channel counts (in order: 2 layers with 64 channels, 2 with 128, 3 with 256, 3 with 512 and 3 layers with 512 channels). By contrast, in our case, we define a single convolutional layer with a fixed number of input and output channels n, and then use it repeatedly in the same arrangement as above (see Table 3 in §A.3 for more details). We define four variants of this convolutional feature extractor for our study. E-VGGNet is our equivalent of VGGNet, with n channels per layer and no sharing between the layers: we use this as a baseline. Its shared counterpart, S-VGGNet, has the same structure, but iteratively applies a single convolutional layer. SL-VGGNet is an extended version of S-VGGNet that introduces linear layers (i.e. 1 × 1 convolutions) before each max-pooling operation to allow the input channels to be blended at those points in the pipeline. Finally, since all the convolutional layers in SL-VGGNet are the same (these exclude what we call the linear layers), we define a further variant of our architecture that simplifies the network design by setting the number of layers per block to a scalar ℓ. We experiment with ℓ ∈ {2, 3}, and name the corresponding networks SLℓ-VGGNet. Note that the predetermined number of channels n is a parameter of our architecture: we test several variants to find the best ones. We perform experiments on CIFAR-10/100 and Tiny ImageNet. 
For CIFAR-10, the 3 fully-connected layers that follow the feature extractor have 512, 512 and 10 output channels, respectively. For CIFAR-100, we use the same VGGNet-like architectures as for CIFAR-10, but the fully-connected layers have 1024, 1024 and 100 output channels, respectively. For Tiny ImageNet, we use a sequence of two fully-connected layers, with 2048 and 200 output channels respectively.\nResNet-like Architectures. We base our ResNet-like architectures on the models proposed in He et al. (2016). The simpler variants of these are built using ‘basic’ blocks that essentially consist of two equally-sized 3 × 3 convolutional layers and a skip connection (see Fig. 6). The deeper variants, meanwhile, are built using ‘bottleneck’ blocks, which similarly have a skip connection, but sandwich a single 3×3 convolutional layer between two 1×1 convolutional layers that decrease and then restore the number of channels to limit the number of free parameters. The network pipeline begins with a standalone convolutional layer that outputs a predetermined number of channels p. This is followed by a sequence of b blocks at a number of different scale levels (generally 4, but 3 for CIFAR variants). In the original architectures, each scale level (except the first) began with a strided convolutional layer that downsampled the image and doubled the number of channels. Since we want the convolutional layers in our architectures to have the same numbers of input and output channels (to facilitate sharing), we define an equivalent architecture, E-ResNet, that instead doubles the number of channels and performs downsampling using (respectively) a linear layer (i.e. 1 × 1 convolutions) and a max-pooling step at the end of each scale level. Note that, as in the original ResNet, the final scale level in our architecture ends with average pooling rather than max-pooling. Despite these modifications, the predictive performances of our E-ResNets closely match those of the original architectures. The shared variant of this architecture uses n channels for all scale levels and shares the weights across all the convolutional layers (excluding the linear layers). Since the architecture already contains the linear layers we were previously adding to allow blending of the input channels, we refer to it as SL-ResNet.\nFor CIFAR-10/100, the standalone convolutional layer uses a kernel size of 3× 3, and a p of 16 and 32 for each dataset, respectively. We experiment with b ∈ {3, 5, 7} ‘basic’ blocks per scale level, and terminate the network with a 10-way linear classifier for CIFAR-10 and a 100-way classifier for CIFAR-100. See Table 4 in §A.3 for details. For Tiny ImageNet and ImageNet, we base our ResNet-like architectures on ResNet-34 and ResNet-50. ResNet-34 is built using ‘basic’ blocks, whilst ResNet-50 uses ‘bottleneck’ blocks. For the latter, it is clearly not possible to share filters between the layers within a block, since they are of different dimensions, so we instead use multiple shared copies of a single block. Note that the shared variants of both these models, SL-ResNet34/50, keep the standalone convolutional layer unshared, since its kernel size is adjusted according to the dataset (3× 3 for Tiny ImageNet and 7× 7 for ImageNet). See Table 5 in §A.3 for details." }, { "heading": "5 RESULTS AND DISCUSSION", "text": "Earlier, Fig. 2 showed the accuracy vs. 
compression trade-off for S-VGGNet, relative to the original VGGNet (Simonyan & Zisserman, 2015), for different widths n of the shared convolutional layer. Here, Fig. 3 illustrates the improvements in accuracy due to the learned linear layers (i.e. the blending layers) on CIFAR-10, CIFAR-100 and Tiny ImageNet. Observably, the use of the linear layers provides greater benefit for datasets that involve discriminating between a larger number of classes, such as CIFAR-100 and Tiny ImageNet.
For CIFAR-10, CIFAR-100 and Tiny ImageNet, we compare the accuracies of the best-performing 'SL' variants of VGGNet with those of the baseline architecture (and competing compression methods for these datasets, where available) in Table 1. For CIFAR-10 (see Table 1b), we are able to achieve comparable classification accuracy to the VGGNet baseline using only n = 256 channels for our shared convolutional layer, which yields a compression factor of ≈ 17×. For CIFAR-100 (Table 1c), which has 10× more classes, we had to use n = 512 channels to achieve comparable accuracy, but this still yields a significant compression factor of 4.3. Higher compression factors can be achieved by reducing the number of channels, in exchange for some loss in accuracy. Evaluating our shared architecture on Tiny ImageNet (in Table 1d) evidences a similar trend in the results, with SL2-VGGNet (n = 512 channels) achieving an accuracy comparable to the non-shared baseline, whilst using only 23% of its parameters. Detailed accuracy and memory usage numbers for E-VGGNet, S-VGGNet and SL-VGGNet, for CIFAR-10, are in Table 1a, while the results for CIFAR-100 and Tiny ImageNet can be found in the appendix (see Table 6 in §A.5). We also evaluate our shared ResNet architecture (SL-ResNet) on Tiny ImageNet and ImageNet, with the results shown in Table 2 (the corresponding results for CIFAR-10 and CIFAR-100 can be found in the appendix, see Table 7 in §A.5). For Tiny ImageNet, our SL-ResNet34 (n = 512) variant is able to achieve a compression rate of 8.4 with only a negligible loss in accuracy. For ImageNet, the same variant similarly achieves a compression rate of 8.4 with respect to ResNet-50 and 21.6 with respect to Shared Wide ResNet (SWRN) by Savarese & Maire (2019). Whilst there is an accuracy trade-off, we achieve a greater compression rate than competing methods that achieve similar accuracies. Note that SWRN is able to achieve state-of-the-art levels of accuracy, but does not provide savings in the number of parameters." }, { "heading": "5.1 INTERPRETATION THROUGH VISUALISATION", "text": "Visualising the weights of the blending layers that we learn for the SL-variants of our approach reveals interesting patterns in the way in which these layers blend (or use) the input channels (see Fig. 4). For each layer, the continuous blue vertical lines signify that a subset of the input feature maps are barely used by any of the output channels, thus effectively suppressing the information they carry. (Interestingly, the location of the vertical blue lines changes from one scale to the next, thus showing that different subsets of input channels go unused at different scales.) This is significant, because it implies that the weights associated with the unused channels can be selectively pruned without affecting performance. Our next experiment with the pruning method of Han et al. (2015) shows how we can exploit this observation to significantly reduce the size of our shared networks."
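The pruning step itself can be sketched as below: following Han et al. (2015), we zero out the chosen fraction of the weights of a blending (1 × 1) layer with the smallest absolute magnitudes, with no retraining afterwards.

```python
import torch

@torch.no_grad()
def prune_by_magnitude(layer, fraction):
    w = layer.weight
    k = int(fraction * w.numel())
    if k > 0:
        threshold = w.abs().flatten().kthvalue(k).values  # k-th smallest |w|
        w.mul_((w.abs() > threshold).float())  # keep only weights above it
```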
}, { "heading": "5.2 COMPLEMENTARITY WITH OTHER COMPRESSION SCHEMES", "text": "Our best-performing SL variants have a relatively small number of parameters in the convolutional layers, but a relatively high number of parameters in the linear layers. Tables 2a and 2b show how the parameter count for these variants increases with the number of channels n and the depth (34 to 50). Notably, using bottleneck blocks, as we do for our SL-ResNet50 variants, also significantly increases the parameter count. As implied by our visualisations in the previous section, we would expect serious reductions in the number of parameters in the linear layers to be possible without significantly reducing accuracy. We thus experiment with applying the magnitude-based weight pruning approach of Han et al. (2015) to the linear layers to see whether this expectation is borne out in practice. We first select a proportion of the parameters to prune, then identify those weights that have the lowest absolute magnitude and set them to 0. We then evaluate on the validation split of the dataset. Note that we do not retrain the network after pruning. Our results (see Figure 5) show that we can remove a significant fraction of these blending weights before starting to see a noticeable drop in the accuracy of the network." }, { "heading": "6 CONCLUSION", "text": "In this paper, we leverage the regularities in visual signals across different scale levels to successfully extend the filter-sharing paradigm to an entire convolutional pipeline for feature extraction. In particular, we instantiate a single convolutional layer and apply it iteratively to simulate conventional VGGNet-like and ResNet-like architectures. We evaluate our shared architectures on four standard benchmarks – CIFAR-10, CIFAR-100, Tiny ImageNet and ImageNet – and achieve compression rates that are higher than existing sharing-based methods that have equivalent performance. We further show that even higher compression rates, with little additional loss in performance, can be achieved by combining our method with the magnitude-based weight pruning approach of Han et al. (2015). Study of our complementarity to more structured pruning techniques targeting complete filters and channels is reserved for future work. We conclude with two final observations. Firstly, our use of blending layers and a parameter to tune the width of the shared convolutional layer n makes it easy to adjust the architecture so as to achieve a desired trade-off between compression rate C and accuracy. Secondly, there are interesting connections between our work and the idea of energy-based pruning explored in (Yang et al., 2017), where the authors note that a significant fraction of the energy demands of deep network processing come from transferring weights to and from the file system. Our approach bypasses this bottleneck by using the same compact set of weights in an iterative manner. We aim to further investigate this aspect of our method in subsequent work." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 ADDITIONAL RELATED WORK", "text": "Pruning methods seek to reduce the size of a network by removing (either physically or implicitly) some of a network’s weights (LeCun et al., 1990; Srinivas & Babu, 2015; Yang et al., 2017; Aghasi et al., 2017; Lee et al., 2019), filters (Leroux et al., 2016; Luo et al., 2017) or neurons (Ardakani et al., 2017). 
Notably, reducing the computational cost (rather than just the memory usage) of network architectures that are pruned in an unstructured manner requires the use of suitable sparse inference schemes.\nQuantization methods keep the number of independent parameters in a network the same, but reduce the bit-depth of the parameters and activations (Wu et al., 2016; Hubara* et al., 2016; 2018) to limit the memory requirements of the network.\nTensorization/tensor decomposition methods propose low-rank approximations to high-dimensional neural matrices in order to downsize trained models. Early CNN architectures such as AlexNet (Krizhevsky et al., 2012) and VGGNet (Simonyan & Zisserman, 2015) contained the bulk of their weights in the fully-connected layers. As a result, various rank reduction approaches exclusively targeted the matrices in these layers (Lin et al., 2016; Novikov et al., 2015). The deeper/wider (He et al., 2016; Zagoruyko & Komodakis, 2016) these networks have become, the more the balance of weights has shifted towards the convolutional layers, giving rise to more generalised tensor decomposition schemes (Garipov et al., 2016).\nKnowledge distillation (‘teacher/student’) methods aim to transfer the knowledge present in a cumbersome teacher model to a lightweight student model, without losing the teacher’s ability to generalise well. An early approach by Bucilă et al. (2006) used a heavyweight ensemble to label a large set of unlabelled data, and then used this to train a compact model. Much later, Ba & Caruana (2014) proposed an alternative method that trains a shallow network to directly mimic the logits of a deep model. Subsequent methods have independently shown that training the student using temperature-scaled softmax scores (Hinton et al., 2014) or Gaussian-blurred logits (Sau & Balasubramanian, 2016) of the teacher can help with regularisation. Other methods in this line of work have proposed to train deep, thin neural networks using auxiliary or intermediate cues such as hidden layer outputs (Romero et al., 2015) or post-hoc attention maps (Zagoruyko & Komodakis, 2017).\nCustom architecture methods, rather than trying to compress or distil knowledge from existing networks, propose entirely new network architectures that are smaller than existing models but still capable of providing excellent performance. Good examples include SqueezeNet (Iandola et al., 2016) and MobileNets (Howard et al., 2017). SqueezeNet tries to use 1× 1 rather than 3× 3 filters to reduce the parameter count, and tries to limit the number of input channels to those 3 × 3 filters it does use. MobileNets follow a similar tack and factorise traditional convolutional mappings into a depth-wise separable convolution (to process the spatial context) followed by a 1 × 1 convolution (to process the channels jointly). Two adjustable hyperparameters, α and ρ, pertaining to the intermediate feature resolution and the input spatial resolution, allow further resizing of the network.\nHybrid methods implement some combination of the compression schemes discussed above (Han et al., 2016; Yunpeng et al., 2017). Whilst our approach belongs to the category of filter-sharing schemes elaborated above, we also demonstrate its complementarity and compatibility with the magnitude-based weight pruning method of Han et al. (2015)." }, { "heading": "A.2 DATASETS", "text": "CIFAR-10 (Krizhevsky, 2009) consists of 60, 000 32×32 colour images, each labelled as belonging to one of 10 mutually exclusive classes. 
Each class contains 6, 000 images, of which 5, 000 are earmarked for training, and 1, 000 for testing (i.e. there are 50, 000 train images and 10, 000 test images overall). CIFAR-100 consists of the same 60, 000 32× 32 images that are in CIFAR-10, but this time they are evenly split into 100 classes, each containing 500 training images and 100 testing images. Tiny ImageNet2 is essentially a smaller, lower-resolution variant of the ImageNet (Russakovsky et al., 2015) dataset. It consists of 120, 000 64× 64 images, evenly split into 200 classes. Each class contains 500 training images, 50 validation images and 50 test images. ImageNet (Russakovsky et al., 2015) was introduced as a large-scale image classification benchmark consisting of high-resolution photographs in 1, 000 visual categories from an even larger ontology of natural concepts (WordNet). It consists of approximately 1M training images, divided into 1, 000 disjoint object categories. Another set of 50, 000 images, evenly split into 1, 000 classes, forms the validation set. The accuracy results we report for ImageNet were obtained on this validation set." }, { "heading": "A.3 NETWORK ARCHITECTURES", "text": "Table 3 details the structure of our VGGNet-like architectures, whilst Tables 4 and 5 show our ResNet-like architectures (respectively used for CIFAR-10/100 and Tiny ImageNet/ImageNet). The notation is common to the tables and is as follows:\n2https://tiny-imagenet.herokuapp.com\nconv1-x A 1 × 1 convolutional layer with x output feature channels. The core of our SL variants. We use this layer to allow shared convolutions at different scale levels to observe different blends of the feature channels output by the previous scale. Its number of input feature channels is equal to x, except in E-ResNet-50, where we use it to increase the number of channels between scale levels, and in the first scale of SL-ResNet-50, where we use it to increase the number of channels from x to 4x, to account for the expansion factor of the bottleneck blocks.\nconv3-x A 3 × 3 convolutional layer with x output feature channels. The number of input feature channels depends on the specific network variant: for the baselines it is equivalent to the number of output feature channels of the previous layer (or 3 for the very first layer), whilst for the E/S/SL-variants, it is equivalent to the number of output feature channels x. The stride is 1 unless otherwise specified.\nconv7-x A 7 × 7 convolutional layer with x output feature channels. As this layer is only used as the first layer in the ResNet variants of our architectures, it always has 3 input channels. Its stride is 1 when training a ResNet-like architecture for Tiny ImageNet, and 2 when training for ImageNet.\nbasicblock-x A simple skip connection-based block, used in the ResNet-like architectures. As in He et al. (2016), it consists of two 3 × 3 convolutional layers and a skip connection. In our shared architectures, the two convolutional layers share the same parameters. See Figures 6a and 6b for details of the internal architectures of the non-shared and shared block variants.\nbottleneck-x A skip connection-based block with a bottleneck architecture, consisting of a 1 × 1 convolution (used to reduce the number of feature channels), followed by a 3 × 3 convolutional layer, and finally by another 1 × 1 convolution (restoring the original number of feature channels). For this reason it has 4x input and output channels. 
Figures 6c and 6d detail the internal architectures of the standard and shared variants of the bottleneck blocks (respectively). Crucially, as mentioned in the main paper – and unlike the basicblock architectures described above – the bottleneck block is shared as a single entity, owing to the presence of differently-shaped convolutions.\navgpool-x An average pooling layer operating on patches of size x× x. maxpool-x A max-pooling layer operating on patches of size x× x. FC-x A fully-connected layer with x output channels. The number of its input channels is equal to\nthe number of outputs of the previous layer (flattened in the case the previous layer was a convolutional layer).\nEach spatial convolution (conv3 and conv7) is always followed by a BatchNorm layer and a ReLu. We denote in bold the convolutional layers or blocks that are shared in our S and SL architectures. The parameters of the normalisation layers are never shared, even when the corresponding convolutional weights are shared as part of an S or SL architecture. Fully-connected layers (except the very last one in each architecture) are followed by a Dropout layer." }, { "heading": "A.4 TRAINING PROTOCOL", "text": "To train our networks on the CIFAR datasets, we perform some basic data augmentation steps: (1) we randomly decide whether or not to flip the input images horizontally, (2) we pad the 32 × 32 images with 4 pixels and then select a random crop of size 32 × 32, and finally (3) we normalise the RGB values to have zero mean and unit norm. During the evaluation phase, we just perform the normalisation step. We train our networks for 200 epochs, using the SGD optimiser with momentum 0.9 and weight decay 5e−4. We use an initial learning rate of 0.05 and decrease it by a factor of 2 when the error plateaus.\nTo train our networks on the Tiny ImageNet and ImageNet datasets, we perform a similar data augmentation: (1) we first extract a crop of a random size that is then resized to the input resolution of our network (56× 56 for Tiny ImageNet and 224× 224 for ImageNet), (2) we randomly decide whether or not to perform a horizontal flip of the crop, and finally (3) we normalise the crop. During the evaluation phase, we (1) resize the image to a standard resolution (64×64 for Tiny ImageNet and 256×256 for ImageNet), (2) extract ten crops (of size 56×56 for Tiny ImageNet and 224×224 for ImageNet) from the corners, the centre and their horizontally-mirrored variants (as in Krizhevsky et al. (2012)), and finally (3) normalise the crops. We train our networks for 100 epochs, using the SGD optimiser with momentum 0.9 and weight decay 5e−4. We use an initial learning rate of 0.01 for the VGGNet-like architectures on Tiny ImageNet, 0.05 for the ResNet-like architectures on Tiny ImageNet, and 0.1 for the experiments on ImageNet. Regardless of the initial value, we decrease it by a factor of 10 when the error plateaus." }, { "heading": "A.5 ADDITIONAL RESULTS", "text": "" }, { "heading": "A.5.1 EVALUATION ON CLASSIFICATION BENCHMARKS", "text": "Table 6 presents detailed accuracy and memory usage numbers for E-VGGNet, S-VGGNet and SLVGGNet architectures trained on CIFAR-100 and Tiny ImageNet (results for CIFAR-10 can be found in the main paper, in Table 1a in §5). Similar results for the ‘E’ and ‘SL’ variants of ResNet trained on CIFAR-10 and CIFAR-100 can be found in Table 7. 
Finally, an accuracy and compression rate comparison of our top-performing SL3-ResNet variant with existing baselines and competing compression methods for CIFAR-10 is shown in Table 8.\nA.5.2 INTERPRETATION THROUGH VISUALISATION\nIn Fig. 7, we show the linear layers for our different variants of VGGNet, trained on three different datasets – CIFAR-10, CIFAR-100 and Tiny ImageNet. As highlighted by the continuous blue vertical lines, it is notable that in each layer, some of the input channels barely contribute towards any of the output channels. Given this, we posit that a significant proportion of the weights in the linear layers (those that apply to the least important input channels) can be pruned without affecting the accuracy in any significant manner. Preliminary results verifying this conjecture are discussed in §5.2. Interestingly, the changing locations of these blue lines reflect the changing importance of different input channels at different scale levels.\nSimilar results for four different ‘SL’ variants of ResNet, trained on three different datasets – CIFAR-10, CIFAR-100 and Tiny ImageNet – are presented in Fig. 8. As with our visualisations for ‘SL-VGGNet’, the continuous blue vertical lines in Figs. 8b, 8c and 8d highlight that some input channels make only a minimal contribution to any of the output channels in each layer. Once again, we believe that the weights that are applied to these less-important input channels can be pruned without affecting the accuracy in any significant manner. Some indicative results that support this hypothesis can be found in §5.2. By contrast, the linear layers in Fig. 8a exhibit somewhat less regularity. From Table 7a, SL7-ResNet yields both the highest accuracy (93.2%) and the highest compression rate (3.8×) for that accuracy amongst all the variants. Thus, one possible explanation for this regular distribution of linear layer weights is that the model is operating at full capacity and is using all the channels in a balanced way to achieve optimal performance." } ]
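To make the basicblock-x sharing rule of §A.3 concrete, here is a minimal PyTorch sketch (ours; details such as bias handling are assumptions): the same 3×3 convolution is applied twice inside the block, while each application keeps its own BatchNorm, matching the rule that normalisation parameters are never shared.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedBasicBlock(nn.Module):
    """basicblock-x with weight sharing: one 3x3 convolution applied twice,
    with separate BatchNorms for the two applications."""
    def __init__(self, x: int):
        super().__init__()
        self.conv = nn.Conv2d(x, x, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(x)
        self.bn2 = nn.BatchNorm2d(x)

    def forward(self, inp):
        out = F.relu(self.bn1(self.conv(inp)))
        out = self.bn2(self.conv(out))   # second use of the *same* weights
        return F.relu(out + inp)         # skip connection

block = SharedBasicBlock(64)
print(block(torch.randn(2, 64, 16, 16)).shape)  # torch.Size([2, 64, 16, 16])
```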
2,019
null
SP:9a703a4562558d32a372047cd46cfe57b3695d38
[ "This paper introduces a new method for uncertainty estimation which utilizes randomly initialized networks. Essentially, instead of training a single predictor that outputs means and uncertainty estimates together, authors propose to have two separate models: one that outputs means, and one that outputs uncertainties. The later one consists of two networks: a randomly initialized “prior” which is fixed and is not trained, and a “predictor”, which is then trained to predict the output of the randomly initialized “prior” applied to the training samples. ", "This work introduces a simple technique to obtain uncertainty estimates for deep neural networks. This is achieved by having a set of random networks (i.e. neural networks where their parameters are randomly initialized) and then computing an uncertainty value based on the difference in the predictions between those random networks and networks that are trained to mimic them on a finite collection of points. The authors further show that this method results into uncertainties that are conservative, meaning that they are higher than the uncertainty of a hypothetical posterior, and concentrate, i.e. they converge towards zero when we get more and more data. The authors further draw connections to ensemble methods and discuss how such a method can be effectively realized in practice. They then evaluate their approach on an out-of-distribution detection task, they measure the calibration of their uncertainty estimates and finally perform a small ablation study for their concentration result. " ]
Obtaining high-quality uncertainty estimates is essential for many applications of deep neural networks. In this paper, we theoretically justify a scheme for estimating uncertainties, based on sampling from a prior distribution. Crucially, the uncertainty estimates are shown to be conservative in the sense that they never underestimate a posterior uncertainty obtained by a hypothetical Bayesian algorithm. We also show concentration, implying that the uncertainty estimates converge to zero as we get more data. Uncertainty estimates obtained from random priors can be adapted to any deep network architecture and trained using standard supervised learning pipelines. We provide experimental evaluation of random priors on calibration and out-of-distribution detection on typical computer vision tasks, demonstrating that they outperform deep ensembles in practice.
[ { "affiliations": [], "name": "Kamil Ciosek" }, { "affiliations": [], "name": "Vincent Fortuin" }, { "affiliations": [], "name": "Ryota Tomioka" }, { "affiliations": [], "name": "Katja Hofmann" }, { "affiliations": [], "name": "Richard Turner" } ]
[ { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Zhao Song" ], "title": "A convergence theory for deep learning via overparameterization", "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Marc Bellemare", "Sriram Srinivasan", "Georg Ostrovski", "Tom Schaul", "David Saxton", "Remi Munos" ], "title": "Unifying count-based exploration and intrinsic motivation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Charles Blundell", "Julien Cornebise", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Weight uncertainty in neural networks", "venue": "arXiv preprint arXiv:1505.05424,", "year": 2015 }, { "authors": [ "Nicolas Brosse", "Alain Durmus", "Eric Moulines" ], "title": "The promises and pitfalls of stochastic gradient langevin dynamics", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": "arXiv preprint arXiv:1810.12894,", "year": 2018 }, { "authors": [ "Ashwin Mark Carvalho" ], "title": "Predictive Control under Uncertainty for Safe Autonomous Driving: Integrating DataDriven Forecasts with Control Design", "venue": "PhD thesis, UC Berkeley,", "year": 2016 }, { "authors": [ "Zezhou Cheng", "Matheus Gadelha", "Subhransu Maji", "Daniel Sheldon" ], "title": "A bayesian perspective on the deep image prior", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Amit Daniely", "Roy Frostig", "Yoram Singer" ], "title": "Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity", "venue": "In Advances In Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Bruno De Finetti" ], "title": "La prévision: ses lois logiques, ses sources subjectives", "venue": "In Annales de l’institut Henri Poincaré,", "year": 1937 }, { "authors": [ "Simon Du", "Jason Lee", "Haochuan Li", "Liwei Wang", "Xiyu Zhai" ], "title": "Gradient descent finds global minima of deep neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Bradley Efron", "Robert J. Tibshirani" ], "title": "An Introduction to the Bootstrap", "venue": "SIAM Review,", "year": 1994 }, { "authors": [ "Andrew YK Foong", "David R Burt", "Yingzhen Li", "Richard E Turner" ], "title": "Pathologies of factorised gaussian and mc dropout posteriors in bayesian neural networks", "venue": null, "year": 1909 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In Proceedings of the 33nd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Yarin Gal", "Jiri Hron", "Alex Kendall" ], "title": "Concrete dropout. 
In Advances in Neural Information Processing Systems", "venue": "Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Adri Garriga-Alonso", "Carl Edward Rasmussen", "Laurence Aitchison" ], "title": "Deep convolutional networks as shallow gaussian processes", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Tamir Hazan", "Tommi Jaakkola" ], "title": "Steps toward deep kernel methods from infinite neural networks", "venue": "arXiv preprint arXiv:1508.05133,", "year": 2015 }, { "authors": [ "Rein Houthooft", "Xi Chen", "Yan Duan", "John Schulman", "Filip De Turck", "Pieter Abbeel" ], "title": "Vime: Variational information maximizing exploration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Jiri Hron", "Alexander G. de G. Matthews", "Zoubin Ghahramani" ], "title": "Variational bayesian dropout: pitfalls and fixes", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Edwin T Jaynes" ], "title": "Probability theory: The logic of science", "venue": "Cambridge university press,", "year": 2003 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Nicolas Le Roux", "Yoshua Bengio" ], "title": "Continuous neural networks", "venue": "In Artificial Intelligence and Statistics,", "year": 2007 }, { "authors": [ "Yann LeCun" ], "title": "The mnist database of handwritten digits. http://yann", "venue": "lecun. 
com/exdb/mnist/,", "year": 1998 }, { "authors": [ "Jaehoon Lee", "Jascha Sohl-dickstein", "Jeffrey Pennington", "Roman Novak", "Sam Schoenholz", "Yasaman Bahri" ], "title": "Deep neural networks as gaussian processes", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Christian Leibig", "Vaneeda Allken", "Murat Seçkin Ayhan", "Philipp Berens", "Siegfried Wahl" ], "title": "Leveraging uncertainty information from deep neural networks for disease detection", "venue": "Scientific reports,", "year": 2017 }, { "authors": [ "Ulrike von Luxburg", "Olivier Bousquet" ], "title": "Distance-based classification with lipschitz functions", "venue": "Journal of Machine Learning Research,", "year": 2004 }, { "authors": [ "Andrey Malinin", "Mark Gales" ], "title": "Predictive uncertainty estimation via prior networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "AGDG Matthews", "M Rowland", "J Hron", "RE Turner", "Z Ghahramani" ], "title": "Gaussian process behaviour in wide deep neural networks", "venue": "In Proceedings of the 6th International Conference on Learning Representations.,", "year": 2018 }, { "authors": [ "Rhiannon Michelmore", "Marta Kwiatkowska", "Yarin Gal" ], "title": "Evaluating uncertainty quantification in end-to-end autonomous driving control", "venue": "arXiv preprint arXiv:1811.06817,", "year": 2018 }, { "authors": [ "Mehryar Mohri", "Afshin Rostamizadeh", "Ameet Talwalkar" ], "title": "Foundations of machine learning", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Kevin P Murphy" ], "title": "Machine learning: a probabilistic perspective", "venue": "MIT press,", "year": 2012 }, { "authors": [ "Eric Nalisnick", "Jose Miguel Hernandez-Lobato", "Padhraic Smyth" ], "title": "Dropout as a structured shrinkage prior", "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Radford M Neal" ], "title": "Bayesian learning for neural networks", "venue": "Phd Thesis,", "year": 1996 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y. Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NIPS Workshop on Deep Learning and Unsupervised Feature Learning", "year": 2011 }, { "authors": [ "Roman Novak", "Lechao Xiao", "Yasaman Bahri", "Jaehoon Lee", "Greg Yang", "Daniel A. Abolafia", "Jeffrey Pennington", "Jascha Sohl-dickstein" ], "title": "Bayesian deep convolutional networks with many channels are gaussian processes", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "K. Osawa", "S. Swaroop", "A. Jain", "R. Eschenhagen", "R.E. Turner", "R. Yokota", "M.E. Khan" ], "title": "Practical deep learning with bayesian principles", "venue": "In The 33rd Conference on Neural Information Processing Systems (NeurIPS 2019),", "year": 2019 }, { "authors": [ "Ian Osband", "John Aslanides", "Albin Cassirer" ], "title": "Randomized prior functions for deep reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ian Osband", "Benjamin Van Roy", "Daniel J. 
Russo", "Zheng Wen" ], "title": "Deep exploration via randomized value functions", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Georg Ostrovski", "Marc G Bellemare", "Aäron van den Oord", "Rémi Munos" ], "title": "Count-based exploration with neural density models", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "F. Pedregosa", "G. Varoquaux", "A. Gramfort", "V. Michel", "B. Thirion", "O. Grisel", "M. Blondel", "P. Prettenhofer", "R. Weiss", "V. Dubourg", "J. Vanderplas", "A. Passos", "D. Cournapeau", "M. Brucher", "M. Perrot", "E. Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "Proceedings of the 31st International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Christian Robert" ], "title": "The Bayesian choice: from decision-theoretic foundations to computational implementation", "venue": "Springer Science & Business Media,", "year": 2007 }, { "authors": [ "Donald B Rubin" ], "title": "The bayesian bootstrap", "venue": "The annals of statistics, pp", "year": 1981 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "The journal of machine learning research,", "year": 1929 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Deep image prior", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Max Welling", "Yee W Teh" ], "title": "Bayesian learning via stochastic gradient langevin dynamics", "venue": "In Proceedings of the 28th international conference on machine learning", "year": 2011 }, { "authors": [ "Christopher KI Williams" ], "title": "Computing with infinite networks", "venue": "In Advances in neural information processing systems,", "year": 1997 }, { "authors": [ "Christopher KI Williams", "Carl Edward Rasmussen" ], "title": "Gaussian processes for machine learning", "venue": null, "year": 2006 }, { "authors": [ "Greg Yang" ], "title": "Wide feedforward or recurrent neural networks of any architecture are gaussian processes", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning has achieved huge success in many applications. In particular, increasingly often, it is used as a component in decision-making systems. In order to have confidence in decisions made by such systems, it is necessary to obtain good uncertainty estimates, which quantify how certain the network is about a given output. In particular, if the cost of failure is large, for example where the automated system has the capability to accidentally hurt humans, the availability and quality of uncertainty estimates can determine whether the system is safe to deploy at all (Carvalho, 2016; Leibig et al., 2017; Michelmore et al., 2018). Moreover, when decisions are made sequentially, good uncertainty estimates are crucial for achieving good performance quickly (Bellemare et al., 2016; Houthooft et al., 2016; Ostrovski et al., 2017; Burda et al., 2018).\nBecause any non-Bayesian inference process is potentially sub-optimal (De Finetti, 1937), these uncertainty estimates should ideally be relatable to Bayesian inference with a useful prior. Deep ensembles (Lakshminarayanan et al., 2017), one of the most popular methods available for uncertainty estimation in deep networks today, struggle with this requirement. While deep ensembles can be related (Rubin, 1981) to Bayesian inference in settings where the individual models are trained on subsets of the data, this is not how they are used in practice. In order to improve data efficiency, all ensembles are typically trained using the same data (Lakshminarayanan et al., 2017), resulting in a method which does not have a theoretical justification. Moreover, deep ensembles can give overconfident uncertainty estimates in practice. On the other hand, Monte-Carlo dropout can be viewed (Gal & Ghahramani, 2016) as a certain form of Bayesian inference. However, doing so requires requires either a limit to be taken or a generalization of variational inference to a quasi-KL divergence (Hron et al., 2018). In practice, MC dropout can give arbitrarily overconfident estimates (Foong et al., 2019). More broadly, a category of approaches, known as Bayesian Neural Networks (Blundell et al., 2015; Welling & Teh, 2011; Neal, 1996), maintains a distribution over the weights of the neural network. These methods have a sound Bayesian justification, but training them is both difficult and carries an accuracy penalty, particularly for networks with convolutional architectures (Osawa et al., 2019). Moreover, tuning BNNs is hard and achieving a good approximation to the posterior is difficult (Brosse et al., 2018).\nWe use another way of obtaining uncertainties for deep networks, based on fitting random priors (Osband et al., 2018; 2019). Random priors are easy to train and were found to work very well in practice (Burda et al., 2018). To obtain the uncertainty estimates,\nAffiliations: 1. Microsoft Research Cambridge; 2. ETH Zurich; 3. University of Cambridge. The second author was an intern at Microsoft when contributing to this work.\nwe first train a predictor network to fit a prior. Two examples of prior-predictor pairs are shown in the top two plots of Figure 1.Faced with a novel input point, we obtain an uncertainty (Figure 1, bottom plot) by measuring the error of the predictor network against this pattern. Intuitively, these errors will be small close to the training points, but large far from them. The patterns themselves are drawn from randomly initialized (and therefore untrained) neural networks. 
While this way of estimating uncertainties was known before (Osband et al., 2019), it did not have a theoretical justification beyond Bayesian linear regression, which is too limiting for modern applications.\nContributions We provide a sound theoretical framework for obtaining uncertainty estimates by fitting random priors, a method previously lacking a principled justification. Specifically, we justify estimates of the uncertainty in the output of neural networks with any architecture. In particular, we show in Lemma 1 and Proposition 1 that these uncertainty estimates are conservative, meaning they are never more certain than a Bayesian algorithm would be. Moreover, in Proposition 2 we show concentration, i.e. that the uncertainties become zero with infinite data. Empirically, we evaluate the calibration and out-of-distribution performance of our uncertainty estimates on typical computer vision tasks, showing a practical benefit over deep ensembles and MC dropout." }, { "heading": "2 PRELIMINARIES", "text": "We are going to reason about uncertainty within the formal framework of stochastic processes. We now introduce the required notation.\nA stochastic process is a collection of random variables $\{f(x)\}$. We consider processes where $x \in \mathbb{R}^K$ and the random variable $f(x)$ takes values in $\mathbb{R}^M$. A stochastic process has exchangeable outputs if the distribution does not change when permuting the $M$ entries in the output vector. Allowing a slight abuse of notation, we denote the finite-dimensional distribution of the process $\{f(x)\}$ for the set $X = \{x_i\}_{i=1,\ldots,N}$ as $f(x_1, \ldots, x_N) = f(X)$. In practice, the finite-dimensional distribution reflects the idea of restricting the process to points $x_1, \ldots, x_N$ and marginalizing over all the other points. Inference can be performed on stochastic processes similarly to probability distributions. In particular, we can start with some prior process $\{f(x)\}$, observe a set of $N$ training points $X = \{x_i\}_{i=1,\ldots,N}$ and labels $y = \{y_i\}_{i=1,\ldots,N}$, and then consider the posterior process $\{f_{Xy}(x)\}$, whose finite-dimensional distributions are given by $f_{Xy}(x^\star_1, \ldots, x^\star_{N'}) = f(x^\star_1, \ldots, x^\star_{N'} \mid x_1, \ldots, x_N, y_1, \ldots, y_N)$ for any set of testing points $x^\star_1, \ldots, x^\star_{N'}$. We use subscripts to denote conditioning on the dataset throughout the paper. We denote the variance of $f_{Xy}(x_\star)$ with $\sigma^2_{Xf}(x_\star)$. A stochastic process is called Gaussian if all its finite-dimensional distributions are Gaussian. Given a test point $x_\star$, we denote the posterior GP mean with $\mu_{Xy}(x_\star)$ and the posterior GP variance with $\sigma^2_X(x_\star)$. We provide more background on GPs in Appendix D." }, { "heading": "3 ESTIMATING UNCERTAINTY FROM RANDOM PRIORS", "text": "Intuition Uncertainties obtained from random priors have an appealing intuitive justification. Consider the networks in the top part of Figure 1. We start with a randomly initialized prior network, shown in red. Whenever we see a datapoint, we train the predictor network (green) to match this prior. Uncertainties can then be obtained by considering the squared error between the prior and the predictor at a given point. An example uncertainty estimate is shown as the shaded blue area in the bottom of Figure 1. While it may at first seem that the squared error is a poor measure of uncertainty because it can become very small by random chance, we formally show in Section 4.1 that this is very improbable. In Section 4.2, we show that this error goes down to zero as we observe more data.
Similarly to GP inference, uncertainty estimation in our framework does not depend on the regression label. The prediction mean (blue curve in the bottom part of Figure 1) is obtained by fitting a completely separate neural network. In Section 6, we discuss how this framework avoids the overconfidence characteristic of deep ensembles (Lakshminarayanan et al., 2017).\nPrior The process of obtaining network uncertainties involves randomly initialized prior networks, which are never trained. While this may at first appear very different from the way deep learning is normally done, these random networks are a crucial component of our method. We show in Section 4.1 that the random process that corresponds to initializing these networks can be interpreted as a prior of a Bayesian inference procedure. A prior conveys the information about how the individual data points are related. The fact that we are using random networks has both practical and theoretical benefits. Practically, since the prior does not depend on the data, there is no way that it can overfit. The use of random priors also has strong empirical support – randomly initialized networks have been recently used as priors to obtain state-of-the-art performance on computer vision tasks (Ulyanov et al., 2018; Cheng et al., 2019). Theoretically, using random priors satisfies the likelihood principle (Robert, 2007). Moreover, random priors can be viewed as a safe choice since they make the minimum reasonable assumption that the network architecture is appropriate for the task. In fact, whenever deep learning is used, with or without uncertainty estimates, practitioners are already implicitly making that assumption.\nAlgorithm 1 Training the predictors.\nfunction TRAIN-UNCERTAINTIES(X)\n    for i = 1 . . . B do\n        f^i ∼ {f(x)}    ▷ sample a random prior\n        h_{Xf^i} ← FIT(X, f^i(X))\n    end for\n    return f^i, h_{Xf^i} for all i\nend function\n\nfunction FIT(X, f^i(X))\n    L(h) ≜ Σ_{x ∈ X} ‖f^i(x) − h(x)‖²\n    h_{Xf^i} ← OPTIMIZE(L)    ▷ SGD or similar\n    return h_{Xf^i}    ▷ return trained predictor\nend function\nAlgorithm The process of training the predictor networks is shown in Algorithm 1. The function TRAIN-UNCERTAINTIES first generates random priors, i.e. neural networks with random weights. In our notation, this corresponds to sampling functions from the prior process $\{f(x)\}$. These priors, evaluated at points from the dataset $X = \{x_i\}_{i=1,\ldots,N}$, are then used as labels for supervised learning, performed by the function FIT. After training, when we want to obtain an uncertainty estimate at a given test point $x_\star$, we use the formula\n$\hat{\sigma}^2(x_\star) = \max\big(0,\ \hat{\sigma}^2_\mu(x_\star) + \beta \hat{v}_\sigma(x_\star) - \sigma^2_A\big). \quad (1)$\nHere, the quantity $\hat{\sigma}^2_\mu$ is the sample mean of the squared error. We will show in Section 4 that it is an unbiased estimator of a variable that models the uncertainty. On the other hand, $\hat{v}_\sigma$ is the sample-based estimate of the standard deviation of the squared error across bootstraps, needed to quantify our uncertainty about what the uncertainty is. The hyper-parameter $\beta$ controls the degree to which this uncertainty is taken into account. Formally, the quantities are defined as\n$\hat{\sigma}^2_\mu(x_\star) \triangleq \sum_{i=1}^{B} \tfrac{1}{MB} \|f(x_\star) - h_{Xf^i}(x_\star)\|^2, \quad (2)$\n$\hat{v}_\sigma(x_\star) \triangleq \sqrt{\sum_{i=1}^{B} \tfrac{1}{B} \big(\hat{\sigma}^2_\mu(x_\star) - \tfrac{1}{M} \|f(x_\star) - h_{Xf^i}(x_\star)\|^2\big)^2}. \quad (3)$\nIn the above equations, $B$ is the number of prior functions and each prior and predictor network has $M$ outputs. Because the predictors are trained independently, uncertainty estimates obtained from each of the $B$ predictor-prior pairs are independent. We defer the discussion of the details of the network architecture to Section 5; a minimal code sketch of Algorithm 1 and equation 1 is given below.
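The sketch below is our own rendering of Algorithm 1 and equations (1)–(3): the small fully-connected prior, the optimizer settings and the training length are placeholder choices, not the convolutional setup the paper actually uses (see Section 5 and Appendix A).

```python
import torch
import torch.nn as nn

def make_net(out_dim=512, extra_layers=0):
    layers = [nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU()]
    for _ in range(extra_layers):              # predictor gets extra capacity
        layers += [nn.Linear(128, 128), nn.ReLU()]
    return nn.Sequential(*layers, nn.Linear(128, out_dim))

def train_uncertainties(X, B=1, M=512, steps=2000):
    priors, predictors = [], []
    for _ in range(B):
        prior = make_net(M)                    # random prior, never trained
        for p in prior.parameters():
            p.requires_grad_(False)
        target = prior(X)
        h = make_net(M, extra_layers=2)
        opt = torch.optim.Adam(h.parameters(), lr=1e-4)
        for _ in range(steps):                 # FIT: regress onto the prior
            loss = ((h(X) - target) ** 2).sum(dim=1).mean()
            opt.zero_grad(); loss.backward(); opt.step()
        priors.append(prior); predictors.append(h)
    return priors, predictors

def uncertainty(x, priors, predictors, beta=1.0, sigma2_A=0.0):
    with torch.no_grad():
        M = priors[0](x).shape[1]
        errs = torch.stack([((f(x) - h(x)) ** 2).sum(dim=1) / M
                            for f, h in zip(priors, predictors)])  # eq. (2) terms
        mu = errs.mean(dim=0)                                      # eq. (2)
        v = ((errs - mu) ** 2).mean(dim=0).sqrt()                  # eq. (3)
        return (mu + beta * v - sigma2_A).clamp(min=0.0)           # eq. (1)

X = torch.randn(20, 2)
priors, predictors = train_uncertainties(X, B=3)
print(uncertainty(X, priors, predictors))                       # small near data
print(uncertainty(torch.randn(5, 2) + 5.0, priors, predictors)) # larger far away
```

Far from the training data the predictors have never been supervised, so the squared error — and hence the estimated uncertainty — stays large, mirroring the behaviour in Figure 1.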
Our experiments (Section 7) show that it is often sufficient to use B = 1 in practice." }, { "heading": "4 THEORETICAL RESULTS", "text": "In Section 3, we introduced a process for obtaining uncertainties in deep learning. We now seek to provide a formal justification. We define the expected uncertainties as\n$\tilde{\sigma}^2_\mu(x_\star) \triangleq \mathbb{E}_f\big[\hat{\sigma}^2_\mu(x_\star)\big] = \mathbb{E}_f\big[\tfrac{1}{M}\|f(x_\star) - h_{Xf}(x_\star)\|^2\big]. \quad (4)$\nIn other words, $\tilde{\sigma}^2_\mu$ is the expected version of the sample-based uncertainties $\hat{\sigma}^2_\mu(x_\star)$ introduced in equation 2. Since Bayesian inference is known to be optimal (De Finetti, 1937; Jaynes, 2003; Robert, 2007), the most appealing way of justifying the uncertainty estimates $\tilde{\sigma}^2_\mu$ and $\hat{\sigma}^2_\mu$ is to relate them to a Bayesian posterior $\sigma^2_{Xf}(x_\star)$. We do this in two stages. First, in Section 4.1, we prove that the obtained uncertainties are larger than the ones arrived at by Bayesian inference. This means that our uncertainties are conservative, ensuring that our algorithm is never more certain than it should be. Next, in Section 4.2, we show that uncertainties concentrate, i.e., they become small as we get more and more data. These two properties are sufficient to justify the use of our uncertainties in many applications." }, { "heading": "4.1 UNCERTAINTIES FROM RANDOM PRIORS ARE CONSERVATIVE", "text": "From the point of view of safety, it is preferable to overestimate the ground-truth uncertainty rather than to underestimate it. We now show that this property holds for uncertainties obtained from random priors. We first justify conservatism for the expected uncertainty $\tilde{\sigma}^2_\mu$ defined in equation 4 and then for the sampled uncertainty $\hat{\sigma}^2_\mu$ defined in equation 2.\nAmortized Conservatism We first consider a weak form of this conservatism, which we call amortized. It guarantees that $\tilde{\sigma}^2_\mu$ is never smaller than the average posterior uncertainty across labels sampled from the prior. Formally, amortized conservatism holds if for any test point $x_\star$ we have\n$\tilde{\sigma}^2_\mu(x_\star) \ge \mathbb{E}_{f(X)}\big[\sigma^2_{Xf}(x_\star)\big]. \quad (5)$\nHere $\sigma^2_{Xf}$ corresponds to the second moment of the posterior process $\{f_{Xf}(x)\}$. We will introduce a stronger version of conservatism, which does not have an expectation on the right-hand side, later in this section (eq. 8). For now, we concentrate on amortized conservatism. In Lemma 1 (proof in appendix), we show that it holds under very general conditions.\nLemma 1. For any function $h : \mathbb{R}^{N \times (K+1)} \to \mathbb{R}^M$, for any test point $x_\star \in \mathbb{R}^K$ and for any stochastic process $\{f(x)\}_{x \in \mathbb{R}^K}$ with all second moments finite and exchangeable outputs,\n$\tilde{\sigma}^2_\mu(x_\star) = \mathbb{E}_{f(X)}\big[\sigma^2_{Xf}(x_\star) + \tfrac{1}{M}\|\mu_{Xf}(x_\star) - h_{Xf}(x_\star)\|^2\big]. \quad (6)$\nRelation to a GP Lemma 1 holds for any prior process $\{f(x)\}$. However, the prior process used by Algorithm 1 is not completely arbitrary. The fact that prior samples are obtained by initializing neural networks with independently sampled weights gives us additional structure. In fact, it can be shown that randomly initialized neural networks become close to GPs as the width of the layers increases. While the original result due to Neal (1996) held for a simple network with one hidden layer, it has been extended to a wide class of popular architectures, including to CNNs and RNNs of arbitrary depth (Matthews et al., 2018; Lee et al., 2018; Novak et al., 2019; Williams, 1997; Le Roux & Bengio, 2007; Hazan & Jaakkola, 2015; Daniely et al., 2016; Garriga-Alonso et al., 2019). Recently, it has been shown to hold for a broad class of functions trainable by gradient descent (Yang, 2019).
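This Gaussian-process limit is easy to probe numerically. The sketch below (ours) draws many random one-hidden-layer ReLU networks with 1/√W output scaling and tracks the excess kurtosis of f(x) at a fixed input; it decays towards 0, the Gaussian value, as the width W grows. This illustrates, rather than proves, the convergence.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, 0.5])

def sample_f(width, n_samples=5000):
    """f(x) for n_samples random one-hidden-layer ReLU nets,
    with the output layer scaled by 1/sqrt(width)."""
    W = rng.normal(size=(n_samples, width, 2)).astype(np.float32)
    v = rng.normal(size=(n_samples, width)).astype(np.float32)
    h = np.maximum(W @ x, 0.0)                 # hidden ReLU activations at x
    return (v * h).sum(axis=1) / np.sqrt(width)

for width in [4, 16, 64, 256, 1024]:
    f = sample_f(width)
    z = (f - f.mean()) / f.std()
    print(width, round(float((z ** 4).mean() - 3.0), 3))  # excess kurtosis -> 0
```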
While the precise statement of these results involves technicalities which fall beyond the scope of this paper, we recall the key insight. For a family of neural networks $\{f_W(x)\}$, where the weights are sampled independently and $W$ is the width of the hidden layers, there exists a limiting kernel function $k_\infty$ such that\n$\lim_{W \to \infty}\big[\{f_W(x)\}\big] = \mathcal{GP}(0, k_\infty). \quad (7)$\nIn other words, as the size of the hidden layers increases, the stochastic process obtained by initializing networks randomly converges in distribution to a GP. In the context of our uncertainty estimates, this makes it reasonable for $W$ large enough to consider the prior to be a GP. We stress that the GP assumption has to hold only for the prior network, which is never trained. We do not make any assumptions about connections between the predictor training process and GPs.\nStrict Conservatism Denoting the posterior GP variance with $\sigma^2_X(x_\star)$, we define uncertainty estimates to be strictly conservative when\n$\tilde{\sigma}^2_\mu(x_\star) \ge \sigma^2_X(x_\star). \quad (8)$\nThis statement is stronger than the amortized conservatism in equation 5. Intuitively, equation 8 can be interpreted as saying that our uncertainty estimates are never too small. This confirms the intuition expressed by Burda et al. (2018) that random priors do not overfit. Below, in Proposition 1, we outline how to guarantee strict conservatism formally. It is proved in Appendix F.1.\nProposition 1 (Strict Conservatism in Expectation). Assume that $f$ is a GP. Then for any function $h : \mathbb{R}^{N \times K} \to \mathbb{R}^M$, we have\n$\tilde{\sigma}^2_\mu(x_\star) = \sigma^2_X(x_\star) + \underbrace{\mathbb{E}_{f(X)}\big[\tfrac{1}{M}\|\mu_{Xf}(x_\star) - h_{Xf}(x_\star)\|^2\big]}_{\ge 0}. \quad (9)$\nMoreover, equality holds if and only if $h_{Xf}(x_\star) = \mu_{Xf}(x_\star)$.\nConservatism with Finite Bootstraps Lemma 1 above shows conservatism for expected uncertainties, i.e. $\tilde{\sigma}^2_\mu$ introduced in equation 4. However, in practice we have to estimate this expectation using a finite number of bootstraps, and use the sampled uncertainties $\hat{\sigma}^2_\mu$ defined in equation 2. We now state a conservatism guarantee that holds even in the case of just one bootstrap (B = 1). The proof is deferred to Appendix F.1.\nCorollary 1 (Strict Conservatism for Finite Bootstraps). Assume that $f$ is a GP. Assume that the random variable $\hat{\sigma}^2_\mu(x_\star)$ has finite variance upper bounded by $v_{UB}$. Then with probability $1 - \delta$, for any function $h : \mathbb{R}^{N \times K} \to \mathbb{R}^M$, we have\n$\hat{\sigma}^2_\mu(x_\star) + \tfrac{1}{\sqrt{\delta}}\, v_{UB} \ge \tilde{\sigma}^2_\mu(x_\star) \ge \sigma^2_X(x_\star). \quad (10)$\nHowever, applying Corollary 1 requires the knowledge of $v_{UB}$. We now provide an upper bound.\nLemma 2. Assume that the GP $\{f(x)\}$ is zero mean with exchangeable outputs and the function $h_{Xf}$ takes values in $[-U, U]^M$. Assume that permuting the outputs of $f$ produces the same permutation in the outputs of $h_{Xf}$. With probability $1 - \delta$, we have\n$\mathrm{Var}_{f^1, \ldots, f^B}\big[\hat{\sigma}^2_\mu(x_\star)\big] \le v_{UB}, \quad (11)$\nwhere $v_{UB}$ is expressible in terms of observable quantities.\nThe proof and the explicit formula for $v_{UB}$ are deferred to Appendix F.1. In cases where conservatism is desired, but not absolutely essential, we can avoid the torturous calculation of Lemma 2 and replace $v_{UB}$ with the sample-based estimate $\hat{v}_\sigma(x_\star)$, defined in equation 3. In this case, the conservatism guarantee is only approximate. This is how we obtained equation 1, used by the algorithm in practice." }, { "heading": "4.2 UNCERTAINTIES FROM RANDOM PRIORS CONCENTRATE", "text": "While the conservatism property in Proposition 1 is appealing, it is not sufficient on its own for the uncertainty estimates to be useful. We also need concentration, i.e. a guarantee that the uncertainties $\hat{\sigma}^2$ become small with more data.
We can guarantee this formally by assuming that the class of neural networks being fitted is Lipschitz-continuous and bounded. Intuitively, by the assumption of Lipschitz continuity, the predictors $h_{Xf}$ cannot behave very differently on points from the training and test sets, since both come from the same data distribution. We can then show concentration by using standard Rademacher tools to obtain a bound on the expected uncertainty in terms of the squared error on the training set. This process is formalized in Proposition 2.\nProposition 2. If the training converges, i.e. the training loss $\frac{1}{MN}\sum_{i=1}^{N}\|f(x_i) - h_{Xf}(x_i)\|^2 = \sigma^2_A$ for arbitrarily large training sets, and the predictors $h_{Xf}$ are bounded and Lipschitz continuous with constant $L$, then under technical conditions the uncertainties concentrate, i.e. $\hat{\sigma}^2(x_\star) \to 0$ as $N \to \infty$ and $B \to \infty$ with probability 1.\nThe proof and the technical conditions are given in Appendix F. Proposition 2 assumes that the training error is zero for arbitrarily large training sets, which might at first seem unrealistic. We argue that this assumption is in fact reasonable. The architecture of our predictor networks (Figure 2, right diagram) is a superset of the prior architecture (Figure 2, left diagram), guaranteeing the existence of weight settings for the predictor that make the training loss zero. Recent results on deep learning optimization (Du et al., 2019; Allen-Zhu et al., 2019) have shown that stochastic gradient descent can in general be expected to find representable functions." }, { "heading": "5 PRACTICAL CONCLUSIONS FROM THE THEORY", "text": "We now re-visit the algorithm we defined in Section 3, with the aim of using the theory above to obtain practical improvements in the quality of the uncertainty estimates.\nArchitecture and Choosing the Number of Bootstraps Our conservatism guarantee in Proposition 1 holds for any architecture for the predictor $h_{Xf}$. In theory, the predictor could be completely arbitrary and does not even have to be a deep network. In particular, there is no formal requirement for the predictor architecture to be the same as the prior. On the other hand, to show concentration in Proposition 2, we had to ensure that the prior networks are representable by the predictor. In practice, we use the architecture shown in Figure 2, where the predictor mirrors the prior, but has additional layers, giving it more representational power. Moreover, the architecture requires choosing the number of bootstraps B. Our experiments in Section 7 show that even using B = 1, i.e. one bootstrap, produces uncertainty estimates of high quality in practice.\nModeling Epistemic and Aleatoric Uncertainty Proposition 1 and Proposition 2 hold for any Gaussian Process prior. By choosing the process appropriately, we can model both epistemic and aleatoric uncertainty. Denote by $\{n(x)\}$ a stochastic process obtained by randomly initializing neural networks and denote by $\{\epsilon(x)\sigma_A\}$ the noise term, modeling the aleatoric (observation) noise, where samples are obtained from $\epsilon(x) \sim \mathcal{N}(0, 1)$ at each $x$ independently (see Appendix D for more background on aleatoric noise). We can now choose the prior process as a sum $\{f(x)\} = \{n(x) + \epsilon(x)\sigma_A\}$ of the epistemic component $\{n(x)\}$ and the noise term.
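A short sketch (ours) of drawing one such prior sample — a fresh random network n plus independent Gaussian noise scaled by σ_A — which would replace the plain prior draw in Algorithm 1:

```python
import torch
import torch.nn as nn

def sample_prior_targets(X, sigma_A=0.1, M=512):
    """One prior function's values at the training inputs: a random
    (untrained) network n plus i.i.d. Gaussian noise eps * sigma_A."""
    n = nn.Sequential(nn.Linear(X.shape[1], 128), nn.ReLU(), nn.Linear(128, M))
    with torch.no_grad():
        eps = torch.randn(X.shape[0], M)
        return n(X) + eps * sigma_A

targets = sample_prior_targets(torch.randn(20, 2), sigma_A=0.1)
```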
The amount of aleatoric uncertainty can be adjusted by choosing $\sigma^2_A$.\nPrior Choice, Weight Copying and Conservatism One question that can be asked about our architecture (Figure 2) is whether it is possible for the predictor to exactly copy the prior weights, giving zero uncertainty everywhere. A useful edge case to consider here is when we are solving a one-dimensional regression problem, $\sigma^2_A = 0$ and both the priors and predictors are linear functions. In this case, after training on two points, the predictors will agree with the priors everywhere and the uncertainty estimates will be zero. However, this is still consistent with our conservatism guarantee. The reason for this is that once we assume such a linear prior, we are comparing to a GP with a linear kernel. But a GP with that kernel will also have zero uncertainty after seeing two samples.\nIn practice, this means that we have to choose the architecture of the prior networks to be expressive enough, which is no different from choosing a reasonable prior for Bayesian inference. Empirically, the tested network architecture did not show weight copying." }, { "heading": "6 PRIOR WORK", "text": "Randomized Prior Functions (RPFs) Our work was inspired by, and builds on, Randomised Prior Functions (Osband et al., 2019; 2018), but it is different in two important respects. First, the existing theoretical justification for RPFs only holds for Bayesian linear regression (Osband et al., 2018, equation 3) with non-zero noise1 added to the priors. In contrast, our results are much more general and hold for any deep network with or without added aleatoric noise. Second, we are targeting a different setting. While RPFs were designed as a way of sampling functions from the posterior, we provide estimates of posterior uncertainty at a given test point. Our algorithm is based on the work by Burda et al. (2018), who applied RPFs to exploration in MDPs, obtaining state-of-the-art results, but without justifying their uncertainty estimates formally. Our paper provides this missing justification, while also introducing a way of quantifying the error in estimating the uncertainty itself. Moreover, since Burda et al. (2018) focused on the application of RPFs to Reinforcement Learning, they only performed out-of-distribution evaluation on the relatively easy MNIST dataset (LeCun, 1998). In contrast, in Section 7 we evaluate the uncertainties on more complex vision tasks. The term prior networks has also been used (Malinin & Gales, 2018) to denote deep networks that output the parameters of a prior distribution, an approach fundamentally different from our work.\nDeep Ensembles The main alternative approach for obtaining uncertainties in deep learning is deep ensembles (Lakshminarayanan et al., 2017). Building on the bootstrap (Efron & Tibshirani, 1994), deep ensembles maintain several models and quantify epistemic uncertainty by measuring how their outputs vary. Crucially, deep ensembles use representations trained on regression labels, and tend to learn similar representations for different inputs with similar labels, which can lead to over-fitting the uncertainty estimates. A useful edge case to consider is when each of the models in the ensemble is convex in the weights. In this case, models in a deep ensemble will all converge to the same weights and produce zero uncertainty.
While deep learning models used in practice aren’t normally convex, we show empirically in Section 7 that deep ensembles can give overconfident uncertainty estimates in practical vision tasks, particularly on points that have the same label as points in the training set. Since our method avoids overconfidence, it can be understood as complementary to deep ensembles, to be used in situations where obtaining conservative estimates is more important than the representational benefit of using labels. In practice, deep ensembles also require using more bootstraps to achieve the same OOD performance. Moreover, they do not have theoretical support in the case when all the members of the ensemble are trained on the same data, which is how they are used in practice (Lakshminarayanan et al., 2017).\nDropout In cases where it is not economical to train more than one network, uncertainties can be obtained with dropout (Srivastava et al., 2014; Gal & Ghahramani, 2016). Monte-Carlo dropout can be viewed (Gal & Ghahramani, 2016) as a form of approximate Bayesian inference. However, to do so requires a rather unnatural approximating family from the perspective of approximate inference. Also, one has then either to take a limit or generalize variational inference to a quasi-KL (Hron et al., 2018) divergence. In addition, dropout can be interpreted in terms of MAP inference (Nalisnick et al., 2019). Another alternative view of MC dropout is as an ensemble method in which the ensemble members have shared parameters (which means they are trained together) and where the ensembling is applied at test time too. This latter view is arguably as natural as the Bayesian interpretation. For this reason we discuss MC dropout separately from BNNs. Since dropout implicitly approximates non-Gaussian weight distributions with Gaussians, it exhibits spurious patterns in the obtained uncertainties, which can lead to arbitrarily overconfident estimates (Foong et al., 2019). In contrast, due to the conservatism property, random priors avoid such overconfidence.\nBayesian Neural Networks (BNNs) Bayesian Neural Networks (Blundell et al., 2015; Kingma & Welling, 2014; Rezende et al., 2014; Welling & Teh, 2011; Brosse et al., 2018) explicitly model the distribution over weights of a neural network. While BNNs provide a link between deep learning and Bayesian inference, they are very slow to train. Even recent tuned implementations of BNNs (Osawa et al., 2019) are several times slower than supervised learning. This happens despite using a battery of technical optimizations, including distributed training and batch normalization. Moreover, modern convolutional BNNs still carry a significant accuracy penalty when deployed with realistic settings of prior variance.2\n1The existing justification of RPFs (Osband et al., 2019, Section 5.3.1) involves a division by the noise variance." }, { "heading": "7 EXPERIMENTS", "text": "Encouraged by the huge empirical success of random priors in Reinforcement Learning (Burda et al., 2018), we wanted to provide an evaluation in a more typical supervised learning setting. We tested the uncertainties in two ways. First, we investigated calibration, i.e. whether we can expect a higher accuracy for more confident estimates. Next, we checked whether the uncertainties can be used for out-of-distribution detection. We compared to two competing approaches for uncertainty estimation: deep ensembles (Lakshminarayanan et al., 2017) and spatial concrete dropout (Gal et al., 2017).
The same ResNet architecture served as a basis for all methods. Details of the implementation are provided in Appendix A.\nOut-Of-Distribution Detection We evaluated the uncertainty estimates on out-of-distribution detection. To quantify the results, we evaluated the area under the ROC curve (AUROC) for the task of deciding whether a given image comes from the same distribution or not. All methods were trained on four classes from the CIFAR-10 (Krizhevsky et al., 2009) dataset (training details are provided in Appendix A). We then tested the resulting networks on images from withheld classes and on the SVHN dataset (Netzer et al., 2011), which contains completely different images. Results are shown in Table 1. Considering the statistical errors (see Appendix B), random priors performed slightly better than deep ensembles with adversarial training for B = 1 and about the same for B = 10. For dropout, B refers to the number of dropout samples. Dropout performed worse, but was cheaper to train. In order to gain a finer-grained insight into the quality of the uncertainties, we also show uncertainty histograms in Figure 3. The figure shows the distribution of uncertainty estimates for seen data (top row) vs. unseen data (bottom row) for bootstrap sizes B = {1, 5, 10}. The main conclusion is that uncertainties obtained from random priors are already well-separated with B = 1, while deep ensembles need more bootstraps to achieve the full separation between test and train examples. We provide additional experimental results, showing OOD accuracy and an evaluation on CIFAR-100, in Appendix B.\nCalibration Good uncertainty estimates have the property that accuracy increases as we become more certain, a property known as calibration. We measured it by evaluating the average accuracy on the subset of images with uncertainty smaller than a given value. We trained on four classes from the CIFAR-10 (Krizhevsky et al., 2009) dataset. We then tested the resulting networks on the whole dataset, which included both the seen and unseen classes. Results are shown in Figure 4. Ideally, in a calibrated method, these curves should be increasing, indicating that a method always becomes more accurate as it becomes more confident. In coarse terms, Figure 4 confirms that all methods except a degenerate deep ensemble with only one bootstrap are roughly monotonic. However, uncertainty estimates from random priors are more stable, showing monotonicity on a finer scale as well as on a large scale. Interestingly, calibration improved only slightly when increasing the number of bootstraps B.
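Appendix A notes that the AUROC numbers above are computed with scikit-learn, using the predicted uncertainties as scores. Concretely, the evaluation step amounts to something like the following sketch (the placeholder scores and variable names are ours):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# u_in: uncertainties on held-out *in-distribution* images,
# u_out: uncertainties on *out-of-distribution* images (unseen classes / SVHN).
u_in = np.random.rand(1000) * 0.5          # placeholder scores
u_out = np.random.rand(1000) * 0.5 + 0.3   # OOD should score higher

scores = np.concatenate([u_in, u_out])
labels = np.concatenate([np.zeros_like(u_in), np.ones_like(u_out)])  # 1 = OOD
print("AUROC:", roc_auc_score(labels, scores))
```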
The histograms in Figure 6 also demonstrate good separation between seen and unseen data. In the out-of-distribution benchmarks reported in Table 2, the random prior method comfortably outperformed the baselines. While this training regime is not practical for real-life tasks, it demonstrates the potential performance of random priors when trained to full convergence.\nSensitivity to Initialization Scale We performed an ablation to test the robustness of our algorithm to the scaling of the weight initialization in the prior. Results are shown in Figure 7, where we plot the relationship between the initialization scale (taken from the set {0.01, 0.1, 1.0, 2.0, 5.0, 10.0}) and AUROC performance on the CIFAR-10 task. OOD performance is relatively robust with respect to the weight initialization within one order of magnitude.\nSummary of experiments We have shown that uncertainties obtained from random priors achieve competitive performance with fewer bootstraps in a regime where the network architecture is typical for standard supervised learning workloads. Random priors showed superior performance in a regime where the predictors can be trained to near-zero loss." }, { "heading": "8 CONCLUSIONS", "text": "We provided a theoretical justification for the use of random priors for obtaining uncertainty estimates in the context of deep learning. We have shown that the obtained uncertainties are conservative and that they concentrate for any neural network architecture. We performed an extensive empirical comparison, showing that random priors perform similarly to deep ensembles in a typical supervised training setting, while outperforming them in a regime where we are able to accomplish near-zero training loss for the predictors." }, { "heading": "APPENDIX A REPRODUCIBILITY AND DETAILS OF EXPERIMENTAL SETUP", "text": "" }, { "heading": "APPENDIX A.1 SYNTHETIC DATA", "text": "For the 1D regression experiment on synthetic data (Fig. 1), we used feed-forward neural networks with 2 layers of 128 units each and a 1-dimensional output layer. We used an ensemble size of 5. The network was trained on 20 points sampled from the negative domain of a sigmoid function and tested on 20 points sampled from the positive domain." }, { "heading": "APPENDIX A.2 EXPERIMENTAL SETUP", "text": "Model architecture For the CIFAR-10 experiments, we adapted the setup from the cifar10-fast model.3 For the network predicting the mean, we used the exact same architecture as in this model. For the prior networks in our uncertainty estimators, we used the same architecture as for the mean network, but with a final linear layer instead of the softmax layer. We used the squared error on that last layer to get the uncertainties. For the predictor networks in the uncertainty estimators, we added two additional layers at the end to make sure the prior functions are learnable (see Fig. 2).\nWe followed Burda et al. (2018) in choosing the output size to be M = 512 and using the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 0.0001. We optimized the initialization scale of our networks as a hyperparameter on the grid {0.01, 0.1, 1.0, 2.0, 10.0} and chose 2.0. We chose a scaling factor of β = 1.0 for the uncertainty bonus of the random priors and fixed it for all experiments.\n3https://github.com/davidcpage/cifar10-fast\nData For the CIFAR-10 experiment, we trained on the classes {bird, dog, frog, horse} and excluded {cat, deer, airplane, automobile, ship, truck}. 
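A seen/unseen class split of this kind can be reproduced along the following lines. This is a hypothetical sketch using torchvision's CIFAR-10 loader; the paper does not state which data-loading library was used.

```python
from torchvision import datasets

# Classes kept for training; the remaining six classes are the "unseen" set.
seen = {"bird", "dog", "frog", "horse"}
train = datasets.CIFAR10(root="./data", train=True, download=True)
keep = [i for i, y in enumerate(train.targets) if train.classes[y] in seen]
print(len(keep), "training images from the four seen classes")
```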
For the small CIFAR-10 ablation experiment, we trained on 75 images sampled from the classes {ship, truck} and excluded the remaining classes.\nTraining Error The training error was 0.57 ± 0.20 on the CIFAR experiment and 0.03 ± 0.02 on the sub-sampled ablation (the symbol ± denotes 90% confidence intervals).\nOut-of-distribution classification For computing the areas under the receiver operating characteristic curves (AUROC) in the OOD classification tables, we used the roc_auc_score function from the Python package sklearn (Pedregosa et al., 2011), using the predicted uncertainties as predicted label scores and binary labels for whether or not the samples were from the training set." }, { "heading": "APPENDIX B ADDITIONAL RESULTS", "text": "" }, { "heading": "APPENDIX B.1 CONFIDENCE INTERVALS FOR AUROCS", "text": "We provide confidence intervals for the AUROC measurements in Table 3." }, { "heading": "APPENDIX B.2 OOD CLASSIFICATION ACCURACIES", "text": "In addition to AUROC results, we also provide accuracy figures on the same OOD tasks. The thresholding for classification was obtained by cross-validation. The results are shown in Tables 4 and 5." }, { "heading": "APPENDIX B.3 SUPERVISED IN-DISTRIBUTION CLASSIFICATION ACCURACIES", "text": "" }, { "heading": "APPENDIX B.4 CIFAR100 EXPERIMENT", "text": "As additional empirical support for our method, we ran experiments on another data set, namely CIFAR-100 (Krizhevsky et al., 2009). Again, we include 5 classes in the training set and exclude the remaining classes. The results are reported in the following (Figs. 8, 9; Tabs. 7, 8). They qualitatively and quantitatively support the same conclusions as our previous experiments." }, { "heading": "APPENDIX C BACKGROUND ON BAYES RISK", "text": "For completeness, we recall the definition of Bayes Risk. We are often interested in minimizing the Mean Squared Error $\mathbb{E}_f[(f(x_\star) - w)^2]$, where $x_\star$ is a given test point and $w$ is a variable we are allowed to adjust. A known result of Bayesian decision theory (Robert, 2007; Murphy, 2012) is that the minimizer of the MSE is given by the expected value of $f$, i.e.\n$\operatorname*{argmin}_w\, \mathbb{E}_f\left[(f(x_\star) - w)^2\right] = \mathbb{E}_f[f(x_\star)]. \quad (12)$\nEquation 12 holds for any stochastic process $f$, including when $f$ is a posterior process obtained by conditioning on some dataset. A consequence of equation 12 is that it is impossible to obtain an MSE lower than the one obtained by computing the posterior mean of $f$." }, { "heading": "APPENDIX D GAUSSIAN PROCESSES", "text": "A stochastic process is Gaussian (Williams & Rasmussen, 2006) if all its finite-dimensional distributions are Gaussian. The main advantage of GPs is that the posterior process can be expressed in a tractable way. GPs are often used for regression, where we are learning an unknown function4 $\varphi : \mathbb{R}^K \to \mathbb{R}$ from noisy observations. Since a Gaussian distribution is completely identified by its first two moments, a GP can be defined by a mean function and a covariance function. Formally, the notation $\mathcal{GP}(\mu, k)$ refers to a GP with mean function $\mu : \mathbb{R}^K \to \mathbb{R}$ and a positive-definite kernel function $k : \mathbb{R}^K \times \mathbb{R}^K \to \mathbb{R}$. GPs can be used to model two kinds of uncertainty: epistemic uncertainty, which reflects a lack of knowledge about unobserved values of $\varphi$, and aleatoric uncertainty, which reflects measurement noise. 
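For reference, the standard GP posterior used in the regression setting that follows (Equations 13 and 14 below) can be sketched in NumPy. The RBF kernel and all names here are illustrative assumptions; the formulas themselves are the standard ones.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    # Squared-exponential kernel between row-vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_posterior(X, y, x_star, k, sigma_A2=0.1):
    """Posterior mean and variance at a single test point x_star."""
    K = k(X, X)                                  # N x N kernel matrix
    k_star = k(X, x_star[None, :])[:, 0]         # train-test correlations
    M = K + sigma_A2 * np.eye(len(X))
    mean = k_star @ np.linalg.solve(M, y)        # Eq. (13)
    var = (k(x_star[None, :], x_star[None, :])[0, 0]
           - k_star @ np.linalg.solve(M, k_star) + sigma_A2)  # Eq. (14)
    return mean, var

X = np.random.randn(20, 2)
y = np.sin(X).sum(1)
print(gp_posterior(X, y, np.zeros(2), rbf))
```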
When performing regression, we start with a zero-mean prior $\mathcal{GP}(0, k)$ and then observe $N$ training points $X = \{x_i\}_{i=1,\dots,N}$ and labels $y = \{y_i\}_{i=1,\dots,N}$, where $y_i = \varphi(x_i) + \epsilon_i$. Here, the i.i.d. random variables $\epsilon_i \sim \mathcal{N}(0, \sigma_A^2)$ model the aleatoric noise. We obtain the posterior process $\mathcal{GP}(\mu_{Xy}, k_X)$. For GPs, the mean and covariance of the posterior GP evaluated at $x_\star$ can be expressed as\n$\mu_{Xy}(x_\star) = k_\star^\top (K + \sigma_A^2 I)^{-1} y \quad \text{and} \quad (13)$\n$\sigma_X^2(x_\star) \triangleq k_X(x_\star, x_\star) + \sigma_A^2 = k_{\star\star} - k_\star^\top (K + \sigma_A^2 I)^{-1} k_\star + \sigma_A^2. \quad (14)$\nIn particular, the posterior covariance does not depend on $y$. In the formulas above, we use the kernel matrix $K \in \mathbb{R}^{N \times N}$ defined as $K_{ij} = k(x_i, x_j)$, where $x_i$ and $x_j$ are in the training set. We also use the notation $k_\star \in \mathbb{R}^N$ for the vector of train-test correlations $\{k_\star\}_i = k(x_i, x_\star)$, where $x_i$ is in the training set, and $k_{\star\star} = k(x_\star, x_\star)$ is similarly defined. The shorthand $\sigma_X^2(x_\star)$ introduced in equation 14 denotes the posterior variance at a single point.\n4We depart from standard notation, which uses $f$, because we will be using $f$ to denote a sample from the prior process." }, { "heading": "APPENDIX E LIST OF SYMBOLS DENOTING VARIANCE", "text": "Below, we give a list of symbols used for the variance of various random variables.\n$\sigma^2_{Xf}$: posterior variance of the stochastic process\n$\sigma^2_X$: posterior variance of the Gaussian process\n$\sigma^2_0$: prior variance of the stochastic process\n$\hat\sigma^2_0$: sample-based estimate of the prior GP variance\n$\hat\sigma^2$: combined uncertainty estimate (see equation 1)\n$\hat\sigma^2_\mu$: sample-based mean part of the uncertainty estimate (see equation 2)\n$\tilde\sigma^2_\mu$: $\mathbb{E}_f[\hat\sigma^2_\mu]$\n$\hat v_\sigma$: sample-based variance part of the uncertainty estimate (see equation 3)\n$v_{UB}$: upper bound on the variance of $\hat\sigma^2_\mu$\n$\sigma^2_A$: aleatoric variance (observation noise)" }, { "heading": "APPENDIX F PROOFS", "text": "We now give formal proofs for the results in the paper." }, { "heading": "APPENDIX F.1 PROOFS RELATING TO CONSERVATISM", "text": "Lemma 1. For any function $h : \mathbb{R}^{N \times (K+1)} \to \mathbb{R}^M$, for any test point $x_\star \in \mathbb{R}^K$ and for any stochastic process $\{f(x)\}_{x \in \mathbb{R}^K}$ with all second moments finite and exchangeable outputs,\n$\tilde\sigma^2_\mu(x_\star) = \mathbb{E}_{f(X)}\left[\sigma^2_{Xf}(x_\star) + \tfrac{1}{M}\|\mu_{Xf}(x_\star) - h_{Xf}(x_\star)\|^2\right]. \quad (6)$\nProof. We prove the statement by rewriting the expression on the left:\n$\tilde\sigma^2_\mu(x_\star) = \tfrac{1}{M}\,\mathbb{E}_{f(X), f(x_\star)}\left[\|f(x_\star) - h_{Xf}(x_\star)\|^2\right] \quad (15)$\n$= \tfrac{1}{M}\,\mathbb{E}_{f(X)}\left[\mathbb{E}_{f(x_\star)\mid f(X)}\left[\|f(x_\star) - h_{Xf}(x_\star)\|^2\right]\right] \quad (16)$\n$= \tfrac{1}{M}\,\mathbb{E}_{f(X)}\left[\mathbb{E}_{f(x_\star)\mid f(X)}\left[\sum_{m=1}^M (f^m(x_\star) - h^m_{Xf}(x_\star))^2\right]\right] \quad (17)$\n$= \tfrac{1}{M}\,\mathbb{E}_{f(X)}\left[\mathbb{E}_{f(x_\star)\mid f(X)}\left[\sum_{m=1}^M (f^m(x_\star))^2 - 2 f^m(x_\star)\, h^m_{Xf}(x_\star) + (h^m_{Xf}(x_\star))^2\right]\right] \quad (18)$\n$= \tfrac{1}{M}\,\mathbb{E}_{f(X)}\left[\sum_{m=1}^M \sigma^2_{Xf^m}(x_\star) + (\mu_{Xf^m}(x_\star))^2 - 2\,\mu_{Xf^m}(x_\star)\, h^m_{Xf}(x_\star) + (h^m_{Xf}(x_\star))^2\right] \quad (19)$\n$= \tfrac{1}{M}\,\mathbb{E}_{f(X)}\left[\sum_{m=1}^M \sigma^2_{Xf^m}(x_\star) + (\mu_{Xf^m}(x_\star) - h^m_{Xf}(x_\star))^2\right] \quad (20)$\n$= \mathbb{E}_{f(X)}\left[\sigma^2_{Xf}(x_\star) + \tfrac{1}{M}\|\mu_{Xf}(x_\star) - h_{Xf}(x_\star)\|^2\right]. \quad (21)$\nHere, the equality in (16) holds by the definition of conditional expectation. The equality in (19) holds by the definition of the posterior mean, and the equality in (21) follows by the assumption that the process has exchangeable outputs. While this argument follows a similar pattern to a standard result about Bayes Risk (see Appendix C), it is not identical because the function $h_{Xf}$ depends on $f$.\nProposition 1 (Strict Conservatism in Expectation). Assume that $f$ is a GP. Then for any function $h : \mathbb{R}^{N \times K} \to \mathbb{R}^M$, we have\n$\tilde\sigma^2_\mu(x_\star) = \sigma^2_X(x_\star) + \underbrace{\mathbb{E}_{f(X)}\left[\tfrac{1}{M}\|\mu_{Xf}(x_\star) - h_{Xf}(x_\star)\|^2\right]}_{\ge 0}. \quad (9)$\nMoreover, equality holds if and only if $h_{Xf}(x_\star) = \mu_{Xf}(x_\star)$.\nProof. We instantiate Lemma 1 by setting $f$ to be a GP. By equation 14, the posterior covariance of a GP does not depend on the target values, i.e. $\sigma^2_{Xf}(x_\star) = \sigma^2_X(x_\star)$. The first part of the result can be shown by pulling $\sigma^2_X(x_\star)$ 
out of the expectation. Moreover, since $\|\cdot\|$ is a norm and hence positive semi-definite, equality holds if and only if $h_{Xf}(x_\star) = \mu_{Xf}(x_\star)$.\nLemma 3. Assume that the random variable $\hat\sigma^2_\mu(x_\star)$ has finite variance upper bounded by $v_{UB}$. With probability $1 - \delta$, we have $\hat\sigma^2_\mu(x_\star) + \sqrt{v_{UB}/\delta} \ge \tilde\sigma^2_\mu(x_\star)$.\nProof. The proof is standard, but we state it in our notation for completeness. Applying Chebyshev's inequality to the random variable $\hat\sigma^2_\mu(x_\star)$, we have that $\mathrm{Prob}\left(|\tilde\sigma^2_\mu(x_\star) - \hat\sigma^2_\mu(x_\star)| \ge \sqrt{v_{UB}/\delta}\right) \le \delta$, implying the statement.\nCorollary 1 (Strict Conservatism for Finite Bootstraps). Assume that $f$ is a GP. Assume that the random variable $\hat\sigma^2_\mu(x_\star)$ has finite variance upper bounded by $v_{UB}$. Then with probability $1 - \delta$, for any function $h : \mathbb{R}^{N \times K} \to \mathbb{R}^M$, we have\n$\hat\sigma^2_\mu(x_\star) + \sqrt{v_{UB}/\delta} \ \ge\ \tilde\sigma^2_\mu(x_\star) \ \ge\ \sigma^2_X(x_\star). \quad (10)$\nProof. Combine Lemma 3 and Proposition 1.\nLemma 2. Assume that the GP $\{f(x)\}$ is zero-mean with exchangeable outputs and that the function $h_{Xf}$ takes values in $[-U, U]^M$. Assume that permuting the outputs of $f$ produces the same permutation in the outputs of $h_{Xf}$. With probability $1 - \delta$, we have\n$\mathrm{Var}_{f_1,\dots,f_B}\left[\hat\sigma^2_\mu(x_\star)\right] \le v_{UB}, \quad (11)$\nwhere $v_{UB}$ is expressible in terms of observable quantities.\nProof. We seek to decompose the variance of $\hat\sigma^2_\mu(x_\star)$ into the part that comes from the prior and the part that comes from the fitted function $h_{Xf^m}$:\n$\mathrm{Var}_{f_1,\dots,f_B}\left[\hat\sigma^2_\mu(x_\star)\right] \quad (22)$\n$= \mathrm{Var}_{f_1,\dots,f_B}\left[\sum_{i=1}^B \tfrac{1}{MB}\|f_i(x_\star) - h_{Xf_i}(x_\star)\|^2\right] \quad (23)$\n$= \tfrac{1}{B}\,\mathrm{Var}_f\left[\tfrac{1}{M}\|f(x_\star) - h_{Xf}(x_\star)\|^2\right] \quad (24)$\n$= \tfrac{1}{B}\tfrac{1}{M^2}\,\mathrm{Var}_f\left[\sum_{m=1}^M (f^m(x_\star) - h^m_{Xf}(x_\star))^2\right] \quad (25)$\n$= \tfrac{1}{B}\tfrac{1}{M^2}\sum_{m=1}^M\sum_{l=1}^M \mathrm{Cov}_f\left[(f^m(x_\star) - h^m_{Xf}(x_\star))^2,\ (f^l(x_\star) - h^l_{Xf}(x_\star))^2\right] \quad (26)$\n$\le \tfrac{1}{B}\tfrac{1}{M^2}\,M^2\,\mathrm{Var}_f\left[(f^m(x_\star) - h^m_{Xf}(x_\star))^2\right] \quad (27)$\n$= \tfrac{1}{B}\,\mathrm{Var}_f\left[(f^m(x_\star) - h^m_{Xf}(x_\star))^2\right] \quad (28)$\n$\le \tfrac{1}{B}\,\mathbb{E}_f\left[(f^m(x_\star) - h^m_{Xf}(x_\star))^4\right] \quad (29)$\n$= \tfrac{1}{B}\big(\mathbb{E}_f[(f^m(x_\star))^4] - 4\,\mathbb{E}_f[(f^m(x_\star))^3\, h^m_{Xf}(x_\star)] + 6\,\mathbb{E}_f[(f^m(x_\star))^2 (h^m_{Xf}(x_\star))^2] - 4\,\mathbb{E}_f[f^m(x_\star)(h^m_{Xf}(x_\star))^3] + \mathbb{E}_f[(h^m_{Xf}(x_\star))^4]\big) \quad (30)$\nHere, line (27) holds by the exchangeability of outputs and the Cauchy-Schwarz inequality.\nSince $h^m_{Xf}(x_\star)$ has support in $[-U, U]$, we have\n$\mathbb{E}_f[(h^m_{Xf}(x_\star))^2] \le U^2, \quad \mathbb{E}_f[(h^m_{Xf}(x_\star))^4] \le U^4, \quad \mathbb{E}_f[(h^m_{Xf}(x_\star))^6] \le U^6. \quad (31)$\nMoreover, since $f(x_\star)$ is Gaussian and zero-mean, we can write out the moments explicitly:\n$\mathbb{E}_f[(f^m(x_\star))^4] = 3\,(\mathbb{E}_f[(f^m(x_\star))^2])^2, \quad \mathbb{E}_f[(f^m(x_\star))^6] = 15\,(\mathbb{E}_f[(f^m(x_\star))^2])^3. \quad (32)$\nSince $f(x_\star)$ is Gaussian, we can use a sample-based estimate of the prior variance and obtain a probabilistic confidence interval. In particular, we know that $\mathbb{E}_f[(f^m(x_\star))^2] \le \hat\sigma^2_0(x_\star)\,\frac{B_0 - 1}{\chi^2_I(\delta)}$ with probability $1 - \delta$, where $\chi^2_I$ denotes the inverse CDF of the Chi-Squared distribution with $B_0 - 1$ degrees of freedom. We denote this upper bound with $w_{UB} = \hat\sigma^2_0(x_\star)\,\frac{B_0 - 1}{\chi^2_I(\delta)}$.\nWe proceed by bounding the individual terms in equation 30 separately:\n$\mathbb{E}_f[(f^m(x_\star))^4] = 3\,(\mathbb{E}_f[(f^m(x_\star))^2])^2$\n$-\mathbb{E}_f[(f^m(x_\star))^3\, h^m_{Xf}(x_\star)] \le \sqrt{\mathbb{E}_f[(f^m(x_\star))^6]\ \mathbb{E}_f[(h^m_{Xf}(x_\star))^2]}$\n$\mathbb{E}_f[(f^m(x_\star))^2 (h^m_{Xf}(x_\star))^2] \le \sqrt{\mathbb{E}_f[(f^m(x_\star))^4]\ \mathbb{E}_f[(h^m_{Xf}(x_\star))^4]}$\n$-\mathbb{E}_f[f^m(x_\star)(h^m_{Xf}(x_\star))^3] \le \sqrt{\mathbb{E}_f[(f^m(x_\star))^2]\ \mathbb{E}_f[(h^m_{Xf}(x_\star))^6]}$\nCombining the above, equation 30 and the bounds on the individual moments in equations 31 and 32, we obtain\n$\mathrm{Var}_{f_1,\dots,f_B}\left[\hat\sigma^2_\mu(x_\star)\right] \le \underbrace{\tfrac{1}{B}\left(3 w_{UB}^2 + 4\sqrt{15\, w_{UB}^3 U^2} + 6\sqrt{3\, w_{UB}^2 U^4} + 4\sqrt{w_{UB}\, U^6} + U^4\right)}_{v_{UB}}. \quad (33)$\nHere, $w_{UB} = \hat\sigma^2_0(x_\star)\,\frac{B_0 - 1}{\chi^2_I(\delta)}$, where $\hat\sigma^2_0(x_\star)$ is a sample-based estimate of the prior variance obtained with $B_0$ samples and $\chi^2_I$ denotes the inverse CDF of the Chi-Squared distribution with $B_0 - 1$ degrees of freedom." }, { "heading": "APPENDIX F.2 PROOFS RELATING TO CONCENTRATION", "text": "We now proceed to the proofs showing concentration. 
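As a quick numerical illustration of Corollary 1 above, the conservative upper-confidence estimate can be assembled from bootstrap estimates as follows. This sketch substitutes the empirical variance of the bootstraps for the analytic bound $v_{UB}$, which is an assumption made for illustration and not the construction used in Lemma 2.

```python
import numpy as np

def conservative_uncertainty(errors, delta=0.05):
    """Chebyshev-style upper bound in the spirit of Lemma 3 / Corollary 1.
    errors: shape (B,), holding (1/M)||f_i(x) - h_i(x)||^2 per bootstrap."""
    sigma2_mu = errors.mean()                  # sample-based estimate
    v_hat = errors.var(ddof=1)                 # stand-in for the bound v_UB
    return sigma2_mu + np.sqrt(v_hat / delta)  # holds w.p. >= 1 - delta if v_hat >= Var

print(conservative_uncertainty(np.random.default_rng(0).gamma(2.0, size=10)))
```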
We begin by formally defining a class of predictor networks.\nDefinition 1 (Class $\mathcal{H}_U$ of Lipschitz networks). Consider functions $h : \mathbb{R}^K \to \mathbb{R}^M$. Let $j, j' = 1, \dots, M$ index the outputs of the function. We define $\mathcal{H}_U$ so that each $h \in \mathcal{H}_U$ has the following properties for each $j, j'$. (P1) $h^j$ is Lipschitz continuous with constant $L$, i.e. $\|h^j(x) - h^j(x')\|_2 \le L\|x - x'\|_2$ for all $x, x'$ with $\|x\|_\infty \le 1$ and $\|x'\|_\infty \le 1$; (P2) outputs are exchangeable, i.e. $\{h^j : h \in \mathcal{H}_U\} = \{h^{j'} : h \in \mathcal{H}_U\}$; (P3) the class is symmetric around zero, i.e. $h^j \in \{h^j : h \in \mathcal{H}_U\}$ implies $-h^j \in \{h^j : h \in \mathcal{H}_U\}$; (P4) $h^j$ is bounded, i.e. $\max_{\|x\|_\infty \le 1} |h^j(x)| \le U$.\nWhile the conditions in Definition 1 look complicated, they are in fact easy to check for predictor networks that follow the architecture in Figure 2. In particular, Lipschitz continuity (P1) has to hold in practice because its absence would indicate extreme sensitivity to input perturbations. Output exchangeability (P2) holds since reordering the outputs does not change our architecture. Symmetry around zero (P3) holds by flipping the sign in the last network layer. Boundedness (P4) is easy to ensure by clipping outputs. In the following lemma, we obtain a bound on the expected uncertainty.\nLemma 4. Consider a target function $f : \mathbb{R}^K \to \mathbb{R}^M$ with outputs indexed by $j = 1, \dots, M$ and with the domain restricted to $\|x\|_\infty \le 1$. Introduce a constant $U$ such that $\max_{\|x\|_\infty \le 1} |f^j(x)| \le U$. Denote the data distribution with support on $\{x : \|x\|_\infty \le 1\}$ as $\mathcal{D}$. Moreover, assume $K \ge 3$. For $h_{Xf} \in \mathcal{H}_U$, with probability $1 - \delta$ we have\n$\mathbb{E}_{x_\star \sim \mathcal{D}}\left[\tfrac{1}{M}\|f(x_\star) - h_{Xf}(x_\star)\|^2\right] \le \tfrac{1}{MN}\sum_{i=1}^N \|f(x_i) - h_{Xf}(x_i)\|^2 + L\,U\,\mathcal{O}\!\left(\tfrac{1}{\sqrt[K]{N}} + \sqrt{\tfrac{\log(1/\delta)}{N}}\right). \quad (34)$\nProof. The proof uses standard Rademacher tools. To avoid confusion across several conventions, we explicitly define the Rademacher complexity of a function class $\mathcal{G}$ as\n$\hat{R}_N(\mathcal{G}) \triangleq \mathbb{E}_{u_i}\left[\sup_{g \in \mathcal{G}} \tfrac{1}{N}\sum_{i=1}^N u_i\, g(x_i)\right] = \mathbb{E}_{u_i}\left[\sup_{g \in \mathcal{G}} \tfrac{1}{N}\left|\sum_{i=1}^N u_i\, g(x_i)\right|\right]. \quad (35)$\nHere, the random variables $u_i$ are sampled i.i.d. from a discrete distribution with $\mathrm{Prob}(u_i = -1) = \mathrm{Prob}(u_i = 1) = \tfrac{1}{2}$, and the second equality follows by using property (P3). We start by applying the generic Rademacher bound (Mohri et al., 2018) to the function class $\mathcal{M} = \{x_1, \dots, x_N, t_1, \dots, t_N \to \tfrac{1}{U^2}\tfrac{1}{M}\|t_i - h(x_i)\|^2,\ h \in \mathcal{H}_U\}$, which contains the possible errors of the predictor:\n$\mathbb{E}_{x_\star \sim \mathcal{D}}\left[\tfrac{1}{U^2}\tfrac{1}{M}\|f(x_\star) - h_{Xf}(x_\star)\|^2\right] \le \tfrac{1}{MN}\tfrac{1}{U^2}\sum_{i=1}^N \|f(x_i) - h_{Xf}(x_i)\|^2 + \hat{R}_N(\mathcal{M}) + \mathcal{O}\!\left(\sqrt{\tfrac{\log(1/\delta)}{N}}\right). \quad (36)$\nWe now introduce the function class $\mathcal{M}' = \{x_1, \dots, x_N, t_1, \dots, t_N \to \tfrac{1}{U^2}(t_i^j - h^j(x_i))^2,\ h \in \mathcal{H}_U\}$, which models the per-output squared error. Because of property (P2), $\mathcal{M}'$ does not depend on the output index $j$. By pulling the sum outside the supremum in equation 35, we get\n$\hat{R}_N(\mathcal{M}) \le \hat{R}_N(\mathcal{M}'). \quad (37)$\nBy Talagrand's lemma (Mohri et al., 2018; Duchi, 2009), we also have\n$\hat{R}_N(\mathcal{M}') \le 4\,\hat{R}_N(\mathcal{H}_1). \quad (38)$\nHere, $\mathcal{H}_1 = \{\tfrac{1}{U} h^j : h \in \mathcal{H}_U\}$. By property (P1), functions in $\mathcal{H}_1$ are Lipschitz continuous with constant $L/U$. Instantiating a known bound for Lipschitz-continuous functions (Luxburg & Bousquet, 2004, Theorem 18 and Example 4), and using the assumption $K \ge 3$, we get $\hat{R}_N(\mathcal{H}_1) \le \tfrac{L}{U}\,\mathcal{O}\!\left(\tfrac{1}{\sqrt[K]{N}}\right)$. The lemma follows by combining this with equations 37 and 38, plugging into equation 36, and re-scaling by $U^2$.\nLemma 4 allowed us to relate the error on the training set to the expected error on the test set. It also shows that the two will be closer for small values of the Lipschitz constant $L$. We now use this lemma to show our main concentration result (Proposition 2).\nProposition 2. If the training converges, i.e. 
the training loss $\tfrac{1}{MN}\sum_{i=1}^N \|f(x_i) - h_{Xf}(x_i)\|^2 = \sigma_A^2$ for arbitrarily large training sets, and the predictors $h_{Xf}$ are bounded and Lipschitz continuous with constant $L$, then under technical conditions the uncertainties concentrate, i.e. $\hat\sigma^2(x_\star) \to 0$ as $N \to \infty$ and $B \to \infty$ with probability 1.\nProof. We are assuming the technical conditions of Lemma 4. Instantiating Lemma 4, setting the training loss to $\sigma_A^2$ in the RHS of equation 34 and letting $N \to \infty$, we obtain the following with probability 1:\n$\lim_{N \to \infty} \mathbb{E}_{x_\star \sim \mathcal{D}}[\hat\sigma^2_\mu(x_\star)] = \sigma_A^2. \quad (39)$\nThis implies\n$\lim_{N \to \infty} \mathbb{E}_{x_\star \sim \mathcal{D}}[\max(0,\ \hat\sigma^2_\mu(x_\star) - \sigma_A^2)] = 0. \quad (40)$\nFrom the continuity of $f$ and $h_{Xf}$, we have that $\hat\sigma^2_\mu$ is continuous in $x_\star$. Together with the property that the expression under the expectation is non-negative, this gives, for every $x_\star$,\n$\lim_{N \to \infty} \max(0,\ \hat\sigma^2_\mu(x_\star) - \sigma_A^2) = 0. \quad (41)$\nSince the right-hand side does not depend on $B$, we also have\n$\lim_{B \to \infty} \lim_{N \to \infty} \max(0,\ \hat\sigma^2_\mu(x_\star) - \sigma_A^2) = 0. \quad (42)$\nFrom the definition of $\hat v_\sigma$, we have that\n$\lim_{B \to \infty} \lim_{N \to \infty} \hat v_\sigma = 0. \quad (43)$\nThe proposition follows by combining equation 42 and equation 43 with equation 1." } ]
2020
CONSERVATIVE UNCERTAINTY ESTIMATION BY FITTING PRIOR NETWORKS
SP:1707af8ace423f653ad0355d3a363fa1af8c7daf
[ "Paper summary: This paper proposes a new normalization technique specially designed for settings with small mini-batch sizes (where previous methods like BatchNorm are known to suffer). The approach aggregates mean/variance statistics from previous iterations, weighted based on the Taylor expansion, to get a better estimate of population statistics. The authors evaluate their approach on ImageNet classification, and object detection and instance segmentation on the COCO dataset.", "This paper proposes a novel Cross-Iteration Batch Normalization (CBN) to address the limitation of BN in the case of small mini-batch sizes. Different from existing methods, CBN exploits the statistics cross different iterations to obtain more accurate estimates of the data statistics. Specifically, the proposed CBN uses Taylor polynomials to approximate the statistics using the information of multiple recent iterations. The experiments on both image classification and object detection tasks demonstrate the effectiveness of the proposed method." ]
A well-known issue of Batch Normalization is its significantly reduced effectiveness in the case of small mini-batch sizes. When a mini-batch contains few examples, the statistics upon which the normalization is defined cannot be reliably estimated from it during a training iteration. To address this problem, we present Cross-Iteration Batch Normalization (CBN), in which examples from multiple recent iterations are jointly utilized to enhance estimation quality. A challenge of computing statistics over multiple iterations is that the network activations from different iterations are not comparable to each other due to changes in network weights. We thus compensate for the network weight changes via a proposed technique based on Taylor polynomials, so that the statistics can be accurately estimated and batch normalization can be effectively applied. On object detection and image classification with small mini-batch sizes, CBN is found to outperform the original batch normalization and a direct calculation of statistics over previous iterations without the proposed compensation technique.
[]
[ { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Iasonas Kokkinos", "Kevin Murphy", "Alan L Yuille" ], "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "Jifeng Dai", "Haozhi Qi", "Yuwen Xiong", "Yi Li", "Guodong Zhang", "Han Hu", "Yichen Wei" ], "title": "Deformable convolutional networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "arXiv preprint arXiv:1512.03385,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Georgia Gkioxari", "Piotr Dollár", "Ross Girshick" ], "title": "Mask r-cnn", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Tong He", "Zhi Zhang", "Hang Zhang", "Zhongyue Zhang", "Junyuan Xie", "Mu Li" ], "title": "Bag of tricks for image classification with convolutional neural networks", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Sergey Ioffe" ], "title": "Batch renormalization: Towards reducing minibatch dependence in batch-normalized models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Genevieve B Orr", "Klaus-Robert Müller" ], "title": "Efficient backprop. 
", "venue": "Neural Networks: Tricks of the Trade,", "year": 1998 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Tsung-Yi Lin", "Piotr Dollar", "Ross Girshick", "Kaiming He", "Bharath Hariharan", "Serge Belongie" ], "title": "Feature pyramid networks for object detection", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Jonathan Long", "Evan Shelhamer", "Trevor Darrell" ], "title": "Fully convolutional networks for semantic segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Ping Luo", "Jiamin Ren", "Zhanglin Peng" ], "title": "Differentiable learning-to-normalize via switchable normalization", "venue": "arXiv preprint arXiv:1806.10779,", "year": 2018 }, { "authors": [ "Hyeonseob Nam", "Hyo-Eun Kim" ], "title": "Batch-instance normalization for adaptively style-invariant neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Chao Peng", "Tete Xiao", "Zeming Li", "Yuning Jiang", "Xiangyu Zhang", "Kai Jia", "Gang Yu", "Jian Sun" ], "title": "Megdet: A large mini-batch object detector", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Tim Salimans", "Diederik P Kingma" ], "title": "Weight normalization: A simple reparameterization to accelerate training of deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Wenqi Shao", "Tianjian Meng", "Jingyu Li", "Ruimao Zhang", "Yudian Li", "Xiaogang Wang", "Ping Luo" ], "title": "Ssn: Learning sparse switchable normalization via sparsestmax", "venue": "arXiv preprint,", "year": 2019 }, { "authors": [ "J Sola", "Joaquin Sevilla" ], "title": "Importance of input data normalization for the application of neural networks to complex industrial problems", "venue": "IEEE Transactions on nuclear science,", "year": 1997 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Instance normalization: The missing ingredient for fast stylization", "venue": "arXiv preprint arXiv:1607.08022,", "year": 2016 }, { "authors": [ "Guangrun Wang", "Ping Luo", "Xinjiang Wang", "Liang Lin" ], "title": "Kalman normalization: Normalizing internal representations across network layers", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Batch Normalization (BN) (Ioffe & Szegedy, 2015) has played a significant role in the success of deep neural networks. It was introduced to address the issue of internal covariate shift, where the distribution of network activations changes during training iterations due to the updates of network parameters. This shift is commonly believed to be disruptive to network training, and BN alleviates this problem through normalization of the network activations by their mean and variance, computed over the examples within the mini-batch at each iteration. With this normalization, network training can be performed at much higher learning rates and with less sensitivity to weight initialization.\nIn BN, it is assumed that the distribution statistics for the examples within each mini-batch reflect the statistics over the full training set. While this assumption is generally valid for large batch sizes, it breaks down in the small batch size regime (Peng et al., 2018; Wu & He, 2018; Ioffe, 2017), where noisy statistics computed from small sets of examples can lead to a dramatic drop in performance. This problem hinders the application of BN to memory-consuming tasks such as object detection (Ren et al., 2015; Dai et al., 2017), semantic segmentation (Long et al., 2015; Chen et al., 2017) and action recognition (Wang et al., 2018b), where batch sizes are limited due to memory constraints.\nTowards improving estimation of statistics in the small batch size regime, alternative normalizers have been proposed. Several of them, including Layer Normalization (LN) (Ba et al., 2016), Instance Normalization (IN) (Ulyanov et al., 2016), and Group Normalization (GN) (Wu & He, 2018), compute the mean and variance over the channel dimension, independent of batch size. Different channel-wise normalization techniques, however, tend to be suitable for different tasks, depending on the set of channels involved. On the other hand, synchronized BN (SyncBN) (Peng et al., 2018) yields consistent improvements by processing larger batch sizes across multiple GPUs. These gains in performance come at the cost of additional overhead needed for synchronization across the devices.\nA seldom explored direction for estimating better statistics is to compute them over the examples from multiple recent training iterations, instead of from only the current iteration as done in previous techniques. This can substantially enlarge the pool of data from which the mean and variance are obtained. However, there exists an obvious drawback to this approach, in that the activation values from different iterations are not comparable to each other due to the changes in network weights. As shown in Figure 1, directly calculating the statistics over multiple iterations, which we refer to as Naive CBN, results in lower accuracy.\nIn this paper, we present a method that compensates for the network weight changes among iterations, so that examples from preceding iterations can be effectively used to improve batch normalization. Our method, called Cross-Iteration Batch Normalization (CBN), is motivated by the observation that network weights change gradually, instead of abruptly, between consecutive training iterations, thanks to the iterative nature of Stochastic Gradient Descent (SGD). 
As a result, the mean and variance of examples from recent iterations can be well approximated for the current network weights via a low-order Taylor polynomial, defined on gradients of the statistics with respect to the network weights. The compensated means and variances from multiple recent iterations are averaged with those of the current iteration to produce better estimates of the statistics.\nIn the small batch size regime, CBN leads to appreciable performance improvements over the original BN, as exhibited in Figure 1. The superiority of our proposed approach is further demonstrated through more extensive experiments on ImageNet classification and object detection on COCO. These gains are obtained with negligible overhead, as the statistics from previous iterations have already been computed and Taylor polynomials are simple to evaluate. With this work, it is shown that cues for batch normalization can successfully be extracted along the time dimension, opening a new direction for investigation." }, { "heading": "2 RELATED WORK", "text": "The importance of normalization in training neural networks has been recognized for decades (LeCun et al., 1998). In general, normalization can be performed on three components: input data, hidden activations, and network parameters. Among them, input data normalization is used most commonly because of its simplicity and effectiveness (Sola & Sevilla, 1997; LeCun et al., 1998).\nAfter the introduction of Batch Normalization (Ioffe & Szegedy, 2015), the normalization of activations has become nearly as prevalent. By normalizing hidden activations by their statistics within each mini-batch, BN effectively alleviates the vanishing gradient problem and significantly speeds up the training of deep networks. To mitigate the mini-batch size dependency of BN, a number of variants have been proposed, including Layer Normalization (LN) (Ba et al., 2016), Instance Normalization (IN) (Ulyanov et al., 2016), Group Normalization (GN) (Wu & He, 2018), and Batch Instance Normalization (BIN) (Nam & Kim, 2018). The motivation of LN is to explore more suitable statistics for sequential models, while IN performs normalization in a manner similar to BN but with statistics computed only for each instance. GN achieves a balance between IN and LN by dividing features into multiple groups along the channel dimension and computing the mean and variance within each group for normalization. BIN introduces a learnable method for automatically switching between normalizing and maintaining style information, enjoying the advantages of both BN and IN on style transfer tasks. Cross-GPU Batch Normalization (CGBN or SyncBN) (Peng et al., 2018) extends BN across multiple GPUs for the purpose of increasing the effective batch size. Though providing higher accuracy, it introduces synchronization overhead to the training process. Kalman Normalization (KN) (Wang et al., 2018a) presents a Kalman filtering procedure for estimating the statistics of a network layer from the layer's observed statistics and the computed statistics of previous layers.\nBatch Renormalization (BRN) (Ioffe, 2017) was the first attempt to utilize the statistics of recent iterations for normalization. It does not compensate for the statistics from recent iterations, but rather down-weights the importance of statistics from distant iterations. This down-weighting heuristic, however, does not make the resulting statistics “correct”, as the statistics from recent iterations are not computed under the current network weights. 
BRN can be deemed a special version of our Naive CBN baseline (without the Taylor polynomial approximation), where distant iterations are down-weighted.\nRecent works have also investigated the normalization of network parameters. In Weight Normalization (WN) (Salimans & Kingma, 2016), the optimization of network weights is improved through a reparameterization of weight vectors into their length and direction. Weight Standardization (WS) (Qiao et al., 2019) instead reparameterizes weights based on their first and second moments for the purpose of smoothing the loss landscape of the optimization problem. To combine the advantages of multiple normalization techniques, Switchable Normalization (SN) (Luo et al., 2018) and Sparse Switchable Normalization (SSN) (Shao et al., 2019) make use of differentiable learning to switch among different normalization methods.\nThe proposed CBN takes an activation normalization approach that aims to mitigate the mini-batch dependency of BN. Different from existing techniques, it provides a way to effectively aggregate statistics across multiple training iterations." }, { "heading": "3 METHOD", "text": "" }, { "heading": "3.1 REVISITING BATCH NORMALIZATION", "text": "The original batch normalization (BN) (Ioffe & Szegedy, 2015) whitens the activations of each layer by the statistics computed within a mini-batch. Denote $\theta_t$ and $x_{t,i}(\theta_t)$ as the network weights and the feature response of a certain layer for the $i$-th example in the $t$-th mini-batch. With these values, BN conducts the following normalization:\n$\hat{x}_{t,i}(\theta_t) = \frac{x_{t,i}(\theta_t) - \mu_t(\theta_t)}{\sqrt{\sigma_t(\theta_t)^2 + \epsilon}}, \quad (1)$\nwhere $\hat{x}_{t,i}(\theta_t)$ is the whitened activation with zero mean and unit variance, $\epsilon$ is a small constant added for numerical stability, and $\mu_t(\theta_t)$ and $\sigma_t(\theta_t)$ are the mean and variance computed over all the examples from the current mini-batch, i.e.,\n$\mu_t(\theta_t) = \frac{1}{m}\sum_{i=1}^{m} x_{t,i}(\theta_t), \quad (2)$\n$\sigma_t(\theta_t) = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(x_{t,i}(\theta_t) - \mu_t(\theta_t)\right)^2} = \sqrt{\nu_t(\theta_t) - \mu_t(\theta_t)^2}, \quad (3)$\nwhere $\nu_t(\theta_t) = \frac{1}{m}\sum_{i=1}^{m} x_{t,i}(\theta_t)^2$ and $m$ denotes the number of examples in the current mini-batch. The whitened activation $\hat{x}_{t,i}(\theta_t)$ further undergoes a linear transform with learnable weights to increase its expressive power:\n$y_{t,i}(\theta_t) = \gamma\,\hat{x}_{t,i}(\theta_t) + \beta, \quad (4)$\nwhere $\gamma$ and $\beta$ are the learnable parameters (initialized to $\gamma = 1$ and $\beta = 0$ in this work). A runnable sketch of this computation is given below.\nWhen the batch size $m$ is small, the statistics $\mu_t(\theta_t)$ and $\sigma_t(\theta_t)$ become noisy estimates of the training set statistics, thus degrading the effects of batch normalization. In the ImageNet classification task for which the BN module was originally designed, a batch size of 32 is typical. However, for other tasks requiring larger models and/or higher image resolution, such as object detection, semantic segmentation and video recognition, the typical batch size may be as small as 1 or 2 due to GPU memory limitations. The original BN becomes considerably less effective in such cases." }, { "heading": "3.2 LEVERAGING STATISTICS FROM PREVIOUS ITERATIONS", "text": "To address the issue of BN with small mini-batches, a naive approach is to compute the mean and variance over the current and previous iterations. However, the statistics $\mu_{t-\tau}(\theta_{t-\tau})$ and $\nu_{t-\tau}(\theta_{t-\tau})$ of the $(t-\tau)$-th iteration are computed under the network weights $\theta_{t-\tau}$, making them obsolete for the current iteration. 
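For reference, the plain BN computation of Eqs. (1)-(4) can be sketched in NumPy as follows. This is an illustrative sketch; the paper's setting operates on 4D convolutional feature maps rather than this toy (m, C) shape.

```python
import numpy as np

def batch_norm_forward(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Plain BN over a mini-batch: per-channel whitening followed by a
    learnable affine transform. x has shape (m, C)."""
    mu = x.mean(axis=0)                       # Eq. (2)
    nu = (x ** 2).mean(axis=0)                # E[x^2]
    sigma2 = nu - mu ** 2                     # Eq. (3)
    x_hat = (x - mu) / np.sqrt(sigma2 + eps)  # Eq. (1)
    return gamma * x_hat + beta               # Eq. (4)

print(batch_norm_forward(np.random.randn(4, 8)).std(axis=0))  # ~1 per channel
```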
As a consequence, directly aggregating statistics from multiple iterations produces inaccurate estimates of the mean and variance, leading to significantly worse performance.\nWe observe that the network weights change smoothly between consecutive iterations, due to the nature of gradient-based training. This allows us to approximate $\mu_{t-\tau}(\theta_t)$ and $\nu_{t-\tau}(\theta_t)$ from the readily available $\mu_{t-\tau}(\theta_{t-\tau})$ and $\nu_{t-\tau}(\theta_{t-\tau})$ via a Taylor polynomial, i.e.,\n$\mu_{t-\tau}(\theta_t) = \mu_{t-\tau}(\theta_{t-\tau}) + \frac{\partial \mu_{t-\tau}(\theta_{t-\tau})}{\partial \theta_{t-\tau}}(\theta_t - \theta_{t-\tau}) + \mathcal{O}(\|\theta_t - \theta_{t-\tau}\|^2), \quad (5)$\n$\nu_{t-\tau}(\theta_t) = \nu_{t-\tau}(\theta_{t-\tau}) + \frac{\partial \nu_{t-\tau}(\theta_{t-\tau})}{\partial \theta_{t-\tau}}(\theta_t - \theta_{t-\tau}) + \mathcal{O}(\|\theta_t - \theta_{t-\tau}\|^2), \quad (6)$\nwhere $\partial \mu_{t-\tau}(\theta_{t-\tau})/\partial \theta_{t-\tau}$ and $\partial \nu_{t-\tau}(\theta_{t-\tau})/\partial \theta_{t-\tau}$ are gradients of the statistics with respect to the network weights, and $\mathcal{O}(\|\theta_t - \theta_{t-\tau}\|^2)$ denotes the higher-order terms of the Taylor polynomial, which can be omitted since the first-order term dominates when $(\theta_t - \theta_{t-\tau})$ is small.\nIn Eq. (5) and Eq. (6), the gradients $\partial \mu_{t-\tau}(\theta_{t-\tau})/\partial \theta_{t-\tau}$ and $\partial \nu_{t-\tau}(\theta_{t-\tau})/\partial \theta_{t-\tau}$ cannot be precisely determined at a negligible cost because the statistics $\mu^l_{t-\tau}(\theta_{t-\tau})$ and $\nu^l_{t-\tau}(\theta_{t-\tau})$ for a node at the $l$-th network layer depend on all the network weights prior to the $l$-th layer, i.e., $\partial \mu^l_{t-\tau}(\theta_{t-\tau})/\partial \theta^r_{t-\tau} \neq 0$ and $\partial \nu^l_{t-\tau}(\theta_{t-\tau})/\partial \theta^r_{t-\tau} \neq 0$ for $r \le l$, where $\theta^r_{t-\tau}$ denotes the network weights at the $r$-th layer. Only when $r = l$ can these gradients be derived in closed form efficiently.\nEmpirically, we find that as the layer index $r$ decreases ($r \le l$), the partial gradients $\partial \mu^l_t(\theta_t)/\partial \theta^r_t$ and $\partial \nu^l_t(\theta_t)/\partial \theta^r_t$ rapidly diminish. These reduced effects of network weight changes at earlier layers on the activation distributions in later layers may perhaps be explained by the reduced internal covariate shift of BN. Motivated by this phenomenon, which is studied in Appendix C, we propose to truncate these partial gradients at layer $l$.\nThus, we further approximate Eq. (5) and Eq. (6) by\n$\mu^l_{t-\tau}(\theta_t) \approx \mu^l_{t-\tau}(\theta_{t-\tau}) + \frac{\partial \mu^l_{t-\tau}(\theta_{t-\tau})}{\partial \theta^l_{t-\tau}}(\theta^l_t - \theta^l_{t-\tau}), \quad (7)$\n$\nu^l_{t-\tau}(\theta_t) \approx \nu^l_{t-\tau}(\theta_{t-\tau}) + \frac{\partial \nu^l_{t-\tau}(\theta_{t-\tau})}{\partial \theta^l_{t-\tau}}(\theta^l_t - \theta^l_{t-\tau}). \quad (8)$\nA naive implementation of $\partial \mu^l_{t-\tau}(\theta_{t-\tau})/\partial \theta^l_{t-\tau}$ and $\partial \nu^l_{t-\tau}(\theta_{t-\tau})/\partial \theta^l_{t-\tau}$ involves a computational overhead of $\mathcal{O}(C_l \times C_l \times C_{l-1} \times K)$, where $C_l$ and $C_{l-1}$ denote the channel dimensions of the $l$-th layer and the $(l-1)$-th layer, respectively, and $K$ denotes the kernel size of $\theta^l_{t-\tau}$. Here we find that the operation can be implemented efficiently in $\mathcal{O}(C_l \times C_{l-1} \times K)$, thanks to the averaging over feature responses of $\mu$ and $\nu$. See Appendix B for the details." }, { "heading": "3.3 CROSS-ITERATION BATCH NORMALIZATION", "text": "After compensating for the network weight changes, we aggregate the statistics of the $k-1$ most recent iterations with those of the current iteration $t$ to obtain the statistics used in CBN:\n$\bar{\mu}^l_{t,k}(\theta_t) = \frac{1}{k}\sum_{\tau=0}^{k-1} \mu^l_{t-\tau}(\theta_t), \quad (9)$\n$\bar{\nu}^l_{t,k}(\theta_t) = \frac{1}{k}\sum_{\tau=0}^{k-1} \max\left[\nu^l_{t-\tau}(\theta_t),\ \mu^l_{t-\tau}(\theta_t)^2\right], \quad (10)$\n$\bar{\sigma}^l_{t,k}(\theta_t) = \sqrt{\bar{\nu}^l_{t,k}(\theta_t) - \bar{\mu}^l_{t,k}(\theta_t)^2}, \quad (11)$\nwhere $\mu^l_{t-\tau}(\theta_t)$ and $\nu^l_{t-\tau}(\theta_t)$ are computed from Eq. (7) and Eq. (8). In Eq. (10), $\bar{\nu}^l_{t,k}(\theta_t)$ is determined from the maximum of $\nu^l_{t-\tau}(\theta_t)$ and $\mu^l_{t-\tau}(\theta_t)^2$ in each iteration because $\nu^l_{t-\tau}(\theta_t) \ge \mu^l_{t-\tau}(\theta_t)^2$ should hold for valid statistics but may be violated by the Taylor polynomial approximations in Eq. (7) and Eq. (8). Finally, $\bar{\mu}^l_{t,k}(\theta_t)$ and $\bar{\sigma}^l_{t,k}(\theta_t)$ are applied to normalize the corresponding feature responses $\{x^l_{t,i}(\theta_t)\}_{i=1}^m$ at the current iteration:\n$\hat{x}^l_{t,i}(\theta_t) = \frac{x^l_{t,i}(\theta_t) - \bar{\mu}^l_{t,k}(\theta_t)}{\sqrt{\bar{\sigma}^l_{t,k}(\theta_t)^2 + \epsilon}}. \quad (12)$\nWith CBN, the effective number of examples used to compute the statistics for the current iteration is $k$ times as large as that for the original BN. 
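The compensation of Eqs. (7)-(8) and the aggregation of Eqs. (9)-(12) can be sketched as below. This is a schematic NumPy illustration with hypothetical shapes and randomly generated statistics, not the paper's implementation.

```python
import numpy as np

def taylor_compensate(stat_prev, grad_prev, theta_now, theta_prev):
    """Eqs. (7)-(8): first-order update of a saved per-channel statistic."""
    delta = (theta_now - theta_prev).ravel()
    return stat_prev + grad_prev.reshape(len(stat_prev), -1) @ delta

def cbn_normalize(x, mus, nus, eps=1e-5):
    """Eqs. (9)-(12): aggregate k compensated statistics and whiten x (m, C)."""
    mu_bar = np.mean(mus, axis=0)                                       # Eq. (9)
    nu_bar = np.mean([np.maximum(n, mu ** 2)                            # Eq. (10)
                      for n, mu in zip(nus, mus)], axis=0)
    return (x - mu_bar) / np.sqrt(nu_bar - mu_bar ** 2 + eps)           # Eqs. (11)-(12)

rng = np.random.default_rng(0)
C, P = 8, 24                                # channels, flattened weight size
x = rng.normal(size=(4, C))                 # current mini-batch activations
theta_prev = rng.normal(size=P)
theta_now = theta_prev + rng.normal(scale=1e-2, size=P)   # weights drift slightly
mu_prev, nu_prev = rng.normal(size=C), 1.0 + rng.normal(size=C) ** 2
g_mu = rng.normal(scale=1e-2, size=(C, P))  # saved d(mu)/d(theta) from t - tau
g_nu = rng.normal(scale=1e-2, size=(C, P))  # saved d(nu)/d(theta) from t - tau

mus = [x.mean(0), taylor_compensate(mu_prev, g_mu, theta_now, theta_prev)]
nus = [(x ** 2).mean(0), taylor_compensate(nu_prev, g_nu, theta_now, theta_prev)]
print(cbn_normalize(x, mus, nus).mean(axis=0))
```

Note that the max in Eq. (10) also guarantees a non-negative variance in the last line, matching the validity argument in the text.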
In training, the loss gradients are backpropagated to the network weights and activations at the current iteration, i.e., $\theta^l_t$ and $x^l_{t,i}(\theta_t)$. Those of the previous iterations are fixed and do not receive gradients. Hence, the computation cost of CBN in back-propagation is the same as that of BN.\nReplacing the BN modules in a network by CBN leads to only minor increases in computational overhead and memory footprint. For computation, the additional overhead mainly comes from computing the partial derivatives $\partial \mu_{t-\tau}(\theta_{t-\tau})/\partial \theta^l_{t-\tau}$ and $\partial \nu_{t-\tau}(\theta_{t-\tau})/\partial \theta^l_{t-\tau}$, which is insignificant in relation to the overhead of the whole network. For memory, the module requires access to the statistics ($\{\mu^l_{t-\tau}(\theta_{t-\tau})\}_{\tau=1}^{k-1}$ and $\{\nu^l_{t-\tau}(\theta_{t-\tau})\}_{\tau=1}^{k-1}$) and the gradients ($\{\partial \mu_{t-\tau}(\theta_{t-\tau})/\partial \theta^l_{t-\tau}\}_{\tau=1}^{k-1}$ and $\{\partial \nu_{t-\tau}(\theta_{t-\tau})/\partial \theta^l_{t-\tau}\}_{\tau=1}^{k-1}$) computed for the most recent $k-1$ iterations, which is also minor compared to the rest of the memory consumed in processing the input examples. The additional computation and memory of CBN are reported for our experiments in Table 6.\nA key hyper-parameter in the proposed CBN is the temporal window size, $k$, of recent iterations used for statistics estimation. A broader window enlarges the set of examples, but the example quality becomes increasingly lower for more distant iterations, since the differences in the network parameters $\theta_t$ and $\theta_{t-\tau}$ become more significant and are compensated less well by a low-order Taylor polynomial. Empirically, we found that CBN is effective with a window size up to $k = 8$ in a variety of settings and tasks. The only trick is that the window size should be kept small at the beginning of training, when the network weights change quickly. Thus, we introduce a burn-in period of length $T_{\text{burn-in}}$ for the window size, during which $k = 1$ and CBN degenerates to the original BN. In our experiments, the burn-in period is set to 25 epochs on ImageNet image classification and 3 epochs on COCO object detection by default. Ablations on this parameter are presented in the Appendix.\nTable 1 compares CBN with other feature normalization methods. The key difference among these approaches is the axis along which the statistics are computed and the features are normalized. The previous techniques are all designed to exploit examples from the same iteration. By contrast, CBN explores the aggregation of examples along the temporal dimension. As the data utilized by CBN lies in a direction orthogonal to that of previous methods, the proposed CBN could potentially be combined with other feature normalization approaches to further enhance statistics estimation in certain challenging applications." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 IMAGE CLASSIFICATION ON IMAGENET", "text": "Experimental settings. ImageNet (Russakovsky et al., 2015) is a benchmark dataset for image classification, containing 1.28M training images and 50K validation images from 1000 classes. We follow the standard setting in (He et al., 2015) to train deep networks on the training set and report the single-crop top-1 accuracy on the validation set. Our preprocessing and augmentation strategy strictly follows the GN baseline (Wu & He, 2018). We use a weight decay of 0.0001 for all weight layers, including $\gamma$ and $\beta$. We train a standard ResNet-18 for 100 epochs on 4 GPUs, and decrease the learning rate by the cosine decay strategy (He et al., 2019). We use the average over 5 trials for all results. All hyper-parameters, e.g. the group size of GN, are carefully tuned via cross-validation. 
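The cosine learning-rate decay mentioned above can be written as a one-line schedule. This is a minimal sketch, assuming decay from the base rate to zero over the full training run as in He et al. (2019).

```python
import math

def cosine_lr(step, total_steps, base_lr=0.1):
    """Cosine learning-rate decay: base_lr at step 0, zero at the final step."""
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * step / total_steps))

print([round(cosine_lr(s, 100), 4) for s in (0, 50, 100)])  # [0.1, 0.05, 0.0]
```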
ResNet-18 with BN is our base model. To compare with the other normalization methods, we directly replace BN with IN, LN, GN, BRN, and our proposed CBN.\nComparison of feature normalization methods. We compare the performance of each normalization method with a normal batch size, 32, in Table 2. With sufficient data for reliable statistics, BN easily reaches the highest top-1 accuracy. Similar to the results in previous papers (Wu & He, 2018), IN and LN achieve significantly worse performance than BN. GN works well on image classification, but still shows a small degradation of 1.2% compared with BN. Of all the methods, our CBN is the only one able to achieve accuracy comparable to BN, as it converges to the procedure of BN as the batch size becomes larger.\nSensitivity to batch size. We compare the behavior of CBN, the original BN (Ioffe & Szegedy, 2015), GN (Wu & He, 2018), and BRN (Ioffe, 2017) at the same number of images per GPU on ImageNet classification. For CBN, the recent iterations are utilized so as to ensure that the number of effective examples is no fewer than 16. For BRN, the settings strictly follow the original paper. We adopt a learning rate of 0.1 for the batch size of 32, and linearly scale the learning rate by N/32 for a batch size of N.\nThe results are shown in Table 3. For the original BN, its accuracy drops noticeably as the number of images per GPU is reduced from 32 to 2. BRN suffers a significant performance drop as well. GN maintains its accuracy by utilizing the channel dimension but not the batch dimension. For CBN, its accuracy holds by exploiting the examples of recent iterations. Also, CBN outperforms GN by 0.9% in average top-1 accuracy over the different batch sizes. This is reasonable, because the statistics computation of CBN introduces uncertainty caused by the stochastic batch sampling, as in BN, but this uncertainty is missing in GN, which results in some loss of regularization ability." }, { "heading": "4.2 OBJECT DETECTION AND INSTANCE SEGMENTATION ON COCO", "text": "Experimental settings. COCO (Lin et al., 2014) is chosen as the benchmark for object detection and instance segmentation. Models are trained on the COCO 2017 train split with 118k images, and evaluated on the COCO 2017 validation split with 5k images. Following the standard protocol in (Lin et al., 2014), the object detection and instance segmentation accuracies are measured by the mean average precision (mAP) scores at different intersection-over-union (IoU) overlaps at the box and the mask levels, respectively.\nFollowing (Wu & He, 2018), Faster R-CNN (Ren et al., 2015) and Mask R-CNN (He et al., 2017) with FPN (Lin et al., 2017) are chosen as the baselines for object detection and instance segmentation, respectively. For both, the 2fc box head is replaced by a 4conv1fc head for better use of the normalization mechanism (Wu & He, 2018). The backbone networks are ImageNet-pretrained ResNet-50 (default) or ResNet-101, with the specified normalization. Finetuning is performed on the COCO train set for 12 epochs on 4 GPUs by SGD, where each GPU processes 4 images (default). Note that the mean and variance statistics in CBN are computed within each GPU. The learning rate is initialized to 0.02 ∗ N/16 for a batch size per GPU of N, and is decayed by a factor of 10 at the 9-th and the 11-th epochs. The weight decay and momentum parameters are set to 0.0001 and 0.9, respectively. We use the average over 5 trials for all results. 
All hyper-parameters, e.g. the group size of GN, are carefully tuned via cross-validation.\nAs done in (Wu & He, 2018), we experiment with two settings, where the normalizers are activated only at the task-specific heads with frozen BN at the backbone (default), or the normalizers are activated at all the layers except for the early conv1 and conv2 stages in ResNet.\nNormalizers at the backbone and task-specific heads. We further study the effect of different normalizers on the backbone network and the task-specific heads for object detection on COCO. CBN, the original BN, SyncBN, and GN are included in the comparison.\nTable 4 presents the results. When BN is frozen in the backbone and no normalizer is applied at the head, the APbbox score is 36.9%. When the original BN is applied at the head only and at both the backbone and the head, the accuracy drops to 36.3% and 35.5%, respectively. For CBN, the accuracy is 37.7% and 37.3% in these two settings, respectively. Without any synchronization across GPUs, CBN achieves performance comparable to SyncBN and GN, showing the superiority of the proposed approach. Unfortunately, due to the accumulation of approximation error, CBN suffers a 0.4% decrease in APbbox when replacing frozen BN with CBN in the backbone. Even so, CBN still outperforms the variant with unfrozen BN in the backbone by 1.8%.\nInstance segmentation and stronger backbones. Results of object detection (Faster R-CNN (Ren et al., 2015)) and instance segmentation (Mask R-CNN (He et al., 2017)) with ResNet-50 and ResNet-101 are presented in Table 5. We observe that our proposed CBN achieves performance comparable to SyncBN and GN with R50 and R101 as the backbone on both Faster R-CNN and Mask R-CNN, which demonstrates that CBN is robust and versatile across various deep models and tasks." }, { "heading": "4.3 ABLATION STUDY", "text": "Effect of temporal window size k. We conduct this ablation on ImageNet image classification and COCO object detection, with each GPU processing 4 images. Figure 3 presents the results. When k = 1, only the batch from the current iteration is utilized; therefore, CBN degenerates to the original BN. The accuracy suffers due to the noisy statistics on small batch sizes. As the window size k gradually increases, more examples from recent iterations are utilized for statistics estimation, leading to greater accuracy. Accuracy saturates at k = 8 and even drops slightly. For more distant iterations, the network weights differ more substantially and the Taylor polynomial approximation becomes less accurate.\nOn the other hand, it is empirically observed that the original BN saturates at a batch size of 16 or 32 for numerous applications (Peng et al., 2018; Wu & He, 2018), indicating that the computed statistics become accurate. Thus, a temporal window size of $k = \min(\lceil 16/\text{bs per GPU} \rceil, 8)$ is suggested.\nEffect of compensation. To study this, we compare CBN with 1) a naive baseline where statistics from recent iterations are directly aggregated without compensation via the Taylor polynomial, referred to as Naive CBN; and 2) the original BN applied with the same effective example number as CBN (i.e., its batch size per GPU is set to the product of the batch size per GPU and the temporal window size of CBN), which does not require any compensation and serves as an upper performance bound.\nThe experimental results are also presented in Figure 3. CBN clearly surpasses Naive CBN when the previous iterations are included. 
Actually, Naive CBN fails when the temporal window size grows to k = 8, as shown in Figure 3(a), demonstrating the necessity of compensating for the changing network weights over iterations. Compared with the original BN upper bound, CBN achieves similar accuracy at the same effective example number. This result indicates that the compensation using a low-order Taylor polynomial in CBN is effective.\nFigure 4 presents the train and test curves of CBN, Naive CBN, BN-bs4, and BN-bs16 on ImageNet, with 4 images per GPU and a temporal window size of 4 for CBN, Naive CBN, and BN-bs4, and 16 images per GPU for BN-bs16. The train curve of CBN is close to that of BN-bs4 at the beginning, and approaches that of BN-bs16 at the end. The reason is that we adopt a burn-in period to avoid the disadvantage of rapid statistics change at the beginning of training. The gap between the train curves of Naive CBN and CBN shows that Naive CBN cannot even reach a good convergence on the training set. The test curve of CBN is close to that of BN-bs16 at the end, while Naive CBN exhibits considerable jitter. All these phenomena indicate the effectiveness of our proposed Taylor compensation.\nAdditional computational overhead and memory footprint. As the inference stage of CBN is the same as that of BN, we only need to compare the computational overhead and memory footprint at training time, shown in Table 6. The extra computational overhead mainly includes the calculation of the statistics' respective gradients, the Taylor compensations, and the averaging operations. For the extra memory, the statistics ($\mu$ and $\nu$), their respective gradients, and the network parameters ($\theta_{t-1} \cdots \theta_{t-(k-1)}$) of previous iterations are all stored when applying CBN.\nFrom these results, the additional overhead of CBN is seen to be minor." }, { "heading": "A ALGORITHM OUTLINE", "text": "Algorithm 1 presents an outline of our proposed Cross-Iteration Batch Normalization (CBN).\nAlgorithm 1: Cross-Iteration Batch Normalization (CBN)\nInput: Feature responses of a network node of the $l$-th layer at the $t$-th iteration $\{x^l_{t,i}(\theta_t)\}_{i=1}^m$, network weights $\{\theta^l_{t-\tau}\}_{\tau=0}^{k-1}$, statistics $\{\mu^l_{t-\tau}(\theta_{t-\tau})\}_{\tau=1}^{k-1}$ and $\{\nu^l_{t-\tau}(\theta_{t-\tau})\}_{\tau=1}^{k-1}$, and gradients $\{\partial \mu_{t-\tau}(\theta_{t-\tau})/\partial \theta^l_{t-\tau}\}_{\tau=1}^{k-1}$ and $\{\partial \nu_{t-\tau}(\theta_{t-\tau})/\partial \theta^l_{t-\tau}\}_{\tau=1}^{k-1}$ from the most recent $k-1$ iterations\nOutput: $\{y^l_{t,i}(\theta_t) = \text{CBN}(x^l_{t,i}(\theta_t))\}$\n1: $\mu_t(\theta_t) \leftarrow \frac{1}{m}\sum_{i=1}^m x_{t,i}(\theta_t)$, $\nu_t(\theta_t) \leftarrow \frac{1}{m}\sum_{i=1}^m x^2_{t,i}(\theta_t)$ // statistics of the current iteration\n2: for $\tau \in \{1, \dots, k-1\}$ do\n3: $\mu^l_{t-\tau}(\theta_t) \leftarrow \mu^l_{t-\tau}(\theta_{t-\tau}) + \frac{\partial \mu^l_{t-\tau}(\theta_{t-\tau})}{\partial \theta^l_{t-\tau}}(\theta^l_t - \theta^l_{t-\tau})$ // approximation from recent iterations\n4: $\nu^l_{t-\tau}(\theta_t) \leftarrow \nu^l_{t-\tau}(\theta_{t-\tau}) + \frac{\partial \nu^l_{t-\tau}(\theta_{t-\tau})}{\partial \theta^l_{t-\tau}}(\theta^l_t - \theta^l_{t-\tau})$ // approximation from recent iterations\n5: end for\n6: $\bar{\mu}^l_{t,k}(\theta_t) \leftarrow \frac{1}{k}\sum_{\tau=0}^{k-1} \mu^l_{t-\tau}(\theta_t)$ // averaging over recent iterations\n7: $\bar{\nu}^l_{t,k}(\theta_t) \leftarrow \frac{1}{k}\sum_{\tau=0}^{k-1} \max\left[\nu^l_{t-\tau}(\theta_t),\ \mu^l_{t-\tau}(\theta_t)^2\right]$ // validation and averaging over recent iterations\n8: $\bar{\sigma}^l_{t,k}(\theta_t)^2 \leftarrow \bar{\nu}^l_{t,k}(\theta_t) - \bar{\mu}^l_{t,k}(\theta_t)^2$\n9: $\hat{x}^l_{t,i}(\theta_t) \leftarrow \frac{x^l_{t,i}(\theta_t) - \bar{\mu}^l_{t,k}(\theta_t)}{\sqrt{\bar{\sigma}^l_{t,k}(\theta_t)^2 + \epsilon}}$ // normalize\n10: $y^l_{t,i}(\theta_t) \leftarrow \gamma\,\hat{x}^l_{t,i}(\theta_t) + \beta$ // scale and shift" }, { "heading": "B EFFICIENT IMPLEMENTATION OF $\partial \mu^l_{t-\tau}(\theta_{t-\tau})/\partial \theta^l_{t-\tau}$ AND $\partial \nu^l_{t-\tau}(\theta_{t-\tau})/\partial \theta^l_{t-\tau}$", "text": "Let $C_l$ and $C_{l-1}$ denote the channel dimensions of the $l$-th layer and the $(l-1)$-th layer, respectively, and let $K$ denote the kernel size of $\theta^l_{t-\tau}$. $\mu^l_{t-\tau}$ and $\nu^l_{t-\tau}$ are thus of $C_l$ dimensions in channels, and $\theta^l_{t-\tau}$ is a $C_l \times C_{l-1} \times K$ dimensional tensor. A naive implementation of $\partial \mu^l_{t-\tau}(\theta_{t-\tau})/\partial \theta^l_{t-\tau}$ and $\partial \nu^l_{t-\tau}(\theta_{t-\tau})/\partial \theta^l_{t-\tau}$ involves a computational overhead of $\mathcal{O}(C_l \times C_l \times C_{l-1} \times K)$. 
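The derivation that follows shows how this cost can be reduced by exploiting the averaging in $\mu$; the toy NumPy sketch below illustrates the key structure for the special case of a 1×1 kernel (K = 1), where the per-channel gradient of the mean collapses to a single shared $C_{l-1}$-vector. The shapes and names here are assumptions for illustration.

```python
import numpy as np

# Mean over examples of a 1x1-conv layer: mu_j = (1/m) sum_i sum_n theta[j, n] * y[i, n].
# The gradient d mu_j / d theta[q, n] is nonzero only for q == j and equals
# (1/m) sum_i y[i, n]: the same C_in-vector for every output channel j.
rng = np.random.default_rng(0)
m, C_out, C_in = 16, 8, 4
y = rng.normal(size=(m, C_in))         # inputs to the layer
theta = rng.normal(size=(C_out, C_in))

mu = (y @ theta.T).mean(axis=0)        # per-channel mean, shape (C_out,)
grad_shared = y.mean(axis=0)           # O(C_in) instead of O(C_out^2 * C_in)

# Check against a finite-difference perturbation of one weight entry.
eps = 1e-6
theta2 = theta.copy()
theta2[3, 2] += eps
mu2 = (y @ theta2.T).mean(axis=0)
print(np.isclose((mu2[3] - mu[3]) / eps, grad_shared[2]))  # True
```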
Here we find that the operations for $\mu$ and $\nu$ can be implemented efficiently in $\mathcal{O}(C_{l-1} \times K)$ and $\mathcal{O}(C_l \times C_{l-1} \times K)$, respectively, thanks to the averaging of feature responses in $\mu$ and $\nu$.\nHere we derive the efficient implementation of $\partial \mu^l_{t-\tau}(\theta_{t-\tau})/\partial \theta^l_{t-\tau}$. That of $\partial \nu^l_{t-\tau}(\theta_{t-\tau})/\partial \theta^l_{t-\tau}$ is about the same. Let us first simplify the notation a bit. Let $\mu^l$ and $\theta^l$ denote $\mu^l_{t-\tau}(\theta_{t-\tau})$ and $\theta^l_{t-\tau}$, respectively, by dropping the iteration indices. The element-wise computation in the forward pass can be written as\n$\mu^l_j = \frac{1}{m}\sum_{i=1}^m x^l_{i,j}, \quad (13)$\nwhere $\mu^l_j$ denotes the $j$-th channel of $\mu^l$, and $x^l_{i,j}$ denotes the $j$-th channel of the $i$-th example. $x^l_{i,j}$ is computed as\n$x^l_{i,j} = \sum_{n=1}^{C_{l-1}} \sum_{k=1}^{K} \theta^l_{j,n,k} \cdot y^{l-1}_{i+\text{offset}(k),n}, \quad (14)$\nwhere $n$ and $k$ enumerate the input feature dimension and the convolution kernel index, respectively, $\text{offset}(k)$ denotes the spatial offset in applying the $k$-th kernel, and $y^{l-1}$ is the output of the $(l-1)$-th layer.\nThe element-wise calculation of $\partial \mu^l / \partial \theta^l \in \mathbb{R}^{C_l \times C_l \times C_{l-1} \times K}$ is as follows, taking Eq. (13) and Eq. (14) into consideration:\n$\left[\frac{\partial \mu^l}{\partial \theta^l}\right]_{j,q,p,\eta} = \frac{\partial \mu^l_j}{\partial \theta^l_{q,p,\eta}} = \frac{\partial\, \frac{1}{m}\sum_{i=1}^m x^l_{i,j}}{\partial \theta^l_{q,p,\eta}} = \frac{\partial\, \frac{1}{m}\sum_{i=1}^m \sum_{n=1}^{C_{l-1}} \sum_{k=1}^K \theta^l_{j,n,k} \cdot y^{l-1}_{i+\text{offset}(k),n}}{\partial \theta^l_{q,p,\eta}} = \begin{cases} \frac{1}{m}\sum_{i=1}^m y^{l-1}_{i+\text{offset}(\eta),p} & j = q \\ 0 & j \neq q \end{cases}. \quad (15)$\nThus, $\left[\partial \mu^l / \partial \theta^l\right]_{j,q,p,\eta}$ takes non-zero values only when $j = q$. This operation can be implemented efficiently in $\mathcal{O}(C_{l-1} \times K)$. Similarly, the calculation of $\partial \nu^l / \partial \theta^l$ can be obtained in $\mathcal{O}(C_l \times C_{l-1} \times K)$." }, { "heading": "C ADDITIONAL EXPERIMENTS", "text": "CIFAR-10 is selected for the experiments in this section. It consists of 50k training images and 10k test images from 10 classes. We train a standard ResNet-18 for 160 epochs on one GPU by SGD. The momentum and weight decay parameters are set to 0.9 and 0.0001, respectively. We experiment with batch sizes of 32, 16, 8, 4, and 2 images per GPU. The learning rate is scaled linearly to the different batch sizes, following the practice in (Peng et al., 2018). The initial learning rate is 0.025 ∗ N/32 for a batch size per iteration of N. The learning rate is divided by 10 at epochs 80 and 120. The images are of 32×32 pixels, with per-image standardization in both training and inference. Random flipping is applied in training.\nWe report the results of BN, BRN, GN, and CBN over five trials on CIFAR-10, as shown in Table 7. CBN has the smallest gap to BN-bs16, compared to BN-bs4, BRN, and GN. This result is consistent with the previous experiments on ImageNet and COCO. Also, the std is tiny, indicating that performance on CIFAR-10 is stable enough for empirical studies.\nOn the burn-in period length Tburn-in. We further study the influence of varying the burn-in period length $T_{\text{burn-in}}$, at 4 images per GPU, on both CIFAR-10 image classification (ResNet-18) and COCO object detection (Faster R-CNN with FPN and ResNet-50).\nFigures 5(a) and 5(b) present the results. When the burn-in period is too short, the accuracy suffers. This is because at the beginning of training, the network weights change rapidly, causing the compensation across iterations to be less effective. On the other hand, the accuracy is stable for a wide range of burn-in periods $T_{\text{burn-in}}$ that are not too short.\nOn the effect of using more than one layer. The efficient implementation is no longer applicable when more than one layer of compensation is adopted. Therefore, we only conduct a two-layer experiment with ResNet-18 on CIFAR-10, in consideration of the heavy extra memory and computational overhead. 
" }, { "heading": "C ADDITIONAL EXPERIMENTS", "text": "CIFAR-10 is selected for the experiments in this section. It consists of 50k training images and 10k test images from 10 classes. We train the standard ResNet-18 for 160 epochs on one GPU by SGD. The momentum and weight decay parameters are set to 0.9 and 0.0001, respectively. We experiment with batch sizes of 32, 16, 8, 4, and 2 images per GPU. The learning rate is scaled linearly with the batch size, following the practice in (Peng et al., 2018): the initial learning rate is $0.025 \cdot N / 32$ for a batch size per iteration of $N$, and is divided by 10 at epochs 80 and 120. The images are of $32 \times 32$ pixels with per-image standardization in both training and inference. Random flipping is applied in training.\nWe report the results of BN, BRN, GN, and CBN with five trials on CIFAR-10, as shown in Table 7. Among BN-bs4, BRN, GN, and CBN, CBN exhibits the smallest gap to BN-bs16. This result is consistent with the previous experiments on ImageNet and COCO. The standard deviations are also tiny, indicating that performance on CIFAR-10 is stable enough for empirical studies.\nOn the burn-in period length $T_{\text{burn-in}}$. We further study the influence of varying the burn-in period length $T_{\text{burn-in}}$, at 4 images per GPU, on both CIFAR-10 image classification (ResNet-18) and COCO object detection (Faster R-CNN with FPN and ResNet-50).\nFigure 5(a) and 5(b) present the results. When the burn-in period is too short, the accuracy suffers, because at the beginning of training the network weights change rapidly, which makes the compensation across iterations less effective. On the other hand, the accuracy is stable over a wide range of burn-in periods $T_{\text{burn-in}}$ that are not too short.\nOn the effect of using more than one layer. The efficient implementation is no longer applicable when more than one layer of compensation is adopted. Therefore, considering the heavy extra memory and computational overhead, we only conduct a two-layer experiment of ResNet-18 on CIFAR-10. CBN using two layers for compensation achieves 95.0 on CIFAR-10 (batch size=4, k=4), which is comparable to CBN using only one layer. As using more layers does not further improve performance but consumes more FLOPs, we adopt one-layer compensation in CBN in practice.\nOn the gradients from different layers. The key assumption in Eq. (7) and Eq. (8) is that, for a node at the $l$-th layer, the gradient of its statistics with respect to the network weights at the $l$-th layer is much larger than that with respect to the weights of prior layers, i.e., $\|\partial \mu^l_{t-\tau}(\theta_{t-\tau}) / \partial \theta^l_{t-\tau}\|_F \gg \|\partial \mu^l_{t-\tau}(\theta_{t-\tau}) / \partial \theta^r_{t-\tau}\|_F$ and $\|\partial \nu^l_{t-\tau}(\theta_{t-\tau}) / \partial \theta^l_{t-\tau}\|_F \gg \|\partial \nu^l_{t-\tau}(\theta_{t-\tau}) / \partial \theta^r_{t-\tau}\|_F$ for $r < l$, where $\|\cdot\|_F$ denotes the Frobenius norm. Here we examine this assumption empirically for networks trained on CIFAR-10 image recognition.\nFigure 6 presents the computed ratios $\|\partial \mu^l_{t-\tau}(\theta_{t-\tau}) / \partial \theta^r_{t-\tau}\|_F \,/\, \|\partial \mu^l_{t-\tau}(\theta_{t-\tau}) / \partial \theta^l_{t-\tau}\|_F$ and $\|\partial \nu^l_{t-\tau}(\theta_{t-\tau}) / \partial \theta^r_{t-\tau}\|_F \,/\, \|\partial \nu^l_{t-\tau}(\theta_{t-\tau}) / \partial \theta^l_{t-\tau}\|_F$ for $r \leq l$, at different training epochs. The results suggest that the two inequalities above hold for $r < l$, thus validating the approximation in Eq. (7) and Eq. (8).\nWe also study the gradients of non-ResNet models. The ratios $\|\partial \mu^l_{t-\tau}(\theta_{t-\tau}) / \partial \theta^{l-2}_{t-\tau}\|_F \,/\, \|\partial \mu^l_{t-\tau}(\theta_{t-\tau}) / \partial \theta^l_{t-\tau}\|_F$ and $\|\partial \nu^l_{t-\tau}(\theta_{t-\tau}) / \partial \theta^{l-2}_{t-\tau}\|_F \,/\, \|\partial \nu^l_{t-\tau}(\theta_{t-\tau}) / \partial \theta^l_{t-\tau}\|_F$ on VGG-16 and InceptionV3 are (0.22 and 0.46) and (0.17 and 0.38), respectively, similar to ResNet-18 (0.13 and 0.40), indicating that the assumption should also hold for the VGG and Inception series.
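This assumption can be probed on any small network with automatic differentiation. Below is a minimal PyTorch sketch (our own; the two-layer toy network and its sizes are arbitrary placeholders, not the paper's setup) that compares the Frobenius norms of the Jacobians of a per-channel mean statistic with respect to the current layer's weights and an earlier layer's weights:

import torch
import torch.nn as nn

torch.manual_seed(0)
conv1 = nn.Conv2d(3, 8, 3, padding=1, bias=False)   # layer r (earlier)
conv2 = nn.Conv2d(8, 8, 3, padding=1, bias=False)   # layer l
x = torch.randn(4, 3, 16, 16)

h = conv2(torch.relu(conv1(x)))
mu = h.mean(dim=(0, 2, 3))           # per-channel statistic at layer l

def jac_frob_norm(stat, weight):
    # Frobenius norm of d stat / d weight, built row by row (toy sizes only).
    rows = [torch.autograd.grad(stat[j], weight, retain_graph=True)[0].flatten()
            for j in range(stat.numel())]
    return torch.stack(rows).norm()

same = jac_frob_norm(mu, conv2.weight)   # ||d mu^l / d theta^l||_F
prev = jac_frob_norm(mu, conv1.weight)   # ||d mu^l / d theta^r||_F, r < l
# The assumption above predicts a ratio well below 1 for trained networks.
print(f"ratio prev/same = {float(prev / same):.3f}")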
" } ]
2019
CROSS-ITERATION BATCH NORMALIZATION
SP:a037146bb5c073f2764346596ec1f13c7391d894
[ "This paper presents an empirical study of the attention mechanism in the graph attention networks (GAT). The study reveals that the attention patterns largely depend on the dataset, on some datasets they are sharp, but on others the attention patterns are almost uniform and not so different from the uniform aggregation weights in GNNs that does not have attention. The authors further tried to utilize these findings and attempted to do attention-based graph sparsification, and showed that they can get a similar level of performance with only a fraction of the edges in the original graph if they do the sparsification based on the attention weights.", "This paper carries out several kinds of analysis on the GAT networks of Velickovic (2018), which augment GNN updates with multihead self attention. Three standard attention types are compared, on several different datasets, and differences between uniform attention and learned attention are reported. An experiment is carried out where low-attention edges are pruned." ]
Does attention matter and, if so, when and how? Our study on both inductive and transductive learning suggests that datasets have a strong influence on the effects of attention in graph neural networks. Independent of learning setting, task and attention variant, the learned attention mostly degenerates to simple averaging on all three citation networks, whereas it behaves strikingly differently on the protein-protein interaction networks and molecular graphs: nodes attend to different neighbors per head and attention gets more focused in deeper layers. Consequently, attention distributions become telltale features of the datasets themselves. We further explore the possibility of transferring attention for graph sparsification and show that, when applicable, attention-based sparsification retains enough information to obtain good performance while reducing computational and storage costs. Finally, we point out several possible directions for further study and transfer of attention.
[]
[ { "authors": [ "Michael M. Bronstein", "Joan Bruna", "Yann LeCun", "Arthur Szlam", "Pierre Vandergheynst" ], "title": "Geometric deep learning: Going beyond euclidean data", "venue": "IEEE Signal Processing Magazine,", "year": 2017 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral networks and locally connected networks on graphs", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Kevin Clark", "Urvashi Khandelwal", "Omer Levy", "Christopher D. Manning" ], "title": "What does bert look at? an analysis of bert’s attention", "venue": null, "year": 1906 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "David K Duvenaud", "Dougal Maclaurin", "Jorge Iparraguirre", "Rafael Bombarell", "Timothy Hirzel", "Alan Aspuru-Guzik", "Ryan P Adams" ], "title": "Convolutional networks on graphs for learning molecular fingerprints", "venue": "In Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "Johannes Hachmann", "Roberto Olivares-Amaya", "Sule Atahan-Evrenk", "Carlos Amador-Bedolla", "Roel S Sánchez-Carrera", "Aryeh Gold-Parker", "Leslie Vogt", "Anna M Brockway", "Alán Aspuru-Guzik" ], "title": "The harvard clean energy project: large-scale computational screening and design of organic photovoltaics on the world community", "venue": "grid. The Journal of Physical Chemistry Letters,", "year": 2011 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Mikael Henaff", "Joan Bruna", "Yann LeCun" ], "title": "Deep convolutional networks on graph-structured data", "venue": "arXiv preprint arXiv:1506.05163,", "year": 2015 }, { "authors": [ "Wenbing Huang", "Tong Zhang", "Yu Rong", "Junzhou Huang" ], "title": "Adaptive sampling towards fast graph representation learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Sarthak Jain", "Byron C. Wallace" ], "title": "Attention is not explanation", "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2019 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Boris Knyazev", "Graham W. Taylor", "Mohamed R. Amer" ], "title": "Understanding attention in graph neural networks", "venue": "In Workshop on Representation Learning on Graphs and Manifolds, International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Greg Landrum" ], "title": "Rdkit: open-source cheminformatics", "venue": "http://www.rdkit.org/. 
Accessed:", "year": 2019 }, { "authors": [ "Jure Leskovec", "Christos Faloutsos" ], "title": "Sampling from large graphs", "venue": "In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp", "year": 2006 }, { "authors": [ "Junying Li", "Deng Cai", "Xiaofei He" ], "title": "Learning graph-level representation for drug", "venue": null, "year": 2017 }, { "authors": [ "Xingjian Li", "Haoyi Xiong", "Hanchao Wang", "Yuxuan Rao", "Liping Liu", "Jun Huan" ], "title": "DELTA: Deep learning transfer using feature map with attention for convolutional networks", "venue": "In ICLR", "year": 2019 }, { "authors": [ "Yujia Li", "Oriol Vinyals", "Chris Dyer", "Razvan Pascanu", "Peter Battaglia" ], "title": "Learning deep generative models of graphs. 2018", "venue": null, "year": 2018 }, { "authors": [ "Minh-Thang Luong", "Hieu Pham", "Christopher D. Manning" ], "title": "Effective approaches to attentionbased neural machine translation", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "Haggai Maron", "Heli Ben-Hamu", "Hadar Serviansky", "Yaron Lipman" ], "title": "Provably powerful graph networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Federico Monti", "Oleksandr Shchur", "Aleksandar Bojchevski", "Or Litany", "Stephan Günnemann", "Michaël", "Bresson" ], "title": "Dual-primal graph convolutional networks", "venue": "arXiv preprint arXiv:1806.00770,", "year": 2018 }, { "authors": [ "Christopher Morris", "Martin Ritzert", "Matthias Fey", "William L. Hamilton", "Jan Eric Lenssen", "Gaurav Rattan", "Martin Grohe" ], "title": "Weisfeiler and leman go neural: Higher-order graph neural networks", "venue": "In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Galileo Namata", "Ben London", "Lise Getoor", "Bert Huang" ], "title": "Query-driven active surveying for collective classification", "venue": "In Proceedings of the Workshop on Mining and Learning with Graphs,", "year": 2012 }, { "authors": [ "Mathias Niepert", "Mohamed Ahmed", "Konstantin Kutzkov" ], "title": "Learning convolutional neural networks for graphs", "venue": "In Proceedings of The 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Fabian Pedregosa", "Gael Varoquaux", "Alexandre Gramfort", "Vincent Michel", "Bertrand Thirion", "Olivier Grisel", "Mathieu Blondel", "Peter Prettenhofer", "Ron Weiss", "Vincent Dubourg", "Jake Vanderplas", "Alexandre Passos", "David Cournapeau", "Matthieu Brucher", "Matthieu Perrot", "Eduard Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Kristina Preuer", "Günter Klambauer", "Friedrich Rippmann", "Sepp Hochreiter", "Thomas Unterthiner" ], "title": "Interpretable deep learning in drug discovery", "venue": "arXiv preprint arXiv:1903.02788,", "year": 2019 }, { "authors": [ "Jiezhong Qiu", "Jian Tang", "Hao Ma", "Yuxiao Dong", "Kuansan Wang", "Jie Tang" ], "title": "Deepinf: Social influence prediction with deep learning", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery,", "year": 2018 }, { "authors": [ "Alec Radford", "Jeff Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": null, "year": 2019 }, { 
"authors": [ "Bharath Ramsundar", "Peter Eastman", "Patrick Walters", "Vijay Pande", "Karl Leswing", "Zhenqin Wu" ], "title": "Deep Learning for the Life Sciences", "venue": "O’Reilly Media,", "year": 2019 }, { "authors": [ "Seongok Ryu", "Jaechang Lim", "Seung Hwan Hong", "Woo Youn Kim" ], "title": "Deeply learning molecular structure-property relationships using attention- and gate-augmented graph convolutional network", "venue": "arXiv preprint arXiv:1805.10988,", "year": 2018 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2009 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI magazine,", "year": 2008 }, { "authors": [ "Uday Shankar Shanthamallu", "Jayaraman J. Thiagarajan", "Andreas Spanias" ], "title": "Improving robustness of attention models on graphs", "venue": "arXiv preprint arXiv:1811.00181,", "year": 2018 }, { "authors": [ "Oleksandr Shchur", "Maximilian Mumme", "Aleksandar Bojchevski", "Stephan Günnemann" ], "title": "Pitfalls of graph neural network evaluation", "venue": "arXiv preprint arXiv:1811.05868,", "year": 2019 }, { "authors": [ "Aravind Subramanian", "Pablo Tamayo", "Vamsi K. Mootha", "Sayan Mukherjee", "Benjamin L. Ebert", "Michael A. Gillette", "Amanda Paulovich", "Scott L. Pomeroy", "Todd R. Golub", "Eric S. Lander", "Jill P. Mesirov" ], "title": "Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles", "venue": "In Proceedings of the National Academy of Sciences,", "year": 2005 }, { "authors": [ "Jan Svoboda", "Jonathan Masci", "Federico Monti", "Michael Bronstein", "Leonidas Guibas" ], "title": "Peernets: Exploiting peer wisdom against adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Kiran K. Thekumparampil", "Chong Wang", "Sewoong Oh", "Li-Jia Li" ], "title": "Attention-based graph neural network for semi-supervised learning", "venue": "arXiv preprint arXiv:1803.03735,", "year": 2018 }, { "authors": [ "Rakshit Trivedi", "Mehrdad Farajtabar", "Prasenjeet Biswal", "Hongyuan Zha" ], "title": "Dyrep: Learning representations over dynamic graphs", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "L.J.P. van der Maaten", "G.E. 
Hinton" ], "title": "Visualizing high-dimensional data using t-sne", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Elena Voita", "Rico Sennrich", "Ivan Titov" ], "title": "The bottom-up evolution of representations in the transformer: A study with machine translation and language modeling objectives", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing,", "year": 2019 }, { "authors": [ "Bowen Liu" ], "title": "Pre-training graph neural networks", "venue": null, "year": 1903 }, { "authors": [ "Zhenqin Wu", "Bharath Ramsundar", "Evan N Feinberg", "Joseph Gomes", "Caleb Geniesse", "Aneesh S Pappu", "Karl Leswing", "Vijay Pande" ], "title": "Moleculenet: a benchmark for molecular machine learning", "venue": "Chemical science,", "year": 2018 }, { "authors": [ "Zhaoping Xiong", "Dingyan Wang", "Xiaohong Liu", "Feisheng Zhong", "Xiaozhe Wan", "Xutong Li", "Zhaojun Li", "Xiaomin Luo", "Kaixian Chen", "Hualiang Jiang", "Mingyue Zheng" ], "title": "Pushing the boundaries of molecular representation for drug discovery with the graph attention mechanism", "venue": "Journal of Medical Chemistry,", "year": 2019 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zhilin Yang", "Jake Junbo Zhao", "Bhuwan Dhingra", "Kaiming He", "William W Cohen", "Ruslan Salakhutdinov", "Yann LeCun" ], "title": "GLoMo: Unsupervisedly learned relational graphs as transferable representations", "venue": null, "year": 2018 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer", "venue": "In ICLR", "year": 2017 }, { "authors": [ "Jiani Zhang", "Xingjian Shi", "Junyuan Xie", "Hao Ma", "Irwin King", "Dit-Yan Yeung" ], "title": "Gaan: Gated attention networks for learning on large and spatiotemporal graphs", "venue": "In The Conference on Uncertainty in Artificial Intelligence,", "year": 2018 }, { "authors": [ "Marinka Zitnik", "Jure Leskovec" ], "title": "Predicting multicellular function through multi-layer tissue", "venue": "networks. Bioinformatics,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The modeling of graphs has become an active research topic in deep learning (Bronstein et al., 2017). Dozens of neural network models have been developed for exploiting the structural information of graphs (Scarselli et al., 2009; Bruna et al., 2014; Henaff et al., 2015; Duvenaud et al., 2015; Niepert et al., 2016; Defferrard et al., 2016), now collectively referred to as graph neural networks (GNNs).\nBuilt upon the success of attention in NLP (Vaswani et al., 2017), Veličković et al. (2018) proposed the graph attention networks (GATs) to integrate multi-head self-attention into node feature update for adaptive weighting, with several extensions (Thekumparampil et al., 2018; Zhang et al., 2018; Monti et al., 2018; Svoboda et al., 2019; Trivedi et al., 2019). While the use of attention in GNNs is an attractive direction, several works also report that attention contributes little to the performance of GNNs (Zhang et al., 2018; Shchur et al., 2019). Considering the high computational cost of attention, the question is then that does attention help and, if so, when and how?\nIn this paper, we take a first step towards the question. We first identify the key questions for understanding attention and propose an analytical paradigm. With extensive experiments, our findings suggest that, although attention is motivated by inductive learning, its functionality depends highly on the characteristics of the datasets. The attention distributions across heads and layers are near uniform for all citation networks (Cora, Citeseer and Pubmed) while they get more concentrated over layers on the protein-protein interaction networks (PPI) and molecular graphs, with significant diversity among heads. That the attention distribution is a telltale sign of the nature of graph class is further verified with a meta graph classification experiment. With attention features as inputs, citation networks are indistinguishable whereas PPI and molecule graphs are.\nInspired by these findings, we hypothesize that attention carry semantic meanings when they are non-uniform and can be helpful for transfer learning. This has been the case in the NLP community (Radford et al., 2019), and is motivating many research efforts on understanding multi-head attention (Jain & Wallace, 2019; Clark et al., 2019; Voita et al., 2019). We attempt the idea of attention based sparsification – sparsifying a graph by retaining edges where attention are higher, with the intuition being that the resulting graph preserves enough information. We find that not only such attention-based sparsification is transferable (meaning, it can work on unseen graphs), it also affords us to train a cheaper model without using attention to fit the downstream task. Finally, we discuss several possible fruitful directions for further exploration, including theory, interpretability, and unsupervised learning." }, { "heading": "2 RELATED WORK", "text": "Visualize and understand attention Several works attempted to visualize the learned attention by coloring edges or nodes based on the attention magnitudes (Veličković et al., 2018; Qiu et al., 2018). Thekumparampil et al. (2018) studied the averaged attention values between nodes with different or the same class labels. Shanthamallu et al. (2018) studied the attention GAT learned on two citation networks Cora and Citeseer with interquartile range metric and showed that they are near uniform. Knyazev et al. 
(2019) investigated the effectiveness of attentional pooling and found that attention is only effective when it is close to optimal. Our work differs in that we propose a general paradigm for analyzing graph attention, including how to characterize the overall attention statistics and how to measure the layer-wise and head-wise differences of the learned attention.\nTransfer attention In the computer vision community, transferring attention maps using a teacher-student network to improve downstream tasks is a well-studied technique (Zagoruyko & Komodakis, 2017; Li et al., 2019). Our approach for transferring attention differs from these works in that we use the trained graph attention network as a graph sparsifier instead of a teaching signal. Yang et al. (2018) proposed to transfer the relational structure within the data, represented as a set of attention weights, to boost the performance of other tasks. Our transfer strategy is different from theirs in that we reduce the density of the affinity matrix by removing the entries with smaller attention weights; thus, we can train a cheap graph sparsifier to accelerate training and testing." }, { "heading": "3 BACKGROUND", "text": "" }, { "heading": "3.1 GRAPH NEURAL NETWORKS", "text": "Let $G$ be an undirected graph with node set $V$ and edge set $E$, where each node $i \in V$ has a feature $h^0_i \in \mathbb{R}^{n_0}$. In a wide class of GNNs (Kipf & Welling, 2017; Hamilton et al., 2017; Veličković et al., 2018), the basic feature update function for node $i \in V$ at the $(l+1)$-th GNN layer takes the form\n$$h^{l+1}_i = \sigma\Big(\sum_{j \in N(i)} \alpha^{l+1}_{i,j} W^{l+1} h^l_j\Big),$$\nwhere $\sigma$ is an activation function, $N(i)$ is a set containing $i$ and its neighbors, $\alpha^{l+1}_{i,j} \in \mathbb{R}$ is the attention weight of edge $(j, i)$ in updating the feature of node $i$, $W^{l+1} \in \mathbb{R}^{n_{l+1} \times n_l}$ is the projection matrix, and $h^l_i$, $h^{l+1}_i$ are the corresponding node features after the $l$-th and the $(l+1)$-th layer. With a sparse implementation, this has a time complexity of $O(|V| n_{l+1} n_l + |E| n_{l+1})$. The Graph Convolutional Network (GCN) (Kipf & Welling, 2017) and the mean variant of GraphSAGE (Hamilton et al., 2017) use the static attention weights $\frac{1}{\sqrt{|N(i)|}} \frac{1}{\sqrt{|N(j)|}}$ and $\frac{1}{|N(i)|}$, which we refer to as GCN and uniform attention, respectively.\nGAT (Veličković et al., 2018) uses a parameterized subnetwork to output the attention weights $\alpha_{i,j}$. Rather than using a single attention head as in the update equation above, GAT aggregates the outputs of multiple heads:\n$$\alpha^{l+1,k}_{i,j} = \frac{\exp\big(\mathrm{score}(h^l_i, h^l_j)\big)}{\sum_{j' \in N(i)} \exp\big(\mathrm{score}(h^l_i, h^l_{j'})\big)}, \qquad h^{l+1,k}_i = \sigma\Big(\sum_{j \in N(i)} \alpha^{l+1,k}_{i,j} W^{l+1,k} h^l_j\Big), \qquad h^{l+1}_i = \sigma\big(\mathrm{Aggregate}^{l+1}(h^{l+1,1}_i, \cdots, h^{l+1,K_{l+1}}_i)\big),$$\nwhere $k$ is the index of the attention head and $K_{l+1}$ is the number of attention heads in the $(l+1)$-th layer. $\mathrm{Aggregate}^{l+1}$ aggregates all head results in the $(l+1)$-th layer; we follow the approach of Veličković et al. (2018) and use concatenation for intermediate layers and averaging for the final layer.\nAttention variants As in Luong et al. (2015), there are multiple ways to calculate the attention scores. In this paper, we focus on the following three types of attention, namely concat, dot product, and general:\n$$\mathrm{LReLU}\big(a^T [W h_i \,\|\, W h_j]\big) \ \text{(concat)}, \qquad (W h_i)^T W h_j \ \text{(dot product)}, \qquad (W h_i)^T B W h_j \ \text{(general)},$$\nwhere $\mathrm{LReLU}(\cdot)$ denotes the leaky ReLU activation. GAT uses the concat attention; Zhang et al. (2018) and Ryu et al. (2018) separately explore dot product and general attention in GNNs.
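For concreteness, the three scoring functions can be written in a few lines; the sketch below is our own illustration (dimensions, parameter initializations and the toy neighborhood are placeholders, not the authors' implementation):

import torch
import torch.nn as nn
import torch.nn.functional as F

n_in, n_out = 16, 8
W = nn.Linear(n_in, n_out, bias=False)
a = nn.Parameter(torch.randn(2 * n_out))       # for the concat variant
B = nn.Parameter(torch.randn(n_out, n_out))    # for the general variant

def score(h_i, h_j, variant):
    z_i, z_j = W(h_i), W(h_j)
    if variant == "concat":       # LReLU(a^T [Wh_i || Wh_j]); GAT uses slope 0.2
        return F.leaky_relu(torch.cat([z_i, z_j], dim=-1) @ a, negative_slope=0.2)
    if variant == "dot product":  # (Wh_i)^T Wh_j
        return (z_i * z_j).sum(-1)
    if variant == "general":      # (Wh_i)^T B Wh_j
        return (z_i @ B * z_j).sum(-1)
    raise ValueError(variant)

# Attention of node i over its neighborhood N(i): softmax over the scores.
h_i, h_neighbors = torch.randn(n_in), torch.randn(5, n_in)
alpha = torch.softmax(score(h_i.expand(5, -1), h_neighbors, "concat"), dim=0)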
Graph-level prediction Based on the node representations generated by GNNs, we can also compute a graph representation (Li et al., 2018) for graph-level prediction problems such as graph classification and regression:\n$$h_G = \sum_{v \in V} \mathrm{Sigmoid}\big(g(h^L_v)\big) \cdot \mathrm{ReLU}\big(f(h^L_v)\big),$$\nwhere $L$ is the number of GNN layers, $h^L_v$ is the representation of node $v$ output by the last GNN layer, $g(\cdot)$ calculates the impact of node $v$ on the graph representation, and $f(\cdot)$ is a linear projection." }, { "heading": "3.2 TASKS AND DATASETS", "text": "We consider the tasks of node classification and graph-level prediction. For modeling, we treat all graphs as undirected with untyped nodes and edges. Self loops are added to preserve information from previous node features. As in Veličković et al. (2018), we consider four datasets: the citation networks Cora, Citeseer (Sen et al., 2008) and Pubmed (Namata et al., 2012), and PPI (Zitnik & Leskovec, 2017). Additionally, we include two more datasets of molecular graphs for graph-level prediction.\nThe Harvard Clean Energy Project (CEP) (Hachmann et al., 2011) estimates the photovoltaic efficiency of organic molecules. We use a subset of it pre-processed by Duvenaud et al. (2015) and Ryu et al. (2018). The HIV dataset was initially introduced by the Drug Therapeutics Program (DTP) AIDS Antiviral Screen for testing the ability of compounds to inhibit HIV replication. It was later included in the MoleculeNet benchmark (Wu et al., 2018) as a binary classification task. For both datasets, the node features are extracted based on DeepChem (Ramsundar et al., 2019) and RDKit (Landrum), and include atom type, degree, and many other chemical properties.\nSince GAT was partially motivated by working on unseen data, we consider two learning settings: transductive learning and inductive learning. In the transductive learning setting, the model can access the features of all nodes in the graph; however, only a fraction of the nodes are labeled in the training phase and the model is asked to predict the missing labels. In the inductive learning setting, we have two mutually exclusive sets of nodes, for training and testing respectively. The model is trained only on the features and labels of the nodes in the training set and is asked to predict the labels of the nodes in the testing set. A summary of the tasks and learning settings can be found in Table 1. We leave more detailed information such as dataset statistics, training/testing splits and features to Appendix A.\nTable 1: Dataset task and learning setting\n                                   Cora  Citeseer  Pubmed  PPI  CEP  HIV\nTask: Node Classification           ✓       ✓        ✓      ✓\nTask: Graph Prediction                                                ✓    ✓\nSetting: Transductive Learning      ✓       ✓        ✓\nSetting: Inductive Learning                                  ✓       ✓    ✓" }, { "heading": "4 METHODOLOGY", "text": "The introduction of multi-head attention into multi-layer GNNs poses many interesting questions; we investigate five in this paper. Q1: In the GAT model, all nodes have different attention distributions over their incoming edges. How should we characterize the overall statistics of these learned attention distributions? Q2: How do attention distributions differ across heads and layers? Q3: How does the choice of dataset, attention variant, and learning setting affect the learned attention? Q4: Are the statistics of the learned attention related to intrinsic properties of the graph? Q5: How can attention be transferred for further usage?\nTo answer Q1, we propose multiple metrics for characterizing attention distributions. For Q2, we examine the metrics at different layers and compare their change over layers.
To answer Q3, we run experiments to see how varying the dataset, attention variant and learning setting impacts the learned attention. Previous works (Kipf & Welling, 2017; Hamilton et al., 2017; Veličković et al., 2018) only perform transductive learning on the citation networks and inductive learning on PPI. To fill in the gap, we also perform transductive learning on PPI and inductive learning on the citation networks (see the data processing strategy in Appendix B.1), and show that the learning setting is largely irrelevant. To answer Q4, we propose a new task called Meta Graph Classification, which asks a model to distinguish the type of a graph from the characteristics of its attention distributions. For Q5, we transfer attention for graph sparsification and examine whether we can preserve enough task-related information while removing a significant number of edges." }, { "heading": "5 CHARACTERIZING ATTENTION", "text": "" }, { "heading": "5.1 ANALYZING ATTENTION METRICS", "text": "Experiment settings Our attention study is based entirely on the GAT architecture, except that we try the different types of attention mentioned in Section 3.1. We follow the experiment settings of the original authors whenever possible and perform a hyperparameter search otherwise. To make a fair comparison between attention variants, we use the same hidden size for each layer output across attention variants. The detailed settings can be found in Appendix B.2. Unless explicitly mentioned, we perform 100 random runs for Cora, Citeseer and Pubmed and 10 random runs for PPI, CEP and HIV. The test performance is mostly consistent across attention variants and is comparable to the originally reported numbers, which we report in Appendix B.4.\nLearned attention vs. static attention Attention-free GNNs employ static weights in updating node features. The first question is then how learned attention differs from static weights. If the learned attention weights are almost the same as the static ones, then there is no point in performing the costly attention computation and we do not need to proceed with the analysis. For any static attention (GCN or uniform), we quantify the discrepancy between it and the learned attention for node $i$ with $\frac{1}{2} \sum_{j \in N(i)} |\alpha^{\mathrm{learned}}_{i,j} - \alpha^{\mathrm{static}}_{i,j}|$. With a range of $[0, 1]$, the larger the value, the larger the discrepancy. We average this value over all nodes in the graphs. Table 2 shows the discrepancy between the static and the learned attention for the first head with the concat variant. Surprisingly, the discrepancy against uniform attention is very small for the citation networks, suggesting that each node attends near uniformly to its different neighbors.\nHead-wise and layer-wise differences Figure 2 visualizes the attention of a node over its incoming edges in Cora and PPI, based on three heads in the last layer. We find that different heads behave distinctively in the PPI case, while they are all uniform in the Cora case. We summarize the change of attention over heads and layers with several metrics. To quantify the variance between head-wise distributions, we compute the averaged L1 norm of the difference between the mean distribution over all heads and the learned distribution of each head:\n$$\alpha^{\mathrm{mean}}_{i,j} = \frac{1}{K} \sum_{k=1}^{K} \alpha^k_{i,j}, \ j \in N(i), \qquad \text{Head-wise Variance} = \frac{1}{2K} \frac{1}{|V|} \sum_{k=1}^{K} \sum_{i \in V} \sum_{j \in N(i)} |\alpha^k_{i,j} - \alpha^{\mathrm{mean}}_{i,j}|.$$\nTo probe the concentration of attention, we compute the maximum pairwise difference within one-hop neighborhoods, $\max_{j_1, j_2 \in N(i)} |\alpha_{i,j_1} - \alpha_{i,j_2}|$. To verify whether attention concentrates on the self loops when it becomes sharp, in which case GNNs would degenerate to MLPs, we also monitor the self-loop attention values.\nWe average the metrics over all nodes and heads for each layer in each run, and compute the mean and standard deviation of the averaged metrics over all runs.
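These metrics only involve the attention values themselves; the following NumPy sketch (ours, with random Dirichlet attention as a stand-in for learned weights) evaluates all of them for a single node:

import numpy as np

# alpha has shape (K heads, d neighbors) with rows summing to one;
# index 0 is taken to be the self loop (an assumption of this toy setup).
rng = np.random.default_rng(0)
alpha = rng.dirichlet(np.ones(5), size=4)

# Discrepancy from a static (here: uniform) attention, in [0, 1].
uniform = np.full(alpha.shape[1], 1.0 / alpha.shape[1])
discrepancy = 0.5 * np.abs(alpha - uniform).sum(axis=1).mean()

# Head-wise variance: average L1 distance of each head to the mean head.
mean_head = alpha.mean(axis=0)
head_var = 0.5 * np.abs(alpha - mean_head).sum(axis=1).mean()

# Concentration: maximum pairwise difference within the neighborhood,
# plus the attention value placed on the self loop.
max_pair_diff = (alpha.max(axis=1) - alpha.min(axis=1)).mean()
self_loop = alpha[:, 0].mean()
print(discrepancy, head_var, max_pair_diff, self_loop)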
Varying settings and attention variants We leverage the defined metrics to compare the attention learned by different attention variants. Figure 1 visualizes the head-wise and layer-wise metrics for Cora, Pubmed, PPI and CEP. Independent of the attention variant, the learned attention changes little across layers for Cora and Pubmed, while the increasing max pairwise difference indicates that it gets more concentrated in deeper layers for PPI and CEP. Moreover, the attention does not become increasingly concentrated on self loops while getting sharper over layers. We also experiment with different training settings on these graphs. Figure 3 shows the learned attention when training inductively on Cora and transductively on PPI. We observe similar phenomena in the learned attention, which rules out the effect of the training setting." }, { "heading": "5.2 META GRAPH CLASSIFICATION", "text": "The previous experiments suggest that the learned attention is highly graph-dependent, and that its characteristics can be predicted with proper knowledge of the graph semantics. To verify this, we attempt to infer the graph type from the learned attention. Specifically, we perform graph classification with attention-based features.\nTable 3: Graph Classification Accuracy\n              Concat         General        Dot product\nAll Layers    94.1 ± 0.5%    95.6 ± 0.7%    95.5 ± 0.4%\nFirst Layer   81.3 ± 1.1%    88.4 ± 0.8%    91.3 ± 0.6%\nSecond Layer  83.5 ± 0.6%    89.7 ± 0.6%    85.3 ± 0.5%\nFigure 4: t-SNE visualization of concat attention based features. From left to right, the features are from all layers, the first layer, and the second layer, respectively. See Appendix B.5 for more results.\nSynthetic dataset We construct a synthetic dataset for graph classification in two steps. First, we collect 480 samples/subgraphs of 20 to 30 nodes from each dataset. For HIV and CEP, we choose 480 graphs each and construct a balanced subset. For the remaining four datasets, we sample 480 graphs each using random walks, as in the case of inductive learning on citation networks. Second, we separately train a 2-layer GAT on each collected dataset and compute the attention metrics for each layer. The mean and standard deviation of the metrics are then used as the graph features. We leave more details to Appendix B.5.\nWe train a logistic regression classifier for graph classification, where 20% of the graphs are used for training and the rest for testing. We experiment separately with attention metrics from all layers, the first layer only, and the second layer only. The classification performance is reported in Table 3, with all experiments repeated 10 times.\nIn the experiments, we find that more than 80% of the incorrect classifications happen within the citation subgraphs. This is also verified by our t-SNE (van der Maaten & Hinton, 2008; Pedregosa et al., 2011) visualization of the attention metrics in Figure 4: the attention metrics of the citation networks tend to be indistinguishable, while those of the other datasets are better separated and clustered.
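The meta classification step itself is standard; a scikit-learn sketch might look as follows (ours: the feature matrix here is a random placeholder for the per-subgraph attention metrics, so the printed accuracy is only chance level):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder features: 480 subgraphs per dataset, 6 datasets, 12 metric dims.
rng = np.random.default_rng(0)
features = rng.standard_normal((480 * 6, 12))
labels = np.repeat(np.arange(6), 480)

# 20% of the graphs for training, the rest for testing, as described above.
X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, train_size=0.2, stratify=labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))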
}, { "heading": "6 ATTENTION-BASED GRAPH SPARSIFICATION", "text": "" }, { "heading": "6.1 PPI SPARSIFICATION FOR GAT PREDICTION", "text": "We consider two intuitive heuristics for attention based sparsification: 1) local top-k sparsification selects up to k incoming edges for each node with highest attention values; 2) global threshold sparsification selects edges whose attention value exceeds a pre-specified threshold over the entire graph. Across all GNN layers, we perform attention-based sparsification for each attention head to get a subset of edges. We then take the union of the edge subsets to get the edge set for the sparsified graph. All self loops are selected to preserve the information of original node features.\nInspired by recent research on sampling based training of GNNs (Hamilton et al., 2017; Huang et al., 2018), we consider two baseline random sparsification for comparison: 1) uniform neighbor sparsification uniformly samples at most k incoming edges for each node without replacement and we compare it against local top-k sparsification; 2) uniform graph-wise sparsification uniformly\nsamples a proportion of edges over the entire graph(s) without replacement and we compare it against threshold sparsification.\nWe first experiment attention-based graph sparsification with PPI, which tends to have sharp attentionbased on our previous study and is relatively dense (56944 nodes and 1644208 edges with self loops and bi-directional edges added). Despite similar sharp attention, molecules are not good candidates for the experiment as they are already sparse and on average each atom only has less than three neighbors. As illustrated in figure 6, after training a GAT on PPI, we perform attention-based sparsification and re-train a GAT on the sparsified training and validation graphs.\nFigure 5 compares the results of attention-based sparsification with concat variant against random sparsification. 1) With a similar degree of sparsification, the attention-based sparsification consistently performs better than the baseline in terms of test metric and its variance across runs; 2) We can reach a test accuracy comparable to the original result with only 40% ∼ 50% edges left in the training and validation graphs." }, { "heading": "6.2 SPARSIFICATION WITH GENTLE ATTENTION", "text": "What if the attention are neither uniform nor sharp? Our previous study shows that the attention in Pubmed are not completely uniform in the last GAT layer with dot product and general attention variants so we use it as a testbed for the study. In cases where attention are not very sharp, threshold sparsification is not very useful and we consider only top-k sparsification.\nTable 4 compares top-k sparsification with dot product attention against uniform neighbor sparsification. With similar proportion of edges left in the training and validation graphs, the top-k sparsification consistently outperforms uniform neighbor sparsification and achieves a performance comparable to that of training on the raw graphs." }, { "heading": "6.3 LIGHT GAT SPARSIFICATION WITH GRAPHSAGE PREDICTION", "text": "For practical usage of speeding up the computation, we do not want to train two large GATs from scratch. We propose to first quickly train a light GAT with a lot fewer parameters for graph sparsification and then train a large attention-free GNN for prediction on the sparsified graphs.\nWe explore this idea by varying the hidden sizes of attention heads in GAT and use a GraphSAGE for prediction on PPI. 
" }, { "heading": "6.3 LIGHT GAT SPARSIFICATION WITH GRAPHSAGE PREDICTION", "text": "For practical speedups, we do not want to train two large GATs from scratch. We propose to first quickly train a light GAT with far fewer parameters for graph sparsification, and then train a large attention-free GNN for prediction on the sparsified graphs.\nWe explore this idea by varying the hidden sizes of the attention heads in GAT and using GraphSAGE for prediction on PPI. The GraphSAGE model has 3 layers, each with a hidden size of 512. We use the mean aggregator with skip connections added. The results are summarized in Table 5. Our experiments show that a GraphSAGE model can achieve good performance when trained on sparsified graphs, as long as we also sparsify the test graphs. Surprisingly, while a smaller hidden size in GAT does harm its own prediction performance, it has little effect on capturing important edges for GraphSAGE prediction, even with top-1 sparsification. We note that the numbers of parameters in GraphSAGE and the baseline GAT are 1.2M and 3.6M, respectively, i.e., a 3-fold reduction." }, { "heading": "7 CONCLUSIONS AND DISCUSSIONS", "text": "In this work, we propose an analytical paradigm that summarizes the characteristics of multi-head attention learned over graphs and compares them against topology-based static attention. This allows a deeper understanding of attention beyond simply comparing the performance of models, and motivates further uses of attention such as transfer learning. In addition to the attention-based sparsification we explored, we believe the following are several interesting, underexplored directions:\n• Theory. Many efforts have been made to theoretically understand and explain GNNs, particularly their connection to kernel methods and Weisfeiler-Lehman tests (Morris et al., 2019; Xu et al., 2019; Maron et al., 2019), but few of them have considered attention.\n• Interpretability. The use of attention can add interpretability, which is particularly valued in risk-sensitive scenarios such as medicine. Several efforts have been made in the chemistry community (Ryu et al., 2018; Preuer et al., 2019; Xiong et al., 2019).\n• Unsupervised learning. Unsupervised representation learning is an important approach when training a model is expensive and labeled data is scarce. This is particularly the case for biological networks and molecular graphs due to the need for wet-lab experiments. The NLP community has witnessed the success of unsupervised learning with attention (Radford et al., 2019) and we might expect the same for GNNs. Weihua Hu (2019) demonstrates the effectiveness of training GNNs with unsupervised learning for chemistry and biology, but does not employ attention." }, { "heading": "A DATASETS", "text": "A.1 DATASET SUMMARY\nTables 6 and 7 summarize the statistics of the raw graph datasets. When computing the number of edges and node degrees, we have not considered self loops. Also, when we model directed graphs as undirected graphs, the number of edges gets doubled. For transductive learning, the number of edges is considered to be the same for training/validation/test, as all edges may be involved in message passing.\nA.2 ADDITIONAL DETAILS\nGraph Construction For citation networks, the nodes correspond to documents and the edges correspond to citations between pairs of documents. For PPI, the nodes represent proteins and the edges represent physical interactions between them. For molecules, the nodes correspond to atoms and the edges correspond to chemical bonds.\nNode featurization and labeling For citation networks, nodes have bag-of-words features and labels for the topic of the documents. For PPI, the node features include positional gene sets, motif gene sets and immunological signatures, and the node labels are gene ontology sets (Hamilton et al., 2017) collected from the Molecular Signatures Database (Subramanian et al., 2005).
For the CEP dataset, the node features consist of one-hot encodings of the atom type, node degree, the total number of hydrogens attached to the atom, the number of implicit hydrogens attached to it, and its aromaticity indicator. For the HIV dataset, in addition to those features we also consider the formal charge of the atom, the number of radical electrons of the atom and the atom's hybridization.\nDataset splits For Cora, Citeseer, Pubmed, and PPI, we consider a deterministic dataset split for training, validation and test. For CEP, we randomly split the dataset in each run, where approximately 60%, 20% and 20% of the graphs are used for training, validation and test, respectively. The statistics of the CEP dataset in Table 7 are obtained from one random run. For the HIV dataset, we use the scaffold split (Wu et al., 2018; Li et al., 2017), which structurally separates molecules into training, validation and test subsets and poses a greater challenge for generalization.\nImbalanced dataset The HIV dataset is highly imbalanced: only 1487 compounds are positive, constituting approximately 3.5% of the dataset." }, { "heading": "B EXPERIMENT SETTINGS", "text": "B.1 VARYING LEARNING SETTING FOR CITATION NETWORKS AND PPI\nTransductive Learning on PPI To perform transductive learning on PPI, we sample two mutually exclusive subsets of the nodes as the training set and validation set for each graph, leaving the rest as the test set. We experiment with two splitting settings. In the first setting, we sample about 5% of the nodes for training and 18% for validation, similar to the splitting ratio of the transductive learning setting on Cora. In the second setting, we sample 79% of the nodes for training and 11% for validation, similar to the case of inductive learning on PPI.\nInductive Learning on Citation Networks To perform inductive learning on citation networks, we first sample 120 graphs of 100 nodes from each dataset, using the random walk based sampling algorithm described in Algorithm 1, which by the study of Leskovec & Faloutsos (2006) performs best in preserving the properties of static graphs. 60%, 20% and 20% of the graphs are used for training, validation and test, respectively.\nAlgorithm 1 Random Walk Sampling\nRequire: G = (V, E) the original graph, g_size = 100 the target subgraph size\n1: step = 0\n2: start ∼ Unif(V)  ▷ Uniformly choose a starting node.\n3: V_sub = {start}, E_sub = {(start, start)}\n4: src = start\n5: while |V_sub| < g_size and step < 100 ∗ g_size do\n6:   step = step + 1\n7:   back ∼ Bernoulli(0.15)  ▷ Return to the starting node with probability 0.15.\n8:   if back then\n9:     src = start\n10:  else\n11:    dst ∼ Unif({j | (src, j) ∈ E})\n12:    V_sub = V_sub ∪ {dst}\n13:    E_sub = E_sub ∪ {(dst, dst), (src, dst), (dst, src)}\n14:    src = dst\n15: return (V_sub, E_sub)
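A runnable Python transcription of Algorithm 1 might look as follows (our own sketch; `graph` is assumed to map each node to a non-empty list of neighbors):

import random

def random_walk_sample(graph, g_size=100, p_back=0.15, seed=0):
    # Transcription of Algorithm 1: grow a subgraph by a random walk that
    # jumps back to the starting node with probability p_back.
    rng = random.Random(seed)
    start = rng.choice(list(graph))
    v_sub, e_sub = {start}, {(start, start)}
    src, step = start, 0
    while len(v_sub) < g_size and step < 100 * g_size:
        step += 1
        if rng.random() < p_back:
            src = start
        else:
            dst = rng.choice(graph[src])
            v_sub.add(dst)
            e_sub.update({(dst, dst), (src, dst), (dst, src)})
            src = dst
    return v_sub, e_sub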
B.2 HYPERPARAMETERS FOR ATTENTION STUDY\nFor the attention study, we consider the hyperparameters below:\n• Transductive learning on Cora and Citeseer: 2-layer GAT with 8 heads in the first layer and 1 head in the second layer, 8 hidden units for each head in the first layer, a dropout of 0.6, no residual connection, a learning rate of 0.005, L2 regularization with coefficient 0.0005, cross entropy loss\n• Transductive learning on Pubmed: 2-layer GAT with 8 heads in both layers, 8 hidden units for each head in the first layer, a dropout of 0.6, no residual connection, a learning rate of 0.01, L2 regularization with coefficient 0.001, cross entropy loss\n• Inductive/Transductive learning on PPI: no L2 regularization, no dropout, 3-layer GAT with residual connections added for the last two layers; a learning rate of 0.005 for concat attention and a learning rate of 0.0001 for the other attention variants; the number of attention heads in the three layers is 8, 8, 6 for general attention and 4, 4, 6 for the other attention variants; 128 hidden units for each head in the first two layers for general attention and 256 hidden units for each head in the other attention variants¹; a batch size of 1 for general attention and a batch size of 2 for the other attention variants; binary cross entropy loss\n• Inductive learning on Cora, Citeseer, Pubmed: batch size 24, 3-layer GAT with 4, 4, 6 attention heads respectively, residual connections added for the last two layers, 8 hidden units per head for the first two layers, a learning rate of 0.005, a dropout of 0.6; L2 regularization with coefficient 0.001 for Pubmed and 0.0005 for the other two citation networks; cross entropy loss\n• CEP: a batch size of 512, 3-layer GAT where each layer has 4 heads and each head has 32 hidden units, a dropout of 0.0, residual connections added for the last two layers, a learning rate of 0.001, no L2 regularization; smooth L1 loss\n• HIV: a batch size of 64, 2-layer GAT where each layer has 4 heads and each head has 32 hidden units, a dropout of 0.0, residual connections added for the last two layers, no L2 regularization, an initial learning rate of 0.0005 with a decay of 0.99 after each epoch; weighted focal loss $-\big(w\, y (1-p)^{\gamma} \log p + (1-y)\, p^{\gamma} \log(1-p)\big)$, where $\gamma = 2$ and $w = \#\text{negative samples} / \#\text{positive samples}$\nAn early stop is performed if the validation score has not improved for 100 epochs.\nB.3 GRAPH-LEVEL PREDICTION\nBased on the node features updated with a GNN, we can also perform graph-level prediction. First, a graph representation is obtained with\n$$h_G = \sum_{v \in V} \mathrm{Sigmoid}\big(g(h^L_v)\big) \cdot \mathrm{ReLU}\big(f(h^L_v)\big),$$\nwhere $L$ is the number of GNN layers, and $g: \mathbb{R}^{n_L} \to \mathbb{R}$ and $f: \mathbb{R}^{n_L} \to \mathbb{R}^{n_G}$ are two linear layers with bias. In all cases we use $n_G = 128$. A graph-level prediction is then computed with a 3-layer MLP in which all hidden sizes equal $n_G$ and a ReLU activation is applied after each of the first two linear layers.\n¹With the formulation $\mathrm{score}(h_i, h_j) = (h_i)^T B h_j$, the general attention variant requires many more parameters than the other two attention variants for the same number of hidden units, which can result in an out-of-memory error. As a workaround, we use a larger number of heads with a smaller hidden size per head for this variant, so that the final output size of the layers does not change.
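As a sketch (ours; sizes follow the description in B.3, while the input features are random placeholders), the gated readout and prediction head can be written as:

import torch
import torch.nn as nn

n_L, n_G = 64, 128
g = nn.Linear(n_L, 1)      # scalar gate for each node's contribution
f = nn.Linear(n_L, n_G)    # projection to the graph representation size

def graph_readout(h_nodes):                          # h_nodes: (num_nodes, n_L)
    gate = torch.sigmoid(g(h_nodes))                 # (num_nodes, 1)
    return (gate * torch.relu(f(h_nodes))).sum(dim=0)    # (n_G,)

h_G = graph_readout(torch.randn(30, n_L))
mlp = nn.Sequential(nn.Linear(n_G, n_G), nn.ReLU(),
                    nn.Linear(n_G, n_G), nn.ReLU(),
                    nn.Linear(n_G, 1))               # 3-layer prediction head
print(mlp(h_G).shape)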
B.4 TEST PERFORMANCE ACROSS ATTENTION VARIANTS\nWe evaluate test performance using different metrics for different datasets: accuracy for Cora, Citeseer and Pubmed, micro-averaged F1 score for PPI, mean absolute error for CEP and ROC-AUC score for HIV. See Table 8 for a summary of the prediction performance, where the different attention variants mostly perform similarly. The reference numbers are from Veličković et al. (2018) unless stated otherwise. For Cora, Citeseer, Pubmed and PPI, we include the original results of GATs for reference. For the remaining datasets, we include the best performance of previous work for reference whenever applicable, though some of these models do not involve an attention mechanism.²\n²The original work only has a bar plot and we contacted the authors for the numbers.\nB.5 GRAPH CLASSIFICATION\nDataset construction 1) As PPI has multiple graphs, we sample the same number of subgraphs from each original graph. 2) For the CEP dataset, we first sort the whole dataset based on the photovoltaic efficiency and split it into 96 buckets. We sample 5 graphs from each bucket, where 3 graphs are used for training, one for validation and one for test. 3) As the HIV dataset is intrinsically imbalanced, we construct a balanced subset by sampling the same number of positive and negative samples. Also, since we are considering a special dataset split, the training, validation and test subsets are constructed separately from the training, validation and test sets. 4) For Cora, Citeseer, Pubmed and PPI, we first construct a whole dataset of subgraphs and then perform a random split to get the training, validation and test sets with a splitting ratio of 60%:20%:20%.\nGraph feature extraction For graph feature extraction, we train a 2-layer GAT with 8 heads in each layer and 64 hidden units for each head. We use a batch size of 16 and perform an early stop if the validation score does not improve for 10 epochs. We use the same loss functions for each dataset as explained in B.2. We also perform a hyperparameter search over the learning rate, dropout, L2 regularization coefficient λ and whether to use a residual connection. The selected hyperparameters are as follows:\n• Cora: a dropout of 0.1, residual connection added for the second layer, no L2 regularization, a learning rate of 0.01 for concat attention and a learning rate of 0.005 for the other attention variants\n• Citeseer: a dropout of 0.1, residual connection added for the second layer, a learning rate of 0.01, no L2 regularization\n• Pubmed: a dropout of 0.1, residual connection added for the second layer, no L2 regularization, a learning rate of 0.005 for general attention and a learning rate of 0.01 for the other attention variants\n• PPI: no dropout, residual connection added for the second layer, no L2 regularization, a learning rate of 0.005 for concat attention and a learning rate of 0.01 for the other attention variants\n• CEP: no dropout, a learning rate of 0.005; a residual connection added for the second layer only with general attention; L2 regularization with coefficient 0.001 except for concat attention\n• HIV: a residual connection added for the second layer, no L2 regularization; a dropout of 0.6 only for concat attention; a learning rate of 0.005 for dot product attention and a learning rate of 0.01 for the other attention variants\nt-SNE visualization of attention metrics In the main text we included the t-SNE visualization of attention-based features for concat attention only. Here we include the results for all three variants for comparison in Figures 7, 8 and 9. Across all attention variants, we observe a similar pattern: the attention metrics for citation subgraphs get blurred, while those for the other datasets are better separated and clustered." } ]
2019
null
SP:e4482ea19c071040799a23293a00fef8305126d5
[ "The authors introduce a variational autoencoder for conditional generation of molecules. The model is borrowed from text-based style transfer, applied here on sequence (SMILES) representation of molecules rather than viewing molecules as graphs (as more recent approaches). From a modeling point of view, the main new part is an additional regularizer whose role is to 1) ensure that the property used as input during generation matches the property derived from the generated molecule, and 2) to dissociate the latent molecule representation in the autoencoder (loosely speaking, its overall structure) from the property being controlled. This regularizer is just a squared difference between predicted and actual properties, averaged over independent samples from latent states and properties which are parametrically mapped to predicted properties via the decoder state (so as to be able to backprop).", "This paper proposes a VAE-based conditional molecular graph generation model. For that purpose, the disentanglement approach is adopted in this paper: learn to separate property information from the structure representation of a molecular graph. The authors use the supervised VAE objective since the KL regularizer in the objective has reportedly disentanglement-promoting effect. The final objective function is a standard VAE ELBO plus a penalty term of the property value prediction error. Non-differentiable property estimation is conducted via stochastic sampling expectation with the help of an external program (RDKit). " ]
Though machine learning approaches have shown great success in estimating properties of small molecules, the inverse problem of generating molecules with desired properties remains challenging. This difficulty is in part because the set of molecules which have a given property is structurally very diverse. Treating this inverse problem as a conditional distribution estimation task, we draw upon work in learning disentangled representations to learn a conditional distribution over molecules given a desired property, where the molecular structure is encoded in a continuous latent random variable. By including property information as an input factor independent from the structure representation, one can perform conditional molecule generation via a “style transfer” process, in which we explicitly set the property to a desired value at generation time. In contrast to existing approaches, we disentangle the latent factors from the property factors using a regularization term which constrains the generated molecules to have the property provided to the generation network, no matter how the latent factor changes.
[]
[ { "authors": [ "Rim Assouel", "Mohamed Ahmed", "Marwin H. Segler", "Amir Saffari", "Yoshua Bengio" ], "title": "Defactor: Differentiable edge factorization-based probabilistic graph generation", "venue": "CoRR, abs/1811.09766,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Yoshua Bengio", "Nicholas Léonard", "Aaron Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,", "year": 2013 }, { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Infogan: Interpretable representation learning by information maximizing generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Junyoung Chung", "Caglar Gulcehre", "KyungHyun Cho", "Yoshua Bengio" ], "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "venue": "arXiv preprint arXiv:1412.3555,", "year": 2014 }, { "authors": [ "Hanjun Dai", "Yingtao Tian", "Bo Dai", "Steven Skiena", "Le Song" ], "title": "Syntax-directed variational autoencoder for structured data", "venue": "arXiv preprint arXiv:1802.08786,", "year": 2018 }, { "authors": [ "Nicola De Cao", "Thomas Kipf" ], "title": "MolGAN: An implicit generative model for small molecular graphs", "venue": null, "year": 2018 }, { "authors": [ "Hanna Eckert", "Jürgen Bajorath" ], "title": "Molecular similarity analysis in virtual screening: foundations, limitations and novel approaches", "venue": "Drug discovery today,", "year": 2007 }, { "authors": [ "Rafael Gómez-Bombarelli", "David K. Duvenaud", "José Miguel Hernández-Lobato", "Jorge AguileraIparraguirre", "Timothy D. Hirzel", "Ryan P. 
Adams", "Alán Aspuru-Guzik" ], "title": "Automatic chemical design using a data-driven continuous representation of molecules", "venue": "CoRR, abs/1610.02415,", "year": 2016 }, { "authors": [ "Gabriel Lima Guimaraes", "Benjamin Sanchez-Lengeling", "Carlos Outeiral", "Pedro Luis Cunha Farias", "Alán Aspuru-Guzik" ], "title": "Objective-Reinforced generative adversarial networks (ORGAN) for sequence generation models", "venue": null, "year": 2017 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Zhiting Hu", "Zichao Yang", "Xiaodan Liang", "Ruslan Salakhutdinov", "Eric P Xing" ], "title": "Toward controlled generation of text", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "arXiv preprint arXiv:1611.01144,", "year": 2016 }, { "authors": [ "Wengong Jin", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Junction tree variational autoencoder for molecular graph generation", "venue": "arXiv preprint arXiv:1802.04364,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Shakir Mohamed", "Danilo Jimenez Rezende", "Max Welling" ], "title": "Semi-supervised learning with deep generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Matt J Kusner", "Brooks Paige", "José Miguel Hernández-Lobato" ], "title": "Grammar variational autoencoder", "venue": "arXiv preprint arXiv:1703.01925,", "year": 2017 }, { "authors": [ "Qi Liu", "Miltiadis Allamanis", "Marc Brockschmidt", "Alexander Gaunt" ], "title": "Constrained graph variational autoencoders for molecule design", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Tengfei Ma", "Jie Chen", "Cao Xiao" ], "title": "Constrained generation of semantically valid graphs via regularizing variational autoencoders", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Raghunathan Ramakrishnan", "Pavlo O Dral", "Matthias Rupp", "O Anatole Von Lilienfeld" ], "title": "Quantum chemistry structures and properties of 134 kilo molecules", "venue": "Scientific data,", "year": 2014 }, { "authors": [ "Marwin HS Segler", "Thierry Kogej", "Christian Tyrchan", "Mark P Waller" ], "title": "Generating focused molecule libraries for drug discovery with recurrent neural networks", "venue": "ACS central science,", "year": 2017 }, { "authors": [ "N. 
Siddharth", "Brooks Paige", "Jan-Willem Van de Meent", "Alban Desmaison", "Noah Goodman", "Pushmeet Kohli", "Frank Wood", "Philip Torr" ], "title": "Learning disentangled representations with semi-supervised deep generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Martin Simonovsky", "Nikos Komodakis" ], "title": "Graphvae: Towards generation of small graphs using variational autoencoders", "venue": "arXiv preprint arXiv:1802.03480,", "year": 2018 }, { "authors": [ "Teague Sterling", "John J Irwin" ], "title": "Zinc 15–ligand discovery for everyone", "venue": "Journal of chemical information and modeling,", "year": 2015 }, { "authors": [ "Scott A Wildman", "Gordon M Crippen" ], "title": "Prediction of physicochemical parameters by atomic contributions", "venue": "Journal of chemical information and computer sciences,", "year": 1999 }, { "authors": [ "Jiaxuan You", "Bowen Liu", "Rex Ying", "Vijay Pande", "Jure Leskovec" ], "title": "Graph convolutional policy network for goal-directed molecular graph generation", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 } ]
[ { "heading": null, "text": "Though machine learning approaches have shown great success in estimating properties of small molecules, the inverse problem of generating molecules with desired properties remains challenging. This difficulty is in part because the set of molecules which have a given property is structurally very diverse. Treating this inverse problem as a conditional distribution estimation task, we draw upon work in learning disentangled representations to learn a conditional distribution over molecules given a desired property, where the molecular structure is encoded in a continuous latent random variable. By including property information as an input factor independent from the structure representation, one can perform conditional molecule generation via a “style transfer” process, in which we explicitly set the property to a desired value at generation time. In contrast to existing approaches, we disentangle the latent factors from the property factors using a regularization term which constrains the generated molecules to have the property provided to the generation network, no matter how the latent factor changes." }, { "heading": "1 INTRODUCTION", "text": "Conditional molecule generation is far from being solved. The main challenge is the enormous and discrete nature of the molecules space and the fact that molecule properties are highly sensitive to molecular structure (Kirkpatrick & Ellis, 2004). Approaches to conditional generation are typically two-step, either using a model or genetic algorithm to generate candidates which are later filtered, or learning a continuous embedding of the discrete molecules and optimizing in a real-valued representation space. The former is computationally expensive, the latter performs conditional generation only very obliquely.\nWe propose a conditional generative model that produces candidate molecules which targeting a desired property in a single step. This approach builds on work in structured deep generative models (Kingma et al., 2014; Siddharth et al., 2017), which aim to learn a disentangled representation that factors into observed properties we want to control for, and latent factors that account for the remaining features which are either hard to annotate or irrelevant to the properties we wish to optimize.\nWe derive a regularizer for supervised variational autoencoders which exploits property information that we provide as supervision, ensuring that produced molecules adhere to target properties they are conditioned on. We demonstrate the ability of our model to perform accurate conditional molecule generation and a sort of “style transfer” on molecules, where a latent representation for a single molecule can have its target properties perturbed independently of its learnt structural characteristics, allowing direct and efficient generation of candidates for local optimization of molecules." }, { "heading": "2 BACKGROUND", "text": "Molecule discovery tasks come in two flavors. Global optimization seeks to find molecules that have a particular target property. Local optimization starts from some initial molecule and searches for molecules which have a desired property while not straying too far from the prototype. There is some overlap in methods used in the two approaches." }, { "heading": "2.1 DEEP GENERATIVE MODELS FOR MOLECULES", "text": "Virtual screening methods start from a large database of possible molecules and retain the promising ones (Eckert & Bajorath, 2007), as measured by some quality function f(·). 
Machine learning approaches expand on this by dynamically generating additional candidate molecules; Segler et al. (2017) use a stacked LSTM to produce large numbers of novel molecules which have characteristics similar to those of an existing database.
For properties which are expensive to evaluate, generating large sets of candidate molecules is not particularly useful. More sample-efficient global search can be achieved using Bayesian optimization methods, which use a generative model with a latent space that functions as a continuous representation of molecules (Gómez-Bombarelli et al., 2016; Kusner et al., 2017). Optimization is then carried out over this continuous representation space to find candidates which are expected to have the desired property. Local gradient-based search can also be applied on continuous latent spaces to optimize the latent representation with respect to a target property (Jin et al., 2018; Liu et al., 2018).
A challenge for these latent variable models is to reliably produce valid molecules. Character variational autoencoders (CVAEs) (Gómez-Bombarelli et al., 2016) generate molecules one character at a time, and are prone to syntactic and semantic errors; the grammar-based variational autoencoder (GVAE) (Kusner et al., 2017) and syntax-directed variational autoencoder (SD-VAE) (Dai et al., 2018) instead operate in the space of context-free and attribute grammars, respectively, to ensure syntactic validity. Other work develops generative models that operate on graph representations (Simonovsky & Komodakis, 2018; De Cao & Kipf, 2018; Jin et al., 2018; You et al., 2018; Liu et al., 2018), largely improving the ability to generate valid molecules.
Suppose we are given a training set of pairs D = {(xi, yi)}, i = 1, . . . , N, where x corresponds to a molecule and y represents the value of some property of the molecule x. Assume the molecules represent an i.i.d. sample from some unknown distribution p̃(x), which assigns high probability to molecules believed to be useful for a given task. Aside from Segler et al. (2017), which has no latent space and thus directly trains via maximum likelihood, these latent variable models are trained by optimizing a standard ELBO objective for variational autoencoders (Kingma & Welling, 2013). This entails learning a stochastic encoder qφ(z|x) which maps molecules into a latent space, and a stochastic decoder pθ(x|z) for reconstructing molecules, by maximizing
L(θ, φ) = Σ_{i=1}^{N} { E_{qφ(zi|xi)}[log pθ(xi|zi)] − D_KL(qφ(zi|xi) || p(zi)) }. (1)
Notably, the objective is not a function of y: most existing generative models with latent variables do not perform direct conditional generation, and approaches for targeted molecule discovery are bolted on to the learnt model. Some, e.g. Kusner et al. (2017), are trained in an “unsupervised” manner, agnostic to any property which may later need to be optimized. Others, e.g. Gómez-Bombarelli et al. (2016); Liu et al. (2018), train the autoencoder jointly alongside a function to predict y from z, hoping to guide the latent space to also be good for predicting the desired property. A recent exception is Assouel et al. (2018), which learns a deterministic autoencoder where the decoder takes the latent code and the desired property as input, using a mutual information term in training to steer the model towards generating molecules whose target properties match the input. Guimaraes et al. (2017); De Cao & Kipf (2018); You et al.
(2018) instead learn generative models optimized towards specific metrics, such as drug-likeness and solubility; the major downside is that these models must be retrained each time for a new property. In contrast, the autoencoder-based methods can be re-used to optimize towards any particular value of the property." }, { "heading": "2.2 STYLE TRANSFER WITH SUPERVISED VAES", "text": "While the latent representations learned through standard VAE models perform well on the task of molecule reconstruction, they do not necessarily provide interpretable factorised representations. A disentangled representation gives us additional control over the molecule generation process, allowing us to modify a single property while leaving the remaining ones unaffected (Bengio et al., 2013a). In many cases important variation in the data is easy to annotate. For example, in the case of molecule datasets we have access to different functional descriptors of the molecules obtained by chemoinformatics software such as RDKit (Landrum). Particularly useful to us here are supervised methods for learning disentangled representations (Kingma et al., 2014; Siddharth et al., 2017). These are distinct from unsupervised disentangling approaches such as InfoGAN (Chen et al., 2016) or β-VAE (Higgins et al., 2017), which encourage the latent factors to learn a disentangled representation by modifying the objective to promote component independence.
We will learn representations that specifically disentangle molecular properties of interest which we may later want to modify. Kingma et al. (2014) demonstrate how disentangling can be used to take two MNIST images of different digits, written in different styles, and independently change the digit while holding the style constant. An analogous operation on molecules would involve holding the physical structure of a molecule (its “style”) relatively fixed while modifying a salient property. Unlike (say) the style transfer example for the MNIST digits, the conditional distribution of molecules with a particular property value might be very diverse; for example, the QED score attempts to measure the drug-likeness of a molecule, and the set of molecules generated at high values of this score would hopefully place high probability on a large, varied set of molecules. An essential challenge here is that the property only provides a very weak signal as to the overall structure of the molecule. To account for this diversity, we model the conditional distribution with a latent variable z, such that pθ(x|y) = ∫ pθ(x|z, y) p(z) dz.
Disentangling the latent code z from the property y enables style transfer. This is done by taking an initial x, computing the posterior over the latent variable z, and then generating a new x′ with the property modified to have a target value y′, with pθ(x′|y′, x) = ∫ pθ(x′|y′, z) pθ(z|x) dz. Concretely, this involves fitting a joint generative model of the form pθ(x, y, z) = pθ(x|y, z) p(y) p(z), in which y and z are independent under the prior, and we assume a unit multivariate normal prior p(z). To infer the latent variable z we use a variational distribution qφ(z|x), which takes the form of a multivariate normal distribution whose parameters are a nonlinear function of x, to approximate the true posterior pθ(z|x, y). The objective function
LELBO(θ, φ) = Σ_{i=1}^{N} { E_{qφ(zi|xi)}[log pθ(xi|yi, zi)] − D_KL(qφ(zi|xi) || p(zi)) } (2)
corresponds to learning a supervised VAE (Kingma et al., 2014), and represents a fairly naïve approach to modeling a conditional distribution."
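For concreteness, a minimal PyTorch sketch of this supervised-VAE objective (Eq. 2) is given below. The dense Gaussian encoder, Bernoulli decoder, and layer sizes are illustrative assumptions and not the architecture of any specific model discussed here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisedVAE(nn.Module):
    """q(z|x) encoder plus p(x|y,z) decoder; y bypasses the encoder."""
    def __init__(self, x_dim, y_dim, z_dim=56, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + y_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x, y):
        h = self.enc(x)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(torch.cat([z, y], dim=-1)), mu, logvar

def negative_elbo(model, x, y):
    """Negative of the objective in Eq. (2), summed over the batch."""
    logits, mu, logvar = model(x, y)
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```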
}, { "heading": "3 CONDITIONAL GENERATION BY DISENTANGLING", "text": "Maximizing this conditional ELBO in Eq (2) will likely yield good reconstructions of molecules from an embedding z (alongside the true property y), but for properties which only weakly inform the generative model there is nothing to enforce that the variable y actually directly has an effect on the generative process. Since the value y is something we know is a derived property of the molecule x, it is completely possible for all information about y to also be encoded in the representation z, in which case there is no guarantee that the learnt likelihood pθ(x|y, z) actually takes into account the value of y — in fact, we know it is possible to fit variational autoencoders where the decoder simply has the form pθ(x|z) — and we are relying on the utility of y in reconstructions to see any sort of disentangling effect." }, { "heading": "3.1 CONSTRAINED ELBO", "text": "In the case of conditional generation of molecules, we often have access to some oracle function f (possibly non-differentiable) which for any given x outputs a property estimate y, for instance, the chemoinformatics software RDKit (Landrum). Since for conditional generation our ultimate goal is to generate a molecule x for any given target property y0, which then actually has f(x) = y0, we can reframe the problem by introducing hard constraints on the generated values, i.e. if restricting to values of y in the training set,\nmax θ,φ LELBO(θ, φ)\nsubject to Ex∼p(x|yi)[I[f(x) = yi]] = 1\nfor all i = 1, . . . , N . This is an unreasonably hard constraint, unlikely to be satisfied by any distribution other than one which simply places a point mass on the single training xi associated with yi, but we can relax it by considering that (unlike the molecular space x) the property space y is typically smooth, as many properties are continuous-valued and correspond to a human-interpretable scale. Following Ma et al. (2018) and Hu et al. (2017), we reframe the constraint as a soft penalty on the ELBO,\nL(θ, φ) = LELBO(θ, φ)− λ1 2 N∑ i=1 Ex̂∼pθ(x|yi)‖f(x̂)− yi‖ 2 (3)\nso that they are consistent with the property prediction, i.e., as we have an oracle function f which enable us to access the property of any generated data, we can explicitly add a soft constraint to our loss function to provide explicit guidance for the generative model such that f(x̂) = y. This constraint is expected to hold for any pair (x,y) we may happen to come across, not just those in the training data. We also show optimizing the relaxed constraint is equivalent to maximizing mutual information with the target yi and generated molecule x̂; for details see appendix Section 6.1." }, { "heading": "3.2 APPROXIMATING THE PROPERTY PREDICTOR", "text": "Introducing the regularizer as in Eq. (3) implicitly guides the reconstruction to take into account the property information, such that the reconstructed data should exhibit properties which match the input properties it is conditioned on. However, existing implementations of f are often non-differentiable or CPU-bound, and x̂ are discrete samples from a categorical distribution, all of which means the gradient of the regularizer can’t flow back to the generator. This is outlined in Figure 2. 
To enable gradient-based methods on GPUs during training and avoid discrete sampling, one approach would be to first fit a differentiable approximation to f, and then use either a Gumbel-softmax relaxation (Jang et al., 2016) or tricks like a “straight-through” estimator (Bengio et al., 2013b) as a continuous approximation for the discrete samples. Instead, we propose bypassing the discrete sampling step entirely and learning a function fω that can map from a learned representation of the molecules directly to the molecule's property (Hu et al., 2017).
To do this, we take as input the last hidden layer of the decoder network which parameterizes pθ(x|z, y), denoting this deterministic transformation as gθ(z, y). For the grammar VAE and the syntax-directed VAE, this last layer h = gθ(z, y) is the output of a recurrent layer that generates logits corresponding to unmasked and unnormalized log probabilities for each character at each position in the string; see Kusner et al. (2017) and Dai et al. (2018) for details on the implementation of the somewhat complex sampling process in the decoder. Ideally, fω would estimate the property distribution obtained by marginalizing out the discrete sampling step, with
fω(h ≡ gθ(z, y0)) ≈ E_{pθ(x|z,y0)}[f(x)], (4)
where we condition on z, and y0 refers to an arbitrary input target property.
Assuming the approximation in Eq. (4), we have
E_{pθ(x̂|yi)} ‖f(x̂) − yi‖² = E_{p(z)}[ E_{pθ(x̂|yi,z)} ‖f(x̂) − yi‖² ] ≈ E_{p(z)} ‖fω(gθ(z, yi)) − yi‖²,
an expectation over a real-valued variable which does not depend on any of the parameters we are estimating, meaning we can use a simple path estimate of the gradient with respect to θ, ω by exchanging the gradient with the expectation. We thus define a regularization term
Ldisent(θ, ω) = (λ1/2) Σ_{i=1}^{N} E_{p(z)} ‖fω(gθ(z, yi)) − yi‖² (5)
which can be used as a drop-in replacement for the non-differentiable penalty term in Eq. (3), yielding a candidate objective function
L(θ, φ) ≈ L̂ω(θ, φ) = LELBO(θ, φ) − Ldisent(θ, ω). (6)" }, { "heading": "3.3 LEARNING THE PROPERTY ESTIMATOR JOINTLY WITH GENERATIVE MODEL", "text": "While one could imagine attempting to learn fω jointly with φ, θ by direct optimization of Eq. (6), in practice this is very unstable, as values of gθ(z, yi) early in training may correspond to very poor generated molecules x̂i which may not have properties at all similar to yi. This can be sidestepped by training the property estimator jointly as part of an extended generative model on [x, y].
We note that the property estimator fω parameterizes a probability distribution pω(f(x)|z, y0), where x ∼ pθ(x|z, y0) and f is the oracle function such that f(x) = y. With a Gaussian distribution over the error, we can consider
pω(f(x)|z, y0) = N(f(x) | fω(gθ(z, y0)), λ2⁻¹ I) (7)
for small, fixed λ2. Therefore, we propose defining a new ELBO based on a joint autoencoder for {(xi, f(xi))}, albeit with a factorization such that the input yi bypasses the encoder and is passed directly into the decoder, with a joint likelihood
p_{θ,ω}(xi, f(xi) | zi, yi) = pω(f(xi) | zi, yi) pθ(xi | zi, yi). (8)
This yields a joint ELBO for the training set of
LELBO(ω, θ, φ) = Σ_{i=1}^{N} E_{qφ(z|xi)}[ log ( pω(f(xi)|z, yi) pθ(xi|z, yi) p(z) / qφ(z|xi) ) ]. (9)
Note that we can rewrite this ELBO as a function of the previous one, with
LELBO(ω, θ, φ) = LELBO(θ, φ) − (λ2/2) Σ_{i=1}^{N} E_{qφ(z|xi)} ‖fω(gθ(z, yi)) − yi‖², (10)
where we also see that LELBO(ω, θ, φ) ≤ LELBO(θ, φ), allowing us to define an objective
L̂(ω, θ, φ) = LELBO(ω, θ, φ) − Ldisent(θ, ω), (11)
which is a lower bound on Eq. (6).
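A sketch of how the surrogate objective in Eq. (11) could be assembled is given below; `g_theta` (the decoder's last hidden layer), `f_omega` (the learned property predictor), and the supplied z samples are stand-ins for the components defined above, while the λ values follow the settings reported in the appendix.

```python
import torch.nn.functional as F

def surrogate_loss(neg_elbo, g_theta, f_omega, z_posterior, z_prior, y,
                   lam1=50.0, lam2=1.0):
    """Negative of L_hat(omega, theta, phi) in Eq. (11), to be minimized.

    z_posterior: z ~ q(z|x), for the joint-ELBO penalty of Eq. (10)
    z_prior:     z ~ p(z),   for the disentangling term of Eq. (5)
    """
    joint = 0.5 * lam2 * F.mse_loss(
        f_omega(g_theta(z_posterior, y)), y, reduction='sum')
    disent = 0.5 * lam1 * F.mse_loss(
        f_omega(g_theta(z_prior, y)), y, reduction='sum')
    return neg_elbo + joint + disent
```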
Notice the two terms we have added to the original ELBO are quite similar, differing only in the choice of distribution: for learning fω, we wish to use values of z simulated from the approximate posterior qφ(z|x), whereas for enforcing a constraint across all possible generations we simulate z from the prior p(z)." }, { "heading": "3.4 GRADIENT ESTIMATION", "text": "As the regularizer Ldisent(θ, ω) encourages disentangling by constraining the molecules generated from yi to have property yi no matter what value z takes, we found that it does not necessarily evaluate at meaningful values of z when sampled randomly from p(z). This roughly corresponds to the notion that not all combinations of “style” and property are physically attainable; ideally, for style transfer we would like the generated molecule to stay “close” in structure to the original molecule that we intended to modify. When estimating (gradients of) the soft constraint term Ldisent(θ, ω), we found it advantageous to use samples of z which correspond to encodings of actual data points, as opposed to random samples from the prior. We approximate expectations with respect to p(z) by looking at the so-called marginal posterior; we note that
p(z) = ∫ pθ(z|x) pθ(x) dx ≈ (1/N) Σ_j pθ(z|xj) ≈ (1/N) Σ_j qφ(z|xj),
where the first approximation uses the empirical data distribution as an approximation to the model marginal pθ(x), and the second uses our variational posterior approximation qφ(z|x). We define this quantity as q(z) = (1/N) Σ_j qφ(z|xj), a mixture of Gaussians, which we can sample from by drawing random values from our dataset and then drawing from their encoding distributions.
When we use this in estimating gradients of the soft constraint, we can use samples from the same minibatch, exactly corresponding to a property transfer task. That is, for any particular yi in the dataset, we can estimate
E_{p(z)} ∇_{θ,ω} ‖fω(gθ(z, yi)) − yi‖² ≈ E_{q(zj|xj)} ∇_{θ,ω} ‖fω(gθ(zj, yi)) − yi‖²
for any uniformly randomly sampled j ≠ i. By sampling zj from q(zj|xj) where j ≠ i, we make sure that all the label information the decoder receives comes from the actual yi that is fed to the decoder, and that zj does not include any information about the label. This can be evaluated easily by simply evaluating the penalty term of Eq. (10) twice per minibatch: once as in Eq. (10), and once to approximate Ldisent(θ, ω) by permuting the properties in the minibatch so that they are assigned to incorrect molecules. We detail the training algorithm in Section 6.2 of the appendix." }, { "heading": "4 EXPERIMENTS", "text": "We experiment with the QM9 dataset (Ramakrishnan et al., 2014), which contains 134k molecules with up to 9 heavy atoms, and the ZINC dataset (Sterling & Irwin, 2015), containing 250k drug-like molecules. Our goal here is two-fold: we would like to understand (1) whether a supervised variational autoencoder is capable of learning suitable conditional distributions over molecules, and (2) to what extent this task is assisted by the additional regularization term corresponding to the soft constraint.
We represent molecules using the one-hot encoding of their SMILES production rules (Kusner et al., 2017) and add a semantic constraint (Dai et al., 2018) on the decoder network to avoid generating syntactically correct but semantically invalid molecules. We use 80 production rules to describe molecules and set the maximum SMILES sequence length to 100 for the QM9 dataset and 278 for the ZINC dataset. We experiment with the logP property of the molecules (Wildman & Crippen, 1999).
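The logP values used throughout can be computed with standard chemoinformatics tooling; a minimal sketch using RDKit's Crippen estimator (Wildman & Crippen, 1999):

```python
from rdkit import Chem
from rdkit.Chem import Crippen

def logp_of(smiles):
    """Crippen logP of a molecule given as a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"unparsable SMILES: {smiles}")
    return Crippen.MolLogP(mol)

# e.g. logp_of("CCO") returns RDKit's Crippen logP estimate for ethanol
```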
We use the same encoder and decoder network structure as Dai et al. (2018), with the only difference that our decoder takes as input the concatenation of y, z. We give the details of the architecture in Appendix Section 6.2.
We evaluate the reconstruction accuracy and the quality of the molecules generated by our method, which we denote by CGD-VAE (conditional generation with disentangling), and compare against CVAE (Gómez-Bombarelli et al., 2016), GVAE (Kusner et al., 2017), and SD-VAE (Dai et al., 2018). We explore its conditional generation performance in two settings: controlling only the property value, and controlling both the property value and the molecule structure in what can be seen as property transfer. We took the results of CVAE and GVAE from the literature. For SD-VAE we used the authors' code with the default values to generate results for QM9, since these were not available. We also implemented supervised VAE versions of SD-VAE, which we denote Sup-VAE-X-GRU (X ∈ {1, 3} denotes the number of GRU layers) and which can do conditional generation.
[Figure 3 caption fragment: generated molecules with property within a 15% range of the desired one. Figure 4: Property transfer.]
Before proceeding with the experiments we will give some additional details on how we do conditional generation from pθ(x|y0) given the target property y0. Instead of marginalizing over the prior p(z), we mirror the approach taken during training and integrate over an approximation to the marginal inference distribution qφ(z) = (1/N) Σ_{i=1}^{N} qφ(z|xi), which better characterizes where the mass of the dataset is in the latent space. However, as N is large and we do not wish to keep the entire dataset available at test time, we approximate qφ(z) with an isotropic Gaussian distribution q̂σ(z) = N(z | 0, σ²I). We estimate σ for each model by Monte Carlo samples from qφ(z). For the supervised VAE without the soft constraint regularizer this yields 0.053 for QM9 and 0.118 for ZINC. For our model with the soft constraint we get 0.0354 for QM9 and 0.096 for ZINC. We do conditional generation of x given y0 by sampling from pθ(x|y0) = ∫ q̂σ(z) pθ(x|z, y0) dz.
We evaluate reconstruction performance in terms of the percentage of correctly reconstructed molecules on test sets of size 10k for QM9 and 5k for ZINC; for the latter we used the default test set. We evaluate the generated molecules' quality by the percentage of valid, unique (i.e., the percentage of unique molecules among the generated valid molecules), and novel (i.e., the percentage of molecules never seen in the training set among the generated molecules) molecules. We estimate these quantities by sampling 10k (5k for ZINC) z from q̂σ(z) and coupling each one of them with a logP value, y, randomly selected from the test set; we subsequently decode the z, y concatenation. We can see that our model has better reconstruction performance compared to the baselines, while in some cases generating slightly fewer valid molecules (Table 1). In terms of the three quality measures, it achieves excellent performance, always being one of the two best-performing methods on any metric.
To visualise how the conditional generation operates, we randomly sample a molecule from the test set and obtain its property value y0. We then draw 50 random samples zi from q̂σ(z) and decode the [zi, y0] vectors. Among the generated valid molecules we compute the percentage of those that have a property value yi that is within a 15% range of the y0 property value. In Figure 3 we present the molecules obtained for a test molecule that had a logP of −0.5759.
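In code, this generation procedure amounts to the short sketch below; `decode_smiles` is a hypothetical handle to the trained decoder's sampling routine, and sigma is the fitted standard deviation of q̂σ(z) described above.

```python
import torch

def conditional_generate(decode_smiles, y0, sigma, n_samples=50, z_dim=56):
    """Sample candidates from p(x|y0) = int q_hat_sigma(z) p(x|z, y0) dz."""
    candidates = []
    for _ in range(n_samples):
        z = sigma * torch.randn(z_dim)  # z ~ N(0, sigma^2 I)
        candidates.append(decode_smiles(z, y0))
    return candidates
```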
Out of the 50 molecules generated this way, 46 were valid, of which we give in Figure 3 the five that were within a 15% range of the y0 value. As we can see, we get molecules that are structurally very different from the original one, yet they have similar logP values.
To quantify the quality of the conditional generations, we measure the correlation between the property value we obtain by conditional generation and the property value on which we conditioned the generation. We randomly sample 1000 y values from the test set and 1000 z values from the approximate learned prior q̂σ(z). We decode each pair, obtain x̂ ∼ pθ(x|y, z), and then measure the correlation of the original y with the ŷ of the generated x̂. In Table 2, we give the correlation estimates for our method and the Sup-VAE baselines. As we can see, our method has a considerably higher correlation score between the input and the obtained property than Sup-VAE. Conditional generation seems considerably harder for the ZINC dataset for all methods.
To visualise the style transfer behavior of our model, we randomly sample two molecules xA, xB from the test set. We then sample zA from the learned posterior qφ(z|xA). We subsequently decode [zA, yB], where yB is the property of xB, and get a new molecule x̂AB. Ideally, the obtained molecule x̂AB should have a property value (logP) close to the target yB and be similar to xA. In Figure 4 we give one such example. To put the results into context, in Figure 8 in the appendix we give the results of a virtual screening method, where we select from the full dataset five molecules which are structurally similar to xA and have logP values close to yB. As we can see, the molecule that our model generates is a new one.
To quantify the style transfer performance, we proceed in exactly the same manner as we did to quantify the conditional generation performance. However, now instead of sampling z from the approximate learned prior q̂σ(z), we first sample some x from the test set and then sample z from the learned posterior q(z|x). The results are in the second column of Table 2. As we can see, the correlation values are now lower than the ones we obtained in the simple conditional generation case. This can be explained by the fact that we are now forcing a specific combination of structure (z comes from a real molecule) and property, which might simply be physically infeasible, since the molecule space is discrete and not all combinations are possible. In addition, as was the case for conditional generation, style transfer is considerably more difficult for the ZINC dataset.
We further explore the style transfer and visualize how our model covers the combined space of molecule structure and properties. We sample nine molecules from the QM9 test set and get their z encodings. For each such encoding we decode the vectors [z, y], y ∈ [−4.9, 4.9], with the y (logP) interval sampled at 11 points. We give in Figure 5 the resulting valid molecules; each column there corresponds to one of the nine original molecules (the ones surrounded by a dotted rectangle) and their decodings with different logP values. For each original molecule we give the generated molecules ordered along the y axis according to the y property that they actually exhibit. The x-axis does not provide an ordering of the original molecules according to z; in fact, we have ordered the original molecules by their y property. As we can see, not all (z, y) combinations produce a result.
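In code, this coverage sweep amounts to decoding each encoding against a grid of property values, as in the sketch below; `encode_mean` and `decode_smiles` are hypothetical handles to the trained encoder and decoder.

```python
import torch

def property_sweep(encode_mean, decode_smiles, smiles_list,
                   y_low=-4.9, y_high=4.9, n_points=11):
    """Decode each molecule's encoding against a grid of logP values."""
    y_grid = torch.linspace(y_low, y_high, n_points)
    results = {}
    for smi in smiles_list:
        z = encode_mean(smi)  # posterior mean of q(z|x) for this molecule
        results[smi] = [decode_smiles(z, float(y)) for y in y_grid]
    return results
```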
The holes in Figure 5 can be explained by the physical infeasibility of the combination, by a limitation of the learned model, or both.
We can use conditional generation to control the value of the desired property in a fine-grained manner, in what can be seen as direct property optimization. We visualise the level of control we have in an experiment with a single molecule (whose logP is -1.137), which we randomly sample from the test set. We obtain its z encoding and perform generations with increased logP taking values on a 1000-point grid in [−1.137, 4.9]. We then decode [z, yi] and compute the logP value of the generated molecules. Among the 1000 generated molecules only 19 are unique. We get an increase of logP of a very discrete nature (Figure 6). As already discussed, not all combinations of structure and properties are possible. The generated molecules themselves are shown in the supplemental material.

Model           z ∼ q̂σ(z)   z ∼ q(z|x)
QM9
 Sup-VAE-1-GRU    0.5420      0.2526
 CGD-VAE-1-GRU    0.7185      0.5005
 Sup-VAE-3-GRU    0.6958      0.4204
 CGD-VAE-3-GRU    0.7414      0.4715
ZINC
 Sup-VAE-1-GRU    0.2301      0.0481
 CGD-VAE-1-GRU    0.3877      0.0880
 Sup-VAE-3-GRU    0.3514      0.1808
 CGD-VAE-3-GRU    0.3966      0.1559

Table 2: Correlation between the desired input property and the obtained property; z ∼ q̂σ(z) corresponds to the conditional generation case, and z ∼ q(z|x) to the property transfer case." }, { "heading": "4.1 CONDITIONAL LSTM BASELINE", "text": "Finally, we consider a variant of the stacked LSTM model of Segler et al. (2017), with no latent space, where the model is modified to take a target logP value as an additional input at each generation step. This model forms a very strong baseline for many distribution matching tasks (Liu et al., 2018; ?), though as best we are aware it has never been used directly for conditional generation given a target property. We use a modification of the implementation provided by (?) with three layers and default settings, and fit the model by maximum likelihood training on pθ(x|y) = Π_{t=1}^{T} pθ(xt | x_{1:t−1}, y).
Training on the ZINC dataset, we find the generated molecules from this model have a very high correlation (0.975) with the target logP value, greatly outperforming any of the latent variable models we consider. This suggests that such a model would be very useful for generating candidates globally, but as the model has no latent variable it is not amenable to style transfer. We observe this in Figure 7, which samples 100 candidate molecules from both the stacked LSTM model and CGD-VAE-3-GRU, conditioning on the property of one randomly-chosen test set example, while computing the Tanimoto similarity (computed using Morgan fingerprints of radius 2) to a second randomly-chosen test set example, across 200 pairs. The VAE has higher Tanimoto similarities, as it can condition on the latent variable of the target molecule, representing a trade-off against the better adherence to the target property value of the unconditioned LSTM." }, { "heading": "5 CONCLUSION", "text": "We presented a single-step approach for the conditional generation of molecules with desired properties. Our model also allows conditioning generation on a prototype molecule with a desired high-level structure. This work thus directly inverts the traditional relationship between molecules and their properties.
We found that training the deep generative models conditional on target properties, following a supervised VAE approach, does not appreciably harm the quality of the unconditional generative model as measured by validity, novelty, and uniqueness of samples. Furthermore, we see that the additional act of regularizing the output using an approximate property predictor helps improve both reconstruction accuracy and property correlations in most combinations of tasks and datasets, particularly for the smaller QM9 dataset and for smaller models with fewer RNN layers. We also note that, although none of the deep latent variable models are competitive with an LSTM baseline when purely considering generation conditioned on a target property value, the low Tanimoto similarity between randomly sampled candidates and an arbitrary style transfer target makes clear that such a model is not suitable for targeted generation of candidates which are close in structure to a particular prototype.
In future work, we want to explore how to further improve the correlation between the desired input properties provided to the decoder and the properties of the generated molecules. Moreover, we also want to condition on multiple properties; while this is in principle possible in our framework, we do not explore it empirically here. Modifying a single property while constraining the remaining ones to be close to the original can further aggravate the infeasibility problem, as not all combinations of molecular properties may even be feasible, perhaps requiring learning a dependency structure between multiple properties." }, { "heading": "6 APPENDIX", "text": "" }, { "heading": "6.1 THE REGULARISER AND ITS RELATION TO MUTUAL INFORMATION MAXIMIZATION", "text": "The soft constraint in the loss (3) is, in fact, equivalent to a simple mutual-information-maximization formulation between the generated molecules x̂ and the target property y provided to the generator. Assume the true conditional distribution is p̃(y|x):
I(y; x̂) = H(y) − H(y|x̂) = H(y) + E_{x̂∼pθ(x|y)} E_{y′∼p̃(y|x̂)}[log p̃(y′|x̂)] (12)
We do not know the true distribution p̃(y|x̂); however, RDKit provides an estimate p(y|x̂) of this distribution, assuming a Gaussian distribution over the error:
p(y|x̂) = N(y | f(x̂), λ1⁻¹ I) (13)
where f is the molecule property estimator, i.e., RDKit. We have:
I(y; x̂) = H(y) + E_{x̂∼pθ(x|y)} E_{y′∼p̃(y|x̂)}[log ( (p̃(y′|x̂) / p(y′|x̂)) p(y′|x̂) )]
        = H(y) + E_{x̂∼pθ(x|y)}[ D_KL(p̃(y′|x̂) || p(y′|x̂)) + E_{y′∼p̃(y|x̂)} log p(y′|x̂) ]
        ≥ H(y) + E_{x̂∼pθ(x|y)}[ E_{y′∼p̃(y|x̂)} log p(y′|x̂) ] (14)
By following Lemma 5.1 given in Chen et al. (2016), we have
I(y; x̂) ≥ H(y) + E_{y∼p(y), x̂∼pθ(x|y)}[log p(y|x̂)] (15)
As H(y) is constant, minimizing E_{x̂∼pθ(x|yi)} ‖f(x̂) − yi‖² is equivalent to maximizing I(y; x̂) under the assumption that p(y|x̂) is close to p̃(y|x̂)." }, { "heading": "6.2 ARCHITECTURE AND TRAINING PROCEDURE DESCRIPTION", "text": "We use the same encoder and decoder network structure as Dai et al. (2018), with the only difference that our decoder takes as input the concatenation of y, z. As GRU layers become computationally expensive when the sequence length increases, we also examined the model using fewer GRU layers. To be precise, the decoder in Dai et al. (2018) takes the form of a dense hidden layer with ReLU activation followed by three GRU layers (Chung et al., 2014).
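A minimal sketch of such a conditional decoder layout is given below; the 80-way rule logits, maximum length of 100, and latent dimension of 56 follow the setup described in this paper, while the hidden width and the way the input is repeated across time steps are illustrative assumptions (the variants described next differ only in where y is injected).

```python
import torch
import torch.nn as nn

class ConditionalGRUDecoder(nn.Module):
    """Dense + ReLU over [z, y], then stacked GRUs emitting rule logits."""
    def __init__(self, z_dim=56, y_dim=1, h_dim=512, n_rules=80, n_layers=3):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(z_dim + y_dim, h_dim), nn.ReLU())
        self.gru = nn.GRU(h_dim, h_dim, num_layers=n_layers, batch_first=True)
        self.out = nn.Linear(h_dim, n_rules)  # unnormalized rule logits

    def forward(self, z, y, seq_len=100):
        h0 = self.fc(torch.cat([z, y], dim=-1))      # (batch, h_dim)
        inp = h0.unsqueeze(1).repeat(1, seq_len, 1)  # repeat per time step
        out, _ = self.gru(inp)
        return self.out(out)                         # (batch, T, n_rules)
```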
We tried two different settings of the decoder: in the first setting, we feed the concatenation of y, z to a dense layer and then apply a one-layer GRU; in the second setting, to enhance the effect of y in the decoder, we feed y not only to the dense layer but also to each layer of the GRU. Furthermore, we set the dimension of the latent representation to 56. For the oracle function estimator fω, we use the same network architecture as the encoder (there is no parameter sharing) and add one more fully connected layer followed by a Tanh transformation.
To speed up convergence, we initialize fω via pre-training, where we train fω on the output of a well-trained (maximum 500 epochs with early stopping) supervised VAE's decoder to predict the molecule's property value. We also initialize the parameters of the encoder/decoder networks with the partially trained supervised VAE model (after 40 epochs for QM9, 100 epochs for ZINC). We do not update ω and φ, θ simultaneously; instead, we do an alternating optimization. We update ω continuously for five epochs while holding φ, θ fixed, and do the same for updating φ, θ. We set the hyper-parameter value λ1 to 50 and λ2 to 1. The mini-batch size is set to 300 for QM9 and 100 for ZINC. We use the Adam optimizer with learning rate 0.0001 and a PyTorch learning-rate scheduler on the validation loss. The general training algorithm is described in Algorithm 1 below. In our experiments, to train fω, we skipped the second term in step 7, which means we only train fω on the training data and not on the newly generated molecules obtained by permuting the property. The reason for this is that, during training, we found that it is easy for the model to learn to reconstruct but hard to conditionally generate molecules with given properties when we have no guidance on what the molecules should look like. Furthermore, some combinations of z and y are physically infeasible. In this case, when the conditional generation is not yet good enough during training, we end up fitting fω on misrepresented molecule representations, which makes the optimization harder.
Algorithm 1 Training algorithm
1: Initialize pθ(x|z, y), qφ(z|x), fω
2: for i = 1, 2, . . . , N (maximum epoch number) do
3:   for j = 1, 2, . . . , L, sample a minibatch D = (X, Y) = {(xi, yi)}_{i=1}^{M} of M samples do
4:     randomly permute the property set Y to obtain Y* and define a label-permuted minibatch D* = (X, Y*) = {(xi, y*_i)}_{i=1}^{M}
5:     θj = θ_{j−1} − γ( −∇θ LELBO(θ, φ, ω) + (λ1/2) Σ_{(xi,yi)∈D*} E_{q(z|xi)} ∇θ ‖fω(gθ(z, yi)) − yi‖² )
6:     φj = φ_{j−1} − γ( −∇φ LELBO(θ, φ, ω) )
7:     ωj = ω_{j−1} − γ( (λ2/2) Σ_{(xi,yi)∈D} E_{q(z|xi)} ∇ω ‖fω(gθ(z, yi)) − yi‖² + (λ1/2) Σ_{(xi,yi)∈D*} E_{q(z|xi)} ∇ω ‖fω(gθ(z, yi)) − yi‖² )" }, { "heading": "6.3 USE A FIXED PRE-TRAINED PROPERTY PREDICTION FUNCTION", "text": "We also investigate the case where we train the property prediction function fω on the well-trained supervised VAE's output and keep it fixed during the training of the main model. We give the performance in Tables 3 and 4. Using a fixed fω delivers mixed results in terms of reconstruction and generation performance (Table 3). However, in terms of conditional generation (Table 4), it does perform better than the baselines, but worse compared to the case where we also update fω during learning." }, { "heading": "6.4 VIRTUAL SCREENING", "text": "Figure 8 displays the results of a virtual screening method, where we select from the full dataset five molecules which are structurally similar to xA in Figure 4 in Section 4 and have logP values close to yB.
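A minimal sketch of such a similarity screen with RDKit is shown below; the combined ranking criterion is an illustrative assumption, not the exact selection rule used for Figure 8.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, Crippen

def screen_similar(database_smiles, ref_smiles, target_logp, top_k=5):
    """Pick molecules structurally close to a reference, with logP near a target."""
    ref_fp = AllChem.GetMorganFingerprintAsBitVect(
        Chem.MolFromSmiles(ref_smiles), 2)  # radius-2 Morgan fingerprint
    scored = []
    for smi in database_smiles:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:
            continue
        sim = DataStructs.TanimotoSimilarity(
            ref_fp, AllChem.GetMorganFingerprintAsBitVect(mol, 2))
        gap = abs(Crippen.MolLogP(mol) - target_logp)
        scored.append((sim - gap, smi))  # crude trade-off of the two criteria
    scored.sort(reverse=True)
    return [smi for _, smi in scored[:top_k]]
```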
As Figure 8 shows, the molecule that our model generates is a new one.
6.5 METRICS AS A FUNCTION OF y − y′
Validity and novelty were mainly used to assess the performance of the generative model alone. However, it is also interesting to see whether the model is capable of generating valid and novel molecules if we start from an existing molecule and drift away from the original label, i.e., if we get z from q(z|x) and then sample different values from p(x|z, y′) as y′ moves away from y. We randomly sample a molecule x whose logP is y from the test set, then sample 10 z from q(z|x). For each such z we couple it with a y′ that is different from y, and sample 10 molecules from p(x|z, y′). Eventually, for each such y′, starting from the original molecule x, we generate 100 molecules, and we report the validity, uniqueness, and novelty as a function of y − y′. Figure 9 displays the result of repeating the above process for 20 randomly sampled (x, y) pairs along 100 grid points for y − y′. The result confirms that on a big dataset, the conditional generative model's uniqueness, validity, and novelty are not affected by the size of the modification made to the property. On a small dataset, uniqueness is likewise unaffected; as expected, however, novelty increases as the property modification size increases, and validity drops slightly as the property modification size increases.
Figure 9: CGD-VAE-3-GRU model validity, novelty, and uniqueness performance on the property transfer task as a function of y − y′ on the QM9 dataset.
Figure 10: CGD-VAE-3-GRU model validity, novelty, and uniqueness performance on the property transfer task as a function of y − y′ on the ZINC dataset." }, { "heading": "6.6 SAMPLING FROM APPROXIMATED MARGINAL POSTERIOR FOR GENERATION", "text": "With our model, during generation, we observe that sampling from the approximated marginal posterior improves generation performance when compared to sampling from the prior. Here we investigate whether this finding holds for the other baseline models. Exploring the behavior of the baselines when z is sampled from the approximate marginal posterior, we observe that for CVAE and GVAE the validity did not change (as the q̂(z) and p(z) are essentially identical); for the SD-VAE the validity increases (Table 5)." }, { "heading": "6.7 MOLECULE PROPERTY OPTIMIZATION", "text": "Figure 11 visualizes the generated molecules from the property optimization task given in Figure 6, Section 4. The molecules are generated by increasing the logP of a given molecule, i.e., we hold z fixed and increase the y value. Among all the generated molecules, 19 are unique." } ]
2019
null
SP:961781ea8113343d82568b49c26f0889d5632aba
[ "The paper proposes a new method for organizing episodic memory in with deep Q-networks. It organizes the memory as a graph in which nodes are internal representations of observed states and edges link state transitions. Additionally, nodes from different episodes are merged into a single node if they represent the same state, allowing for inter-episode value propagation of stored rewards. The authors experimentally evaluate the method against reasonable baselines and show improved performance in the majority of tasks.", "This paper proposes Episode Reinforcement Learning with Associative Memory (ERLAM), which maintains a graph based on the state transitions (i.e. nodes correspond to states, and edges correspond to transitions) and propagates the values through the edges in the graph in the reverse order of each trajectory. The learned associative memory is then used for the regularization loss for training Q-network. Experimental results show that ERLAM significantly improves the sample efficiency in Atari benchmarks." ]
Sample efficiency has been one of the major challenges for deep reinforcement learning. Non-parametric episodic control has been proposed to speed up parametric reinforcement learning by rapidly latching onto previously successful policies. However, previous work on episodic reinforcement learning neglects the relationship between states and only stores experiences as unrelated items. To improve the sample efficiency of reinforcement learning, we propose a novel framework, called Episodic Reinforcement Learning with Associative Memory (ERLAM), which associates related experience trajectories to enable reasoning about effective strategies. We build a graph on top of states in memory based on state transitions and develop a reverse-trajectory propagation strategy to allow rapid value propagation through the graph. We use the non-parametric associative memory as early guidance for a parametric reinforcement learning model. Results on the navigation domain and Atari games show our framework achieves significantly higher sample efficiency than state-of-the-art episodic reinforcement learning models.
[ { "affiliations": [], "name": "Guangxiang Zhu" }, { "affiliations": [], "name": "Zichuan Lin" }, { "affiliations": [], "name": "Guangwen Yang" }, { "affiliations": [], "name": "Chongjie Zhang" } ]
[ { "authors": [ "John R Anderson", "Gordon H Bower" ], "title": "Human associative memory", "venue": "Psychology press,", "year": 2014 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Marc G Bellemare", "Will Dabney", "Rémi Munos" ], "title": "A distributional perspective on reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Richard Bellman" ], "title": "On a routing problem", "venue": "Quarterly of applied mathematics,", "year": 1958 }, { "authors": [ "Dimitri P Bertsekas" ], "title": "Dynamic programming and optimal control, volume 1", "venue": "Athena scientific Belmont, MA,", "year": 1995 }, { "authors": [ "Charles Blundell", "Benigno Uria", "Alexander Pritzel", "Yazhe Li", "Avraham Ruderman", "Joel Z Leibo", "Jack Rae", "Daan Wierstra", "Demis Hassabis" ], "title": "Model-free episodic control", "venue": "arXiv preprint arXiv:1606.04460,", "year": 2016 }, { "authors": [ "Mathew Botvinick", "Sam Ritter", "Jane X Wang", "Zeb Kurth-Nelson", "Charles Blundell", "Demis Hassabis" ], "title": "Reinforcement learning, fast and slow", "venue": "Trends in cognitive sciences,", "year": 2019 }, { "authors": [ "Olivier Chapelle", "Lihong Li" ], "title": "An empirical evaluation of thompson sampling", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Will Dabney", "Georg Ostrovski", "David Silver", "Rémi Munos" ], "title": "Implicit quantile networks for distributional reinforcement learning", "venue": "arXiv preprint arXiv:1806.06923,", "year": 2018 }, { "authors": [ "Benjamin Eysenbach", "Ruslan Salakhutdinov", "Sergey Levine" ], "title": "Search on the replay buffer: Bridging planning and reinforcement learning", "venue": "arXiv preprint arXiv:1906.05253,", "year": 2019 }, { "authors": [ "Meire Fortunato", "Mohammad Gheshlaghi Azar", "Bilal Piot", "Jacob Menick", "Ian Osband", "Alex Graves", "Vlad Mnih", "Remi Munos", "Demis Hassabis", "Olivier Pietquin" ], "title": "Noisy networks for exploration", "venue": "arXiv preprint arXiv:1706.10295,", "year": 2017 }, { "authors": [ "Itzhak Gilboa", "David Schmeidler" ], "title": "Case-based decision theory", "venue": "The Quarterly Journal of Economics,", "year": 1995 }, { "authors": [ "Steven Hansen", "Alexander Pritzel", "Pablo Sprechmann", "André Barreto", "Charles Blundell" ], "title": "Fast deep reinforcement learning using online adjustments from the past", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Anna Harutyunyan", "Marc G Bellemare", "Tom Stepleton", "Rémi Munos" ], "title": "Q(λ) with off-policy corrections", "venue": "In International Conference on Algorithmic Learning Theory,", "year": 2016 }, { "authors": [ "Frank S He", "Yang Liu", "Alexander G Schwing", "Jian Peng" ], "title": "Learning to play in a day: Faster deep reinforcement learning by optimality tightening", "venue": "arXiv preprint arXiv:1611.01606,", "year": 2016 }, { "authors": [ "Zhiao Huang", "Fangchen Liu", "Hao Su" ], "title": "Mapping state space using landmarks for universal goal reaching", "venue": "arXiv preprint arXiv:1908.05451,", "year": 2019 }, { "authors": [ "Leslie Pack Kaelbling" ], "title": "Hierarchical learning in stochastic domains: Preliminary 
results", "venue": "In Proceedings of the tenth international conference on machine learning,", "year": 1993 }, { "authors": [ "Michael Kearns", "Satinder Singh" ], "title": "Near-optimal reinforcement learning in polynomial time", "venue": "Machine learning,", "year": 2002 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Teuvo Kohonen" ], "title": "Self-organization and associative memory, volume 8", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Máté Lengyel", "Peter Dayan" ], "title": "Hippocampal contributions to control: the third way", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Zichuan Lin", "Tianqi Zhao", "Guangwen Yang", "Lintao Zhang" ], "title": "Episodic memory deep q-networks", "venue": "arXiv preprint arXiv:1805.07603,", "year": 2018 }, { "authors": [ "David Marr", "David Willshaw", "Bruce McNaughton" ], "title": "Simple memory: a theory for archicortex", "venue": "In From the Retina to the Neocortex,", "year": 1991 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Rémi Munos", "Tom Stepleton", "Anna Harutyunyan", "Marc Bellemare" ], "title": "Safe and efficient off-policy reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Ian Osband", "Charles Blundell", "Alexander Pritzel", "Benjamin Van Roy" ], "title": "Deep exploration via bootstrapped dqn", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Matthias Plappert", "Rein Houthooft", "Prafulla Dhariwal", "Szymon Sidor", "Richard Y Chen", "Xi Chen", "Tamim Asfour", "Pieter Abbeel", "Marcin Andrychowicz" ], "title": "Parameter space noise for exploration", "venue": null, "year": 1905 }, { "authors": [ "Alexander Pritzel", "Benigno Uria", "Sriram Srinivasan", "Adria Puigdomenech Badia", "Oriol Vinyals", "Demis Hassabis", "Daan Wierstra", "Charles Blundell" ], "title": "Neural episodic control", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Tom Schaul", "Daniel Horgan", "Karol Gregor", "David Silver" ], "title": "Universal value function approximators", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Tom Schaul", "John Quan", "Ioannis Antonoglou", "David Silver" ], "title": "Prioritized experience replay", "venue": "arXiv preprint arXiv:1511.05952,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", 
"Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. nature,", "year": 2016 }, { "authors": [ "Bradly C Stadie", "Sergey Levine", "Pieter Abbeel" ], "title": "Incentivizing exploration in reinforcement learning with deep predictive models", "venue": "arXiv preprint arXiv:1507.00814,", "year": 2015 }, { "authors": [ "Robert J Sutherland", "Jerry W Rudy" ], "title": "Configural association theory: The role of the hippocampal formation in learning, memory, and amnesia", "venue": "Psychobiology,", "year": 1989 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction, volume 1", "venue": "MIT press Cambridge,", "year": 1998 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": null, "year": 2011 }, { "authors": [ "Haoran Tang", "Rein Houthooft", "Davis Foote", "Adam Stooke", "OpenAI Xi Chen", "Yan Duan", "John Schulman", "Filip DeTurck", "Pieter Abbeel" ], "title": "exploration: A study of count-based exploration for deep reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Norman Tasfi" ], "title": "Pygame learning environment", "venue": "https://github.com/ntasfi/ PyGame-Learning-Environment,", "year": 2016 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning", "venue": null, "year": 2012 }, { "authors": [ "Hado Van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double qlearning", "venue": "In Thirtieth AAAI conference on artificial intelligence,", "year": 2016 }, { "authors": [ "Ziyu Wang", "Tom Schaul", "Matteo Hessel", "Hado Van Hasselt", "Marc Lanctot", "Nando De Freitas" ], "title": "Dueling network architectures for deep reinforcement learning", "venue": "arXiv preprint arXiv:1511.06581,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep reinforcement learning (RL) has achieved remarkable performance on extensive complex domains (Mnih et al., 2015; Lillicrap et al., 2016; Silver et al., 2016; Schulman et al., 2017). Deep RL research largely focuses on parametric methods, which usually depend on a parametrized value function. The model-free approaches are quite sample inefficient and require several orders of magnitude more training samples than a human. This is because gradient-based updates are incremental and slow and have global impacts on parameters, leading to catastrophic inference issues.\nRecently, episodic reinforcement learning has attracted much attention for improving sample efficiency of deep reinforcement learning, such as model-free episodic control (MFEC) (Blundell et al., 2016), neural episodic control (NEC) (Pritzel et al., 2017), ephemeral value adjustments (EVA) (Hansen et al., 2018), and episodic memory deep q-networks (EMDQN) (Lin et al., 2018). Episodic control is inspired by the psychobiological and cognitive studies of human memory (Sutherland & Rudy, 1989; Marr et al., 1991; Lengyel & Dayan, 2008; Botvinick et al., 2019) and follows the idea of instance-based decision theory (Gilboa & Schmeidler, 1995). It builds a non-parametric episodic memory to store past good experiences and thus can rapidly latch onto successful policies when encountering states similar to past experiences.\nHowever, most of the current breakthroughs have focused on episodic memory and leave the association of memory largely unstudied. Previous work usually uses a tabular-like memory, and experiences are stored as unrelated items. Studies in psychology and cognitive neuroscience (Kohonen, 2012; Anderson & Bower, 2014) discover that associative memory in the hippocampus plays a vital role in human activities, which associates past experiences by remembering the relationships between them. Inspired by this, we propose a novel associative memory based reinforcement learning framework to improve the sample-efficiency of reinforcement learning, called Episodic Reinforcement Learning with Associative Memory (ERLAM), which associates related experience trajectories\n∗Equal Contribution\nto enable reasoning effective strategies. We store the best historical values for memorized states like episodic memory, and maintain a graph on top of these states based on state transitions at the same time. Then we develop an efficient reverse-trajectory propagation strategy to allow the values of new experiences to propagate to all memory items through the graph rapidly. Finally, we use the fast-adjusted non-parametric high values in associative memory as early guidance for a parametric RL agent so that it can rapidly latch on states that previously yield high returns instead of waiting for many slow gradient updates.\nTo illustrate the superiority of the associative memory in reinforcement learning, consider a robot exploring in a maze to seek out the apple (place G), as shown in Figure 1. It collects two trajectory experiences starting from place A and B (i.e., blue dash line A-D-C and B-D-G), respectively. All the states of trajectory A-D-C receive no reward because the agent terminates at a non-reward state (place C). While in trajectory B-D-G, the final non-zero reward of catching an apple (place G) back-propagates to all the states of this trajectory. 
Episodic memory keeps the higher value of the two trajectories at the intersection (place D) when taking actions toward the lower-right corner, but the other states in trajectory A-D are still 0. If an episodic memory based robot starts from place A again, it will wander around A because there are no positive values indicating the way to the goal. Thus, based on episodic memory, the robot may eventually take a policy like the green line (A-B-D-G) after multiple attempts. However, if the robot adopts associative memory, the high value at place D collected from trajectory B-D-G will be further propagated to the start point A, and thus the robot can correctly take the red-line policy (A-D-G).\nTo some extent, our associative memory is equivalent to an automatic augmentation of counterfactual combinatorial trajectories in memory. Thus, our framework significantly improves the sample efficiency of reinforcement learning. Comparisons with state-of-the-art episodic reinforcement learning methods show that ERLAM is substantially more sample efficient for general settings of reinforcement learning. In addition, our associative memory can be used as a plug-and-play module and is complementary to other reinforcement learning models, which opens the avenue for further research on associative memory based reinforcement learning." }, { "heading": "2 BACKGROUND", "text": "In the framework of reinforcement learning (Sutton & Barto, 1998), an agent learns a policy to maximize its cumulative rewards by exploring in a Markov Decision Process (MDP) environment. An MDP is defined by a tuple (S, A, P, R, γ), where S is a finite set of states, A is a finite set of actions available to the agent, P : S × A × S → R defines the transition probability distribution, R is the reward function, and γ ∈ (0, 1] is the discount factor. At each time step t, the agent observes state s_t ∈ S, selects an action a_t ∈ A according to its policy π : S → A, and receives a scalar reward r_t. In the finite-horizon setting, the accumulated discounted return is calculated as R_t = ∑_{k=0}^{T} γ^k r_{t+k}, where T is the episode length, and the goal of the agent is to maximize the expected return for each state s_t.\nThe state-action value function Q^π(s, a) = E[R_t | s_t = s, a_t = a] is the expected return for executing action a in state s and following policy π afterwards. DQN (Mnih et al., 2015) parameterizes this action-value function by a deep neural network Q_θ(s, a) and uses Q-learning (Watkins & Dayan, 1992) to learn which action a_t is best to take in each state s_t at time step t. The parameters θ of the value network are optimized by minimizing the L2 difference between the network's output Q_θ(s, a) and the Q-learning target y_t = r_t + γ max_a Q_θ̂(s_{t+1}, a), where θ̂ are the parameters of a target network that is an older version of the value network and is updated periodically. DQN uses an off-policy learning strategy, which samples (s_t, a_t, r_t, s_{t+1}) tuples from a replay buffer for training.\nDQN, as a typical parametric reinforcement learning method, suffers from sample inefficiency because of slow gradient-based updates. Thus episodic reinforcement learning has been proposed to speed up the learning process with a non-parametric episodic memory. Episodic reinforcement learning enables fast learning by modeling hippocampal instance-based learning. The key idea is to store good past experiences in a tabular, non-parametric memory and rapidly latch onto past successful policies when encountering similar states instead of waiting for many steps of optimization."
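As a concrete illustration of the Q-learning target described above, here is a minimal PyTorch sketch of the DQN loss; names such as q_net, target_net, and batch are our own placeholders, not code from the paper.

import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    # batch: tensors (s, a, r, s_next, done) sampled from the replay buffer
    s, a, r, s_next, done = batch
    # Q_theta(s, a) for the actions actually taken
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # y_t = r_t + gamma * max_a Q_theta_hat(s_{t+1}, a); zero bootstrap at terminals
        y = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    # L2 difference between the network output and the Q-learning target
    return F.mse_loss(q_sa, y)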
}, { "heading": "3 RELATED WORK", "text": "Deep Reinforcement Learning Our method is closely related to DQN (Mnih et al., 2015). As the seminal work of deep reinforcement learning, DQN learns a deep neural network for state-action value function by gradient back-propagation and conducts parametric control. Following this line, a large number of extensions have been proposed to improve the learning efficiency of the parametric model. Double DQN (Van Hasselt et al., 2016) alleviates the over-estimation issue of Q-Network. Dueling network (Wang et al., 2015) separates Q-Network into two streams which predict state value and advantage value respectively and achieves better generalization across actions. Prioritized experience replay (Schaul et al., 2015b) changes the sampling priority of each training sample according to its learning error. Apart from these prior improvements, many algorithms have been proposed to accelerate reward propagation and backup mechanism. Optimality Tightening method(He et al., 2016) combines the strength of DQN with a constrained optimization approach to rapidly propagate close-by rewards. Q∗(λ) (Harutyunyan et al., 2016) and Retrace(λ) (Munos et al., 2016) incorporate on-policy samples into off-policy learning targets. Noisy Net (Fortunato et al., 2017) adds noise to the parametric model during learning to improve the exploration ability. Distributional RL (Bellemare et al., 2017) learns the value function as a full distribution instead of a expected value. Unlike these works, we focus on combining non-parametric memory and parametric model in this paper. Thus our method is complementary to these prior extensions and can be combined with them seamlessly.\nEpisodic Reinforcement Learning Our work is also related to episodic reinforcement learning. Model-free episodic control (Blundell et al., 2016) uses a completely non-parametric model that keeps the best Q values of states in a tabular-based memory and replays the sequence of actions that so far yielded the highest return from a given start state. At the end of each episode, the Q values in memory are updated by the greater of the existing values and the accumulated discounted returns in the current episode. In the execution stage, the agent selects actions according to a k-nearestneighbors lookup in the memory table. Recently, several extensions have been proposed to integrate episodic control with parametric DQN. Neural episodic control (Pritzel et al., 2017) develops end-to-end episodic control by a differentiable neural dictionary to generate semi-tabular representation as slow-changing keys and then retrieves fast-updating values by context-based lookup for action selection. To better leverage the trajectory nature of experience, ephemeral value adjustments method (Hansen et al., 2018) proposes to further leverage trajectory information from replay buffer to propagate value through time and produce trajectory-centric value estimates. Our method differs from EVA in that we associate memory by a graph, and thus we can leverage not only intra-episode but also inter-episode information. Episodic memory deep q-networks (Lin et al., 2018) distills the information of episodic memory into a parametric model by adding a regularization term in the objective function and significantly boosts up the performance of DQN. 
Unlike these prior works, which adopt either tabular or semi-tabular memory, our work builds a graph on memory items based on their relationships to form an associative memory.\nGraph Based Methods in Deep Reinforcement Learning Recently, several works have also been proposed to use graphs for planning in deep reinforcement learning. Eysenbach et al. (2019) builds a directed graph directly on top of the states in the replay buffer and runs graph search to find a sequence of waypoints, leading to many easier sub-tasks and thus improving learning efficiency. Huang et al. (2019) abstracts the state space as a small-scale map, which allows it to run high-level planning using a pairwise shortest path algorithm. Unlike these prior works that use graphs for planning, our method reorganizes episodic memory by a graph to allow faster reward propagation. In addition, these graph-based models rely on goal-conditioned RL (Kaelbling, 1993; Schaul et al., 2015a) and only demonstrate their performance on navigation-like problems, while our approach is intended for general RL settings.\nExploration Efficient exploration is a long-standing problem in reinforcement learning. Prior works have proposed guiding exploration based on criteria such as intrinsic motivation (Stadie et al., 2015), state-visitation counts (Tang et al., 2017), Thompson sampling and bootstrapped models (Chapelle & Li, 2011; Osband et al., 2016), optimism in the face of uncertainty (Kearns & Singh, 2002), and parameter-space exploration (Plappert et al., 2017; Fortunato et al., 2017). Recently, Oh et al. (2018) proposed self-imitation learning (SIL) and found that exploiting past good experiences can indirectly drive deep exploration. In their work, the agent imitates its own past decisions only when such decisions resulted in larger returns than expected. Like SIL, EMDQN (Lin et al., 2018) learns from episodic memory to replay past best decisions, thereby incentivizing exploration. In our method, we build associative memory through a graph, which enhances the exploitation of past good experiences and thus can indirectly encourage deeper exploration than EMDQN (Lin et al., 2018)." }, { "heading": "4 EPISODIC REINFORCEMENT LEARNING WITH ASSOCIATIVE MEMORY", "text": "" }, { "heading": "4.1 ASSOCIATING EPISODIC MEMORY AS A GRAPH", "text": "Similar to previous episodic reinforcement learning methods, we adopt an episodic memory to maintain the historically highest value Q_EC(φ(s), a) of each state-action pair, where φ is an embedding function that can be implemented as a random projection or a variational auto-encoder (VAE) (Kingma & Welling, 2013). When receiving a new state, the agent looks up the memory and updates the values of states according to the following equation:\nQ_EC(φ(s_t), a_t) ← max(Q_EC(φ(s_t), a_t), R_t) if (φ(s_t), a_t) ∈ Q_EC, and Q_EC(φ(s_t), a_t) ← R_t otherwise. (1)\nHowever, episodic memory stores states as unrelated items and does not make use of the relationships between these items. To fully exploit the information in episodic memory, we further build a directed graph G on top of the items in the episodic memory to form an associative memory, as shown in Figure 2. In this graph, each node corresponds to a memory item that records the embedded vector φ(s) of a state, and we leverage transitions between states to bridge the nodes. The graph is defined as\nG = (V, E), V = {φ(s)}, E = {s → s′ | (s, a, s′) is stored in memory}. (2)\nGiven a sampled trajectory, we temporarily add each state to the graph. We add directed edges from the given state to every previously memorized state that is its successor under a certain action. Our associative memory reorganizes the episodic memory and connects, by a graph, these fragmented states that previously yielded high returns. We rewrite the stored values Q_EC(φ(s), a) as Q_G(φ(s), a) in our graph-augmented episodic memory. In addition, we adopt a strategy of discarding the least recently used items when the memory is full." },
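As a concrete illustration of equations 1 and 2, a minimal Python sketch of the graph-augmented memory follows; the class and method names are our own hypothetical choices, not from the authors' code.

from collections import defaultdict

class AssociativeMemory:
    """Episodic values Q_G(phi(s), a) plus a directed transition graph (Eqs. 1-2)."""
    def __init__(self):
        self.q = defaultdict(dict)       # state_key -> {action: best value seen}
        self.edges = defaultdict(list)   # state_key -> [(action, reward, next_state_key)]

    def update_value(self, s_key, a, ret):
        # Eq. 1: keep the maximum of the stored value and the new Monte Carlo return R_t
        self.q[s_key][a] = max(self.q[s_key].get(a, float("-inf")), ret)

    def add_transition(self, s_key, a, r, next_s_key):
        # Eq. 2: add a directed edge s -> s' for the stored transition (s, a, s')
        self.edges[s_key].append((a, r, next_s_key))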
{ "heading": "4.2 PROPAGATING VALUES THROUGH ASSOCIATIVE MEMORY", "text": "Typical deep RL algorithms sample experience tuples uniformly from the replay buffer to update the value function. However, this way of sampling tuples neglects the trajectory nature of an agent's experience (i.e., one tuple occurs after another, and thus information about the following state should be quickly propagated into the current state). EVA (Hansen et al., 2018) encourages faster value propagation by introducing a trajectory-centric planning (TCP) algorithm. Nonetheless, EVA only propagates value through the current episode, which we refer to as intra-episode propagation. Our insight here is that one state might appear in different trajectories, and such join points can help connect different trajectories. Therefore, we explicitly build the graph between states from different trajectories in memory, which allows inter-episode value propagation.\nSince the graph over states is complicated (e.g., not a tree structure), value propagation over such a graph is always slow. To accelerate the propagation process, we propagate values using the sequential property. The pseudo-code of value propagation is shown in Algorithm 1. Our general idea is to update the values of the graph in the reverse order of each trajectory. Specifically, when adding a new state to the memory, we record the sequential step ID t of the state in the current trajectory. For memory associating, we first sort the elements in memory by their sequential step IDs in descending order and propagate the values from states with large sequential step IDs to those with small ones for several iterations until the Q_G values converge. At each update, we get all successor state-action pairs (s′, a′) of the current pair (s, a) and the current reward r according to the graph G, and apply a max operation over the successor action a′ to propagate the values to the current state-action pair. Formally, our graph-augmented memory is updated as follows:\nQ_G(φ(s), a) ← r + γ max_{a′} Q_G(φ(s′), a′). (3)\nAlgorithm 1 Value propagation in Associative Memory\nh: embedded vector of state, h = φ(s)\nG ← Sort nodes in graph G by sequential step ID t in descending order\nrepeat\nfor m = 1 . . . |G| do\nGet current state-action pair (s, a) = (s_m, a_m)\nGet successor state embedding s′ and action a′ using graph G\nUpdate graph-augmented memory using Eq. 3\nend for\nuntil Q_G converges\nSince most states at the beginning are similar across different episodes, our reverse-order updating strategy can efficiently propagate all the values of the graph. In addition, as we show in Theorem 1, our graph-based value propagation algorithm converges to a unique optimal point. The proof is given in Appendix A.
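A compact sketch of the reverse-order propagation in Algorithm 1, building on the hypothetical AssociativeMemory class above; the fixed iteration budget is our own simplification of the "until Q_G converges" test, and taking the maximum of the old and propagated values is a conservative variant that preserves the stored episodic returns.

def propagate_values(q, edges, step_id, gamma=0.99, n_iters=10):
    """Reverse-order value propagation over the memory graph (Algorithm 1).

    q:       dict state_key -> {action: Q_G(phi(s), a)}
    edges:   dict state_key -> list of (action, reward, next_state_key)
    step_id: dict state_key -> sequential step ID within its trajectory
    """
    # Sort nodes by sequential step ID in descending order
    order = sorted(edges.keys(), key=lambda k: step_id[k], reverse=True)
    for _ in range(n_iters):  # repeat for several iterations (fixed budget here)
        for s in order:
            for a, r, s_next in edges[s]:
                if s_next in q and q[s_next]:
                    # Eq. 3: Q_G(s, a) <- r + gamma * max_a' Q_G(s', a')
                    q[s][a] = max(q[s].get(a, float("-inf")),
                                  r + gamma * max(q[s_next].values()))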
Theorem 1. Denote the Bellman backup operator in Equation 3 as B : R^{|S|×|A|} → R^{|S|×|A|}, consider a mapping Q_0 : S × A → R^{|S|×|A|} with |S| < ∞ and |A| < ∞, and define Q_{k+1} = B Q_k. Repeated application of the operator B to our graph-based state-action value estimate Q̂_G converges to a unique optimal value Q*_G.\nIn previous episodic reinforcement learning, with no graph built, only the values of exactly the same or similar states can be updated. This is because the typical update rule of episodic memory, shown in Eq. 1, neglects the relationships between states. Episodic memory does not leverage the information of the edges E in our graph G. Consequently, stored values in episodic memory often violate Bellman's equation. On the contrary, our associative memory allows efficient value propagation through the edges of the graph to compute more accurate values for each state." }, { "heading": "4.3 LEARNING WITH ASSOCIATIVE MEMORY", "text": "Building associative memory can be viewed as a way of augmenting counterfactual experiences. As shown in Figure 2, the same states might appear in N > 1 trajectories. Vanilla episodic memory maps such states to the highest values among the N trajectories, while our associative memory regards such states as join points to connect different trajectories, leading to N² trajectories in total. This is equivalent to sampling more combinatorial trajectories from the environment and thus can significantly improve the sample efficiency of RL algorithms.\nOur associative memory can be applied to both the learning and control phases. In this paper, we use our associative memory as guidance for the learning of the Q function. The overall framework is shown in Figure 3. Specifically, we use associative memory as a regularization term of the objective function to supervise the learning of the Q-network. The Q-network is learned by minimizing the following objective function:\nL_θ = E_{(s,a,s′,r)∼D} [ ( r + γ max_a Q_θ̂(s′, a) − Q_θ(s, a) )² + λ ( Q_G(φ(s), a) − Q_θ(s, a) )² ], (4)\nwhere λ is the weight of the regularization term and θ represents the parameters of the parametric Q-network. Similar to DQN (Mnih et al., 2015), we also adopt a target network parameterized by θ̂ to stabilize the learning process. Through the combination of the parametric and non-parametric terms, we can efficiently guide the learning of a conventional Q-network with the fast-adjusted high values in associative memory, so that the agent can rapidly latch onto strategies that previously yielded high returns instead of waiting for many steps of slow gradient updates.
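A sketch of the regularized objective in equation 4, again in PyTorch; mem_q is a placeholder tensor holding the retrieved Q_G(phi(s), a) values, and lam corresponds to λ.

import torch
import torch.nn.functional as F

def erlam_loss(q_net, target_net, batch, mem_q, lam=0.3, gamma=0.99):
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        td_target = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    # Eq. 4: standard TD error plus a regularizer pulling Q_theta toward Q_G
    td_loss = F.mse_loss(q_sa, td_target)
    memory_loss = F.mse_loss(q_sa, mem_q)
    return td_loss + lam * memory_loss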
The pseudo code of our method is shown in Algorithm 2.\nAlgorithm 2 ERLAM: Episodic Reinforcement Learning with Associative Memory\nD: Replay buffer\nG: Graph (associative memory)\nT_e: Trajectory length of the e-th episode\nK: Associate frequency\nfor episode number e = 1 . . . E do\nfor t = 1 . . . T_e do\nReceive observation s_t from the environment with state embedding h_t = φ(s_t)\na_t ← ε-greedy policy based on Q_θ(s_t, a)\nTake action a_t, receive reward r_t and next state s_{t+1}\nAppend (s_t, a_t, r_t, s_{t+1}) to D\nif t mod update_freq == 0 then\nSample training experiences (s, a, r, s′) from D\nRetrieve Q_G(φ(s), a) from associative memory\nUpdate parameter θ using Eq. 4\nend if\nend for\nfor t = T_e . . . 1 do\nR_t ← r_t + γ R_{t+1} if t < T_e; R_t ← r_t if t = T_e\nAppend (h_t, a_t, r_t, t, R_t) to G if (h_t, a_t) ∉ G\nUpdate Q_G using Eq. 1 if (h_t, a_t) ∈ G\nend for\nif e mod K == 0 then\nRun Algorithm 1 to update Q_G\nend if\nend for" }, { "heading": "4.4 CONNECTION TO GRAPH-BASED DEEP REINFORCEMENT LEARNING", "text": "When the general RL setting used in our approach degenerates to the navigation-like task setting that is usually adopted by goal-conditioned RL (Kaelbling, 1993; Schaul et al., 2015a), the update target of associative memory in Eq. 3, y = r + γ max_{a′} Q_G(φ(s′), a′), can be rewritten as\ny = r if s′ is a terminal state, and y = γ max_{a′} Q_G(φ(s′), a′) otherwise. (5)\nOptimizing with the target in Eq. 5 is equivalent to finding the shortest path in the graph of all states. In this case, Algorithm 1 is analogous to the Bellman-Ford algorithm (Bellman, 1958), for which the value is proved to converge in a limited number of iterations. In the context of goal-conditioned RL, some graph-based methods (Huang et al., 2019; Eysenbach et al., 2019) also calculate shortest paths. They focus on a graph of waypoints learned by goal-conditioned RL instead of memorized states that previously yielded high returns. In addition, they use a parametric approach for value approximation, while we develop a non-parametric approach to improve the sample efficiency of a parametric RL agent." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 EXPERIMENT SETTING", "text": "We follow the same settings for the network architecture and all hyper-parameters as DQN (Mnih et al., 2015). The raw images are resized to 84 × 84 grayscale images s_t, and 4 consecutive frames are stacked into one state. The Q-value network alternates convolutions and ReLUs, followed by a 512-unit fully connected layer and an output layer whose size is equal to the number of actions in each game. Denote by Conv(W, F, S) a convolutional layer with W filters, kernel size F, and stride S. The three convolutional layers are then Conv(32, 8, 4), Conv(64, 4, 2), and Conv(64, 3, 1). We use the RMSProp algorithm (Tieleman & Hinton, 2012) with learning rate α = 0.00025 for gradient descent training. The discount factor γ is set to 0.99 for all games. We anneal ε in the ε-greedy policy from 1.0 to 0.1 during training, while fixing ε = 0.05 during evaluation.\nFor the hyper-parameters of associative memory, we set the value of λ to 0.1 and the associate frequency K to 10 in the navigation domain, Monster Kong. In the Atari games, we use the same settings for all games: the value of λ is 0.3, and the associate frequency K is 50. The memory size is set to 1 million. We use the random projection technique to project states into vectors of dimension d = 4. For efficient table lookup, we build a kd-tree over these low-dimensional vectors." },
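A sketch of the Q-network architecture described in Section 5.1, written in PyTorch; the layer sizes follow the Conv(W, F, S) specification in the text, while the module name and input scaling are our own choices.

import torch.nn as nn

class QNetwork(nn.Module):
    """DQN-style network: three conv layers, a 512-unit FC layer, and a Q-value head."""
    def __init__(self, n_actions):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),   # Conv(32, 8, 4)
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),  # Conv(64, 4, 2)
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),  # Conv(64, 3, 1)
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),  # 84x84 input -> 7x7 feature maps
            nn.Linear(512, n_actions),
        )

    def forward(self, x):
        return self.head(self.features(x / 255.0))  # assumes uint8 pixel input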
{ "heading": "5.2 RESULTS ON NAVIGATION DOMAIN", "text": "We first test our model on the navigation domain, which helps demonstrate the superiority of our algorithm and understand the contribution of associative memory. We use the video game Monster Kong from the Pygame Learning Environment (PLE) (Tasfi, 2016) to set up the navigation experiments. In this game, the goal of the agent is to approach the princess, with actions up, down, left, right, jump, and noop, from random starting positions. The agent wins, with an extra reward of +1, when touching the princess, and loses when hitting the thorns (silver triangles). We run ERLAM on three maps of Monster Kong (see Figure 4) and compare it with EMDQN and DQN.\nAs shown in Figure 5, ERLAM significantly outperforms EMDQN and DQN in sample efficiency. ERLAM with only 10M samples gains higher scores than EMDQN with 80M samples on maps MonsterKong2 and MonsterKong3. Then, we inspect the value estimates of the Q-networks and the stored values in memory to provide insight into our reinforcement learning results. We plot the average values of states in associative memory (orange line in the bottom row of Figure 5) during the training process of ERLAM. To better understand the contribution of the value propagation process in associative memory, we maintain a memory without value propagation (which amounts to episodic memory, shown as the green line in the bottom row of Figure 5) in the meantime, and compare its state-action values to those of associative memory. As expected, the values after value propagation in associative memory grow higher, indicating that associative memory provides a better non-parametric lower bound on the Q value than episodic memory. Values estimated by associative memory are closer to the true values of the optimal policy (black dashed line) and capable of guiding the learning of the Q-network (blue line). We further visualize and compare the execution policies according to associative memory and episodic memory to gain a deeper understanding of their connections. We study a case in Figure 6. We observe that the policy provided by associative memory (yellow dashed line) is exactly the combination of two policies in episodic memory (blue line and red line), and such a combinatorial trajectory is not a real trajectory in the replay buffer. This result suggests that the value propagation in associative memory enables automatic augmentation of counterfactual combinatorial trajectories, which accounts for the improvement of sample efficiency in ERLAM.\n5.3 RESULTS ON ATARI GAMES\nTo further evaluate the sample efficiency of ERLAM on a diverse set of games, we conduct experiments on the benchmark suite of Atari games from the Arcade Learning Environment (ALE) (Bellemare et al., 2013), which offers various scenes to test RL algorithms over different settings. We largely follow the training and evaluation protocol of (Mnih et al., 2015). We train our agents for 10 epochs, each containing 1 million frames, thus 10 million frames in total. For each game, we evaluate our agent at the end of every epoch for 0.5 million frames, with each episode up to 18000 frames, and start the game with up to 30 no-op actions to provide random starting positions for the agent.\nIn our experiments, we compare ERLAM with the episodic reinforcement learning baselines MFEC (Blundell et al., 2016), NEC (Pritzel et al., 2017), EMDQN (Lin et al., 2018), and EVA (Hansen et al., 2018), as well as an ablation (i.e., DQN with no associative memory). MFEC directly uses the non-parametric episodic memory for action selection, while NEC, EMDQN, and EVA combine a non-parametric episodic memory and a parametric Q-network. Different from previous work, ERLAM adopts associative memory to guide the learning of a Q-network.\nWe tested ERLAM on 25 popular and challenging Atari games. To evaluate our approach, we follow Wang et al. (2015) and measure the percentage improvement in score over the better of the human and DQN agent scores for both ERLAM and EMDQN:\n(Score_Agent − Score_DQN) / (max{Score_Human, Score_DQN} − Score_Random). (6)
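The normalized improvement in equation 6 can be written as a small helper; this is a sketch only, with plain floats standing in for the per-game scores.

def normalized_improvement(agent, dqn, human, random):
    # Eq. 6: improvement over the better of human and DQN scores, in percent
    return 100.0 * (agent - dqn) / (max(human, dqn) - random)

# example with made-up scores: normalized_improvement(150.0, 100.0, 120.0, 10.0) -> ~45.45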
To test the sample efficiency of our method, we limit our training data to 10 million frames and compare with state-of-the-art results on episodic RL (i.e., EMDQN (Lin et al., 2018)), which are trained with 40 million frames and reported in the original paper. The results are shown in Figure 7. We find that even though our agent uses 4 times fewer training samples than EMDQN, ERLAM still outperforms EMDQN on 17 games. Overall, ERLAM significantly outperforms all baselines on most games. This suggests that associative memory can efficiently guide the learning of a parametric RL agent, and our framework of combining associative memory with parametric RL can achieve significantly better sample efficiency than existing RL algorithms. For the games where ERLAM does not perform very well, we summarize the reasons as follows. First, ERLAM is good at improving sample efficiency in near-deterministic environments but may suffer from over-estimation in highly stochastic environments, such as Tutankham. Second, since representation learning is not the focus of this paper, we simply use a naive random projection as the state representation in memory. Random projection is only used for dimension reduction and does not contain useful high-level features or knowledge (e.g., objects and relations). Thus in some games with rarely revisited states, there are not enough join points in our graph, and our algorithm does not perform well, such as on FishingDerby and Jamesbond. In addition, we compare the overall performance (mean and median) of ERLAM with the other methods in Table 1, which also shows that ERLAM has the best performance.\nTo gain a better understanding of our superior performance, we further plot learning curves (Figure 8) on four games, which include three generally good cases (Atlantis, BattleZone, StarGunner) and a bad case (BankHeist), to demonstrate when associative memory works extremely well and when it is not particularly effective. In addition, we plot the average values of states in memory (Figure 8) to better reveal the performance differences in game scores. Across most games, ERLAM learns significantly faster than EMDQN and DQN, but ERLAM has only slightly better performance than EMDQN on BankHeist. The reasons are twofold. Firstly, there are more crossed experiences on Atlantis, BattleZone, and StarGunner than on BankHeist; thus on the first three games, the values computed by associative memory are significantly larger than those in episodic memory. Secondly, we observe that the background objects in BankHeist have highly changeable appearances and complex behaviors, which are intractable for memory-based methods (e.g., MFEC, NEC, EMDQN, and ERLAM), especially with a simple random projection embedding function for state feature abstraction (we also discuss this in the Conclusion section). This also accounts for why ERLAM and EMDQN perform similarly to DQN on this game.\nWe also add experiments to verify that our superior performance benefits from associative memory rather than from the representations (e.g., random projection). As shown in Appendix Figure 9, DQN with only random projections as inputs performs much worse than ERLAM and the vanilla DQN, which suggests that it is associative memory that matters." },
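For concreteness, the random projection embedding φ used for memory keys above can be sketched as follows; the projection dimension d = 4 follows Section 5.1, while the input shape, fixed seed, and the rounding used to make keys hashable are our own illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
proj = rng.normal(size=(84 * 84 * 4, 4))  # fixed Gaussian random projection to d = 4

def phi(state, decimals=2):
    """Embed a stacked frame into a low-dimensional key for memory lookup."""
    h = state.reshape(-1).astype(np.float32) @ proj
    return tuple(np.round(h, decimals))  # hashable key; a kd-tree could be used instead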
{ "heading": "6 CONCLUSION", "text": "In this paper, we propose a biologically inspired sample-efficient reinforcement learning framework, called Episodic Reinforcement Learning with Associative Memory (ERLAM). Our method explicitly organizes memorized states as a graph. We develop an efficient reverse-trajectory propagation strategy that allows the values of new experiences to propagate rapidly to all memory items through the graph. Experiments in the navigation domain and on Atari games demonstrate that our proposed framework can significantly improve the sample efficiency of current reinforcement learning algorithms.\nIn the future, there are some interesting research directions that can be pursued within our proposed framework. Firstly, in this paper, following the work of Blundell et al. (2016) and Lin et al. (2018), our state embedding function φ is implemented as a random projection. It is possible to incorporate advanced representation learning approaches that capture useful features into our framework to support more efficient memory retrieval and further boost performance. Secondly, existing episodic reinforcement learning algorithms mainly focus on value-based methods. It would be interesting future work to extend episodic memory to policy gradient methods. Thirdly, we instantiate our associative memory in the learning phase in this paper. However, associative memory can also be used in explicit episodic control to further enhance exploitation. Fourthly, at the current stage, ERLAM, as a kind of episodic RL approach, is only good at improving sample efficiency in near-deterministic environments. To deal with completely stochastic environments, our model can potentially be extended by storing the distribution of Q values (Bellemare et al., 2017; Dabney et al., 2018) instead of the maximum Q value in the associative memory." }, { "heading": "A THEORETICAL CONVERGENCE", "text": "Proof. Note that our graph-based value propagation is similar to the proof of value iteration (Bellman, 1966; Bertsekas et al., 1995; Sutton & Barto, 2011). For any estimate Q̂_G of our graph-based action-value function,\nB Q̂_G(s, a) = R(s, a) + γ max_{a′∈A} ∑_{s′∈S} P_G(s′|s, a) Q̂_G(s′, a′),\nwhere P_G(s′|s, a) defines the transition probability given graph G. For any action-value function estimates Q̂¹_G, Q̂²_G,\n|B Q̂¹_G(s, a) − B Q̂²_G(s, a)| = γ | max_{a′∈A} ∑_{s′∈S} P_G(s′|s, a) Q̂¹_G(s′, a′) − max_{a′∈A} ∑_{s′∈S} P_G(s′|s, a) Q̂²_G(s′, a′) |\n≤ γ max_{a′∈A} | ∑_{s′∈S} P_G(s′|s, a) Q̂¹_G(s′, a′) − ∑_{s′∈S} P_G(s′|s, a) Q̂²_G(s′, a′) |\n≤ γ max_{a′∈A} ∑_{s′∈S} P_G(s′|s, a) |Q̂¹_G(s′, a′) − Q̂²_G(s′, a′)|\n≤ γ max_{s∈S, a∈A} |Q̂¹_G(s, a) − Q̂²_G(s, a)|.\nSo the contraction property of the Bellman operator holds:\nmax_{s∈S, a∈A} |B Q̂¹_G(s, a) − B Q̂²_G(s, a)| ≤ γ max_{s∈S, a∈A} |Q̂¹_G(s, a) − Q̂²_G(s, a)|. (7)\nFor the fixed point Q*_G, we have\nmax_{s∈S, a∈A} |B Q̂_G(s, a) − B Q*_G(s, a)| ≤ γ max_{s∈S, a∈A} |Q̂_G(s, a) − Q*_G(s, a)| ⟹ Q̂_G → Q*_G. (8)\nTherefore, we prove that our graph-based value propagation algorithm converges to the unique optimal value." },
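As a complement to the proof, a standard consequence of the contraction property in equation 7 (a textbook fact about γ-contractions, not stated explicitly above) is the geometric convergence rate:

\[
\|Q_k - Q^*_{\mathcal{G}}\|_{\infty}
= \|B Q_{k-1} - B Q^*_{\mathcal{G}}\|_{\infty}
\le \gamma \,\|Q_{k-1} - Q^*_{\mathcal{G}}\|_{\infty}
\le \cdots \le \gamma^{k}\,\|Q_{0} - Q^*_{\mathcal{G}}\|_{\infty}
\longrightarrow 0 \quad (k \to \infty).
\]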
{ "heading": "B RAW SCORES ON ATARI GAMES", "text": "Table 2: Raw scores on Atari games at 10 million frames. All agents are trained using 10 million frames, except for EMDQN, which is trained with 40 million frames.\nGame | DQN | A3C | Prior.DQN | MFEC | NEC | EVA | EMDQN(40M) | ERLAM\nAlien | 634.80 | 415.50 | 800.50 | 1717.70 | 3460.60 | 1007.93 | 1662.00 | 2070.85\nAmidar | 126.80 | 96.30 | 99.10 | 370.90 | 811.30 | 231.19 | 374.10 | 980.47\nAssault | 1489.50 | 720.80 | 1339.90 | 510.20 | 599.90 | 550.77 | 2566.80 | 3230.18\nAtlantis | 14210.50 | 36383.00 | 12579.10 | 95499.40 | 51208.00 | 180367.20 | 290953.30 | 359530.00\nBankHeist | 29.30 | 15.80 | 70.10 | 163.70 | 343.30 | 4022.45 | 348.00 | 702.92\nBattleZone | 6961.00 | 2354.20 | 13500.00 | 19053.60 | 13345.50 | 14000.47 | 28300.00 | 33095.24\nBeamRider | 3741.70 | 450.20 | 3249.60 | 858.80 | 749.60 | 1914.30 | 5980.90 | 7116.67\nBoxing | 31.30 | 2.50 | 64.70 | 10.70 | 72.80 | 58.43 | 89.30 | 87.77\nChopperCommand | 827.20 | 1036.70 | 1426.50 | 3075.60 | 5070.30 | 1612.93 | 3106.70 | 4172.83\nCrazyClimber | 66061.60 | 70103.50 | 76574.10 | 9892.20 | 34344.00 | 90656.27 | 107038.70 | 106538.71\nDefender | 2877.90 | 4596.00 | 3486.40 | 10052.80 | 6126.10 | 2890.44 | 14408.00 | 705833.33\nDemonAttack | 5541.90 | 346.80 | 6503.60 | 1081.80 | 641.40 | 504.52 | 5603.10 | 11056.75\nEnduro | 364.90 | 0.00 | 1125.80 | 0.00 | 1.40 | 1106.35 | 659.00 | 912.73\nFishingDerby | -81.60 | -89.50 | -48.20 | -90.30 | -72.20 | -68.10 | 8.40 | -30.20\nFrostbite | 339.10 | 218.90 | 711.30 | 925.10 | 2747.40 | 1005.44 | 596.30 | 3193.83\nHero | 1050.70 | 4598.20 | 5164.50 | 14767.70 | 16265.30 | 12075.89 | 7247.80 | 13615.00\nJamesbond | 165.90 | 31.50 | 203.80 | 244.70 | 376.80 | 252.18 | 586.70 | 518.42\nKrull | 6015.10 | 3627.60 | 6700.70 | 4555.20 | 5179.20 | 4030.04 | 7798.30 | 7755.80\nKungFuMaster | 17166.10 | 6634.60 | 21456.20 | 12906.50 | 30568.10 | 25005.15 | 23890.00 | 20353.49\nRiverraid | 3144.90 | 2312.60 | 4871.80 | 4195.00 | 5498.10 | 4026.74 | 7728.30 | 8138.66\nRoadRunner | 7285.40 | 759.90 | 24746.60 | 5432.10 | 12661.40 | 28194.17 | 27856.60 | 37318.63\nRobotank | 14.60 | 2.40 | 8.50 | 7.30 | 11.10 | 15.13 | 5.30 | 25.44\nSeaquest | 618.70 | 514.10 | 1192.20 | 711.60 | 1015.30 | 1714.15 | 4235.90 | 2693.96\nStarGunner | 604.80 | 613.60 | 1131.40 | 14843.90 | 1171.40 | 2006.22 | 23933.30 | 9432.97\nTutankham | 148.70 | 108.30 | 194.00 | 86.30 | 121.60 | 171.33 | 148.00 | 146.94\nYarsRevenge | 7614.10 | 9953.00 | 9228.50 | 5956.70 | 21490.50 | 11010.90 | 13236.70 | 14259.26\nFigure 9: Learning curves on 10 million frames comparing ERLAM, DQN with random projection, and vanilla DQN." } ]
2021
Episodic Reinforcement Learning with Associative Memory
SP:24d509d2318c32e01958afb57b50ec166fdb872f
[ "This paper considers the autoencoder model combining the usual information bottleneck and the Gaussian mixture model (GMM). Using an approximation to deal with GMMs, the authors derive a bound on the cost function generalizing the ELBO. The performance of the proposed method is tested on three benchmark datasets and compared with existing methods combining VAE with GMM.", "The author(s) posit a Mixture of Gaussian's prior for a compressed latent space representation of high-dimensional data (e.g. images and documents). They propose fitting this model using the Variational Information Bottleneck paradigm and explicate its derivation and tie it to the variational objective used by similar models. They empirically showcase their model and optimization methodology on the MNIST, STL-10, and Reuters10k benchmarks." ]
In this paper, we develop an unsupervised generative clustering framework that combines the variational information bottleneck and the Gaussian Mixture Model. Specifically, in our approach we use the variational information bottleneck method and model the latent space as a mixture of Gaussians. We derive a bound on the cost function of our model that generalizes the evidence lower bound (ELBO), and provide a variational inference type algorithm that allows computing it. In the algorithm, the coders' mappings are parametrized using neural networks, and the bound is approximated by Markov sampling and optimized with stochastic gradient descent. Numerical results on real datasets are provided to support the efficiency of our method.
[]
[ { "authors": [ "Alexander A. Alemi", "Ian Fischer", "Joshua V. Dillon", "Kevin Murphy" ], "title": "Deep variational information bottleneck", "venue": "In Proceedings of the 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Adam Coates", "Andrew Ng", "Honglak Lee" ], "title": "An analysis of single-layer networks in unsupervised feature learning", "venue": "In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, pp", "year": 2011 }, { "authors": [ "A.P. Dempster", "N.M. Laird", "D.B. Rubin" ], "title": "Maximum likelihood from incomplete data via the EM algorithm", "venue": "Journal of the Royal Statistical Society,", "year": 1977 }, { "authors": [ "Nat Dilokthanakul", "Pedro A.M. Mediano", "Marta Garnelo", "Matthew C.H. Lee", "Hugh Salimbeni", "Kai Arulkumaran", "Murray Shanahani" ], "title": "Deep unsupervised clustering with Gaussian mixture variational autoencoders", "venue": null, "year": 2017 }, { "authors": [ "Chris Ding", "Xiaofeng He" ], "title": "K-means clustering via principal component analysis", "venue": "In Proceedings of the 21st International Conference on Machine Learning,", "year": 2004 }, { "authors": [ "Xifeng Guo", "Long Gao", "Xinwang Liu", "Jianping Yin" ], "title": "Improved deep embedded clustering with local structure preservation", "venue": "In Proceedings of the 26th International Joint Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "J.A. Hartigan", "M.A. Wong" ], "title": "Algorithm AS 136: A k-means clustering algorithm", "venue": "Journal of the Royal Statistical Society,", "year": 1979 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "John R. Hershey", "Peder A. Olsen" ], "title": "Approximating the Kullback Leibler divergence between Gaussian mixture models", "venue": "In Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing,", "year": 2007 }, { "authors": [ "Thomas Hofmann", "Bernhard Schölkopf", "Alexander J. Smola" ], "title": "Kernel methods in machine learning", "venue": "The Annals of Statistics,", "year": 2008 }, { "authors": [ "Zhuxi Jiang", "Yin Zheng", "Huachun Tan", "Bangsheng Tang", "Hanning Zhou" ], "title": "Variational deep embedding: An unsupervised and generative approach to clustering", "venue": "In Proceedings of the 26th International Joint Conference on Artificial Intelligence,", "year": 1965 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Proceedings of the 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Yann Lecun", "Leon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "In Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "David D. Lewis", "Yiming Yang", "Tony G. Rose", "Fan Li" ], "title": "A new benchmark collection for text categorization research", "venue": "The Journal of Machine Learning Research,", "year": 2004 }, { "authors": [ "Erxue Min", "Xifeng Guo", "Qiang Liu", "Gen Zhang", "Jianjing Cui", "Jun Long" ], "title": "A survey of clustering with deep learning: From the perspective of network architecture", "venue": "IEEE Access,", "year": 2018 }, { "authors": [ "Danilo J. 
Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In Proceedings of the 31st International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Sam Roweis" ], "title": "EM algorithms for PCA and SPCA", "venue": "In Advances in Neural Information Processing Systems", "year": 1997 }, { "authors": [ "Ravid Schwartz-Ziv", "Naftali Tishby" ], "title": "Opening the black box of deep neural networks via information", "venue": null, "year": 2017 }, { "authors": [ "Noam Slonim" ], "title": "The information bottleneck: Theory and applications", "venue": "PhD dissertation, Hebrew University,", "year": 2002 }, { "authors": [ "Naftali Tishby", "Fernando C. Pereira", "William Bialek" ], "title": "The information bottleneck method", "venue": "In Proceedings of the 37th Annual Allerton Conference on Communication, Control and Computing, pp", "year": 1999 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-SNE", "venue": "Journal of Machine Learning Research", "year": 2008 }, { "authors": [ "Svante Wold", "Kim Esbensen", "Paul Geladi" ], "title": "Principal component analysis", "venue": "Chemometrics and Intelligent Laboratory Systems,", "year": 1987 }, { "authors": [ "Junyuan Xie", "Ross Girshick", "Ali Farhadi" ], "title": "Unsupervised deep embedding for clustering analysis", "venue": "In Proceedings of the 33rd International Conference on Machine Learning,", "year": 2016 } ]
[ { "heading": null, "text": "In this paper, we develop an unsupervised generative clustering framework that combines variational information bottleneck and the Gaussian Mixture Model. Specifically, in our approach we use the variational information bottleneck method and model the latent space as a mixture of Gaussians. We derive a bound on the cost function of our model that generalizes the evidence lower bound (ELBO); and provide a variational inference type algorithm that allows to compute it. In the algorithm, the coders’ mappings are parametrized using neural networks and the bound is approximated by Markov sampling and optimized with stochastic gradient descent. Numerical results on real datasets are provided to support the efficiency of our method." }, { "heading": "1 INTRODUCTION", "text": "Clustering consists in partitioning a given data set into various groups (clusters) based on some similarity metric, such as Euclidean distance, L1 norm, L2 norm, L∞ norm, the popular logarithmic loss measure or others. The principle is that each cluster should contain elements of the data that are closer to each other than to any other element outside that cluster, in the sense of the defined similarity measure. If the joint distribution of the clusters and data is not known, one should operate blindly in doing so, i.e., using only the data elements at hand; and the approach is called unsupervised clustering. Unsupervised clustering is perhaps one of the most important tasks of unsupervised machine learning algorithms nowadays, due to a variety of application needs and connections with other problems.\nExamples of unsupervised clustering algorithms include the so-popular K-means (Hartigan & Wong, 1979) and expectation maximization (EM) (Dempster et al., 1977). TheK-means algorithm partitions the data in a manner that the Euclidean distance among the members of each cluster is minimized. With the EM algorithm, the underlying assumption is that the data comprises a mixture of Gaussian samples, namely a Gaussian Mixture Model (GMM); and one estimates the parameters of each component of the GMM while simultaneously associating each data sample to one of those components. Although they offer some advantages in the context of clustering, these algorithms suffer from some strong limitations. For example, it is well known that the K-means is highly sensitive to both the order of the data and scaling; and the obtained accuracy depends strongly on the initial seeds (in addition to that it does not predict the number of clusters or K-value). The EM algorithm suffers mainly from low convergence, especially for high dimensional data.\nRecently, a new approach has emerged that seeks to perform inference on a transformed domain (generally referred to as latent space), not the data itself. The rationale is that because the latent space often has fewer dimensions it is more convenient computationally to perform inference (clustering) on it rather than on the high dimensional data directly. A key aspect then is how to design a latent space that is amenable to accurate low-complex unsupervised clustering, i.e., one that preserves only those features of the observed high dimensional data that are useful for clustering while removing out all redundant or non-relevant information. 
Along this line of work, we can mention (Ding & He, 2004), which utilizes Principal Component Analysis (PCA) (Wold et al., 1987) for dimensionality reduction followed by K-means for clustering the obtained reduced-dimension data, or (Roweis, 1997), which uses a combination of PCA and the EM algorithm. Other works that use alternatives to linear PCA include Kernel PCA (Hofmann et al., 2008), which employs PCA in a non-linear fashion to maximize variance in the data.\nThe usage of deep neural networks (DNN) for unsupervised clustering of high-dimensional data in a lower-dimensional latent space has attracted considerable attention, especially with the advent of autoencoder (AE) learning and the development of powerful tools to train them using standard backpropagation techniques (Kingma & Welling, 2014; Rezende et al., 2014). Advanced forms include variational autoencoders (VAE) (Kingma & Welling, 2014; Rezende et al., 2014), which are generative variants of AEs that regularize the structure of the latent space, and the more general Variational Information Bottleneck (VIB) of (Alemi et al., 2017), which is a technique based on the Information Bottleneck method (Tishby et al., 1999) that seeks a better trade-off between accuracy and regularization than VAE via the introduction of a Lagrange-type parameter s, which controls that trade-off and whose optimization is similar to deterministic annealing (Slonim, 2002) or stochastic relaxation.\nIn this paper, we develop an unsupervised generative clustering framework that combines VIB and the Gaussian Mixture Model. Specifically, in our approach we use the variational information bottleneck method and model the latent space as a mixture of Gaussians. The encoder and decoder of the model are parametrized using neural networks (NN). The cost function is computed approximately by Markov sampling and optimized with stochastic gradient descent. Furthermore, the application of our algorithm to the unsupervised clustering of various datasets, including MNIST (Lecun et al., 1998), REUTERS (Lewis et al., 2004), and STL-10 (Coates et al., 2011), achieves better clustering accuracy than previous state-of-the-art algorithms. For instance, we show that our algorithm performs better than the variational deep embedding (VaDE) algorithm of (Jiang et al., 2017), which is based on VAE, performs clustering by maximizing the ELBO, and can be seen as a special case of our algorithm (Section 3.1). Our algorithm also generalizes the VIB of (Alemi et al., 2017), which models the latent space as an isotropic Gaussian, which is generally not expressive enough for the purpose of unsupervised clustering. Other related works, but of lesser relevance to the contribution of this paper, are the deep embedded clustering (DEC) of (Xie et al., 2016), the improved deep embedded clustering (IDEC) of (Guo et al., 2017), and (Dilokthanakul et al., 2017). For a detailed survey of clustering with deep learning, the reader may refer to (Min et al., 2018).\nTo the best of our knowledge, our algorithm performs the best in terms of clustering accuracy, using deep neural networks without any prior knowledge regarding the labels (except the usual assumption regarding the number of classes), compared to the state-of-the-art algorithms of this category.
In order to achieve the aforementioned accuracy, i) we derive a cost function that contains the IB hyper-parameter s, which controls the trade-off between over-fitting and generalization of the model, and we use an approximation of the KL divergence that avoids assumptions which do not hold at the beginning of the learning process and would lead to convergence issues; and ii) we evaluate the hyper-parameter s by following an annealing approach that improves both the convergence and the accuracy of the proposed algorithm." }, { "heading": "2 PROBLEM DEFINITION AND MODEL", "text": "Consider a dataset that is composed of N samples {x_i}_{i=1}^N, which we wish to partition into |C| ≥ 1 clusters. Let C = {1, . . . , |C|} be the set of all possible clusters, and let C designate a categorical random variable that lies in C and stands for the index of the actual cluster. If X is a random variable that models elements of the dataset, then X = x_i induces a probability distribution on C, which the learner should learn. Thus, mathematically, the problem is that of estimating the values of the unknown conditional probability P_{C|X}(·|x_i) for all elements x_i of the dataset. The estimates are sometimes referred to as the assignment probabilities.\nAs mentioned previously, we use the VIB framework and model the latent space as a GMM. The resulting model is depicted in Figure 1, where the parameters π_c, μ_c, Σ_c, for all values of c ∈ C, are to be optimized jointly with those of the employed NNs as instantiations of the coders. Also, the assignment probabilities are estimated based on the values of the latent space vector instead of the observations themselves, i.e., P_{C|U} = Q_{C|U}. In the rest of this section, we elaborate on the inference and generative network models for our method, which are illustrated below.\nFigure 2: Inference network (C → X → U, via P_{X|C} and P_{U|X}). Figure 3: Generative network (C → U → X, via Q_{U|C} and Q_{X|U})." }, { "heading": "2.1 INFERENCE NETWORK MODEL", "text": "We assume that an observed data point x is generated from a GMM with |C| components. Then, the latent representation u is inferred according to the following procedure:\n1. One of the components of the GMM is chosen according to a categorical variable C.\n2. The data point x is generated from the c-th component of the GMM, i.e., P_{X|C} ∼ N(x; μ̃_c, Σ̃_c).\n3. The encoder maps x to a latent representation u according to P_{U|X} ∼ N(μ_θ, Σ_θ).\n3.1. The encoder is modeled with a DNN f_θ, which maps x to the parameters of a Gaussian distribution, i.e., [μ_θ, Σ_θ] = f_θ(x).\n3.2. The representation u is sampled from N(μ_θ, Σ_θ).\nFor the inference network, shown in Figure 2, the following Markov chain holds:\nC − X − U. (1)" }, { "heading": "2.2 GENERATIVE NETWORK MODEL", "text": "Since the encoder extracts useful representations of the dataset and we assume that the dataset is generated from a GMM, we also model our latent space with a mixture of Gaussians. To do so, the categorical variable C is embedded with the latent variable U. The reconstruction of the dataset is generated according to the following procedure:\n1. One of the components of the GMM is chosen according to a categorical variable C with a prior distribution Q_C.\n2. The representation u is generated from the c-th component, i.e., Q_{U|C} ∼ N(u; μ_c, Σ_c).\n3. The decoder maps the latent representation u to x̂, which is the reconstruction of the source x, by using the mapping Q_{X|U}.\n3.1. The decoder is modeled with a DNN g_φ that maps u to the estimate x̂, i.e., [x̂] = g_φ(u).\nFor the generative network, shown in Figure 3, the following Markov chain holds:\nC − U − X. (2)
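To make the generative procedure of Section 2.2 concrete, here is a minimal NumPy sketch of one generative pass; pi, mu, sigma, and the decoder g are placeholders for the trained quantities, and sigma is assumed to hold per-component standard deviation vectors (diagonal case).

import numpy as np

def generate(pi, mu, sigma, g, rng=np.random.default_rng()):
    """Sample x_hat: c ~ Q_C, u ~ N(u; mu_c, Sigma_c), x_hat = g(u)."""
    c = rng.choice(len(pi), p=pi)        # step 1: pick a mixture component
    u = rng.normal(mu[c], sigma[c])      # step 2: sample u from the c-th Gaussian
    return g(u)                          # step 3: decode to the reconstruction x_hat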
" }, { "heading": "3 PROPOSED METHOD", "text": "In this section we present our clustering method. First, we provide a general cost function for the problem of unsupervised clustering that we study here, based on the variational IB framework, and we show that it generalizes the ELBO bound developed in (Jiang et al., 2017). We then parametrize our model using NNs whose parameters are optimized jointly with those of the GMM. Furthermore, we discuss the influence of the hyper-parameter s that controls the optimal trade-off between accuracy and regularization." }, { "heading": "3.1 BRIEF REVIEW OF VARIATIONAL INFORMATION BOTTLENECK FOR UNSUPERVISED LEARNING", "text": "As described in Section 2, the stochastic encoder P_{U|X} maps the observed data x to a representation u. Similarly, the stochastic decoder Q_{X|U} assigns an estimate x̂ of x based on the vector u. As per the IB method (Tishby et al., 1999), a suitable representation U should strike the right balance between capturing all the information about the categorical variable C that is contained in the observation X and using the most concise representation for it. This leads to maximizing the following Lagrangian:\nL_s(P) = I(C; U) − s I(X; U), (3)\nwhere s ≥ 0 designates the Lagrange multiplier and, for convenience, P denotes the conditional distribution P_{U|X}.\nInstead of equation 3, which is not always computable in our unsupervised clustering setting, we use a modified version of it (the so-called unsupervised IB objective (Alemi et al., 2017)) given by\nL̃_s(P) := −H(X|U) − s[H(U) − H(U|X)] (4)\n= E_{P_X}[ E_{P_{U|X}}[log P_{X|U} + s log P_U − s log P_{U|X}] ]. (5)\nFor a variational distribution Q_U on U (instead of the unknown P_U) and a variational stochastic decoder Q_{X|U} (instead of the unknown optimal decoder P_{X|U}), let Q := {Q_{X|U}, Q_U}. Also, let\nL^{VB}_s(P, Q) := E_{P_X}[ E_{P_{U|X}}[log Q_{X|U}] − s D_{KL}(P_{U|X}‖Q_U) ]. (6)\nLemma 1. For given P, we have\nL^{VB}_s(P, Q) ≤ L̃_s(P), for all Q.\nIn addition, there exists a unique Q that achieves the maximum max_Q L^{VB}_s(P, Q) = L̃_s(P), and it is given by\nQ*_{X|U} = P_{X|U}, Q*_U = P_U.\nUsing Lemma 1, the maximization of equation 4 can be written in terms of the variational IB cost as follows:\nmax_P L̃_s(P) = max_P max_Q L^{VB}_s(P, Q). (7)\nRemark 1. As we already mentioned at the beginning of this section, the related work (Jiang et al., 2017) performs unsupervised clustering by combining VAE with GMM. Specifically, it maximizes the following ELBO bound:\nL^{VaDE}_1 := E_{P_X}[ E_{P_{U|X}}[log Q_{X|U}] − D_{KL}(P_{C|X}‖Q_C) − E_{P_{C|X}}[D_{KL}(P_{U|X}‖Q_{U|C})] ]. (8)\nLet, for an arbitrary non-negative parameter s, L^{VaDE}_s be a generalization of the ELBO bound in equation 8 of (Jiang et al., 2017), given by\nL^{VaDE}_s := E_{P_X}[ E_{P_{U|X}}[log Q_{X|U}] − s D_{KL}(P_{C|X}‖Q_C) − s E_{P_{C|X}}[D_{KL}(P_{U|X}‖Q_{U|C})] ]. (9)\nInvestigating the RHS of equation 9, we get\nL^{VB}_s(P, Q) = L^{VaDE}_s + s E_{P_X}[ E_{P_{U|X}}[D_{KL}(P_{C|X}‖Q_{C|U})] ]. (10)\nThus, by the non-negativity of relative entropy, it is clear that L^{VaDE}_s is a lower bound on L^{VB}_s(P, Q). Also, if the variational distribution Q is such that the conditional marginal Q_{C|U} is equal to P_{C|X}, the bound is tight since the relative entropy term is then zero." }, { "heading": "3.2 PROPOSED ALGORITHM: VIB-GMM", "text": "In order to compute equation 7, we parametrize the distributions P_{U|X} and Q_{X|U} using DNNs. For instance, let the stochastic encoder P_{U|X} be a DNN f_θ and the stochastic decoder Q_{X|U} be a DNN g_φ. That is,\nP_θ(u|x) = N(u; μ_θ, Σ_θ), where [μ_θ, Σ_θ] = f_θ(x), and Q_φ(x|u) = g_φ(u) = [x̂], (11)\nwhere θ and φ are the weight and bias parameters of the DNNs.
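A sketch of the parametrization in equation 11 together with the reparametrization trick used later for sampling; the layer widths follow Section 4.2, and the class and function names are our own illustrative choices.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """f_theta: maps x to the mean and log-variance of P_theta(u|x) = N(mu, Sigma)."""
    def __init__(self, nx, nu=10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(nx, 500), nn.ReLU(),
            nn.Linear(500, 500), nn.ReLU(),
            nn.Linear(500, 2000), nn.ReLU(),
        )
        self.mu = nn.Linear(2000, nu)
        self.logvar = nn.Linear(2000, nu)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

def sample_u(mu, logvar):
    # Reparametrization trick: u = mu + Sigma^(1/2) * eps, with eps ~ N(0, I)
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps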
Furthermore, the latent space is modeled as a GMM with |C| components, with parameters ψ := {π_c, μ_c, Σ_c}_{c=1}^{|C|}, i.e.,\nQ_ψ(u) = ∑_c π_c N(u; μ_c, Σ_c). (12)\nUsing the parametrizations above, the optimization in equation 7 can be rewritten as\nmax_{θ,φ,ψ} L^{NN}_s(θ, φ, ψ), (13)\nwhere the cost function L^{NN}_s(θ, φ, ψ) is given by\nL^{NN}_s(θ, φ, ψ) := E_{P_X}[ E_{P_θ(U|X)}[log Q_φ(X|U)] − s D_{KL}(P_θ(U|X)‖Q_ψ(U)) ]. (14)\nThen, for a given set of n observations {x_i}_{i=1}^n, equation 13 can be approximated in terms of an empirical cost as follows:\nmax_{θ,φ,ψ} (1/n) ∑_{i=1}^n L^{emp}_{s,i}(θ, φ, ψ), (15)\nwhere L^{emp}_{s,i}(θ, φ, ψ) is the empirical cost for the i-th observation x_i, given by\nL^{emp}_{s,i}(θ, φ, ψ) = E_{P_θ(U_i|X_i)}[log Q_φ(X_i|U_i)] − s D_{KL}(P_θ(U_i|X_i)‖Q_ψ(U_i)). (16)\nFurthermore, the first term on the RHS of equation 16 can be computed using Monte Carlo sampling and the re-parametrization trick (Kingma & Welling, 2014). In particular, P_θ(u|x) can be sampled by first sampling a random variable Z with distribution P_Z = N(0, I) and then transforming the samples using some function f̃_θ : X × Z → U, i.e., u = f̃_θ(x, z). Thus,\nE_{P_θ(U_i|X_i)}[log Q_φ(X_i|U_i)] ≈ (1/M) ∑_{m=1}^M log q(x_i|u_{i,m}), with u_{i,m} = μ_{θ,i} + Σ_{θ,i}^{1/2} · ε_m, ε_m ∼ N(0, I),\nwhere M is the number of samples for the Monte Carlo sampling step.\nThe second term on the RHS of equation 16 is the KL divergence between a single-component multivariate Gaussian and a Gaussian Mixture Model with |C| components. An exact closed-form solution for the calculation of this term does not exist. However, a variational lower-bound approximation (Hershey & Olsen, 2007) of it can be obtained as\nD_{KL}(P_θ(U_i|X_i)‖Q_ψ(U_i)) = − log ∑_{c=1}^{|C|} π_c exp(−D_{KL}(N(μ_{θ,i}, Σ_{θ,i})‖N(μ_c, Σ_c))). (17)\nIn particular, in the specific case in which the covariance matrices are diagonal, i.e., Σ_{θ,i} := diag({σ²_{θ,i,j}}_{j=1}^{n_u}) and Σ_c := diag({σ²_{c,j}}_{j=1}^{n_u}), with n_u denoting the latent space dimension, equation 17 can be computed as\nD_{KL}(P_θ(U_i|X_i)‖Q_ψ(U_i)) = − log ∑_{c=1}^{|C|} π_c exp( −(1/2) ∑_{j=1}^{n_u} [ (μ_{θ,i,j} − μ_{c,j})²/σ²_{c,j} + log(σ²_{c,j}/σ²_{θ,i,j}) − 1 + σ²_{θ,i,j}/σ²_{c,j} ] ), (18)\nwhere μ_{θ,i,j} and σ²_{θ,i,j} are the mean and variance of the i-th representation in the j-th dimension of the latent space, and μ_{c,j} and σ²_{c,j} represent the mean and variance of the c-th component of the GMM in the j-th dimension of the latent space.
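A sketch of the diagonal-covariance KL approximation in equations 17-18, written with PyTorch tensors; tensor shapes and names are our own choices, and logsumexp is used for numerical stability.

import torch

def kl_to_gmm(mu_x, logvar_x, pi, mu_c, logvar_c):
    """Variational bound on KL( N(mu_x, diag) || sum_c pi_c N(mu_c, diag) ), Eq. 18.

    mu_x, logvar_x: [B, nu] encoder outputs; pi: [K]; mu_c, logvar_c: [K, nu].
    """
    mu_x, logvar_x = mu_x.unsqueeze(1), logvar_x.unsqueeze(1)   # [B, 1, nu]
    # Per-component KL between diagonal Gaussians, summed over latent dimensions
    kl_c = 0.5 * ((mu_x - mu_c) ** 2 / logvar_c.exp()
                  + logvar_c - logvar_x - 1.0
                  + logvar_x.exp() / logvar_c.exp()).sum(-1)    # [B, K]
    # Eq. 18: -log sum_c pi_c exp(-KL_c)
    return -torch.logsumexp(torch.log(pi) - kl_c, dim=-1)       # [B]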
Finally, we train the NNs to maximize the cost function in equation 14 over the parameters θ and φ, as well as the parameters ψ of the GMM. For the training step, we use the Adam optimizer (Kingma & Ba, 2015). The training procedure is detailed in Algorithm 1.\nOnce our model is trained, we assign the given data points to clusters. As mentioned in Section 2, we perform the assignment from the latent representations, i.e., Q_{C|U} = P_{C|X}. Hence, the probability that the observed data point x_i belongs to the c-th cluster is computed as\np(c|x_i) = q(c|u_i) = q_{ψ*}(c) q_{ψ*}(u_i|c) / q_{ψ*}(u_i) = π*_c N(u_i; μ*_c, Σ*_c) / ∑_c π*_c N(u_i; μ*_c, Σ*_c), (19)\nwhere * indicates the optimal values of the parameters as found at the end of the training phase. Finally, the cluster is picked based on the largest assignment probability value.\nAlgorithm 1 VIB-GMM algorithm for unsupervised learning\n1: input: Dataset D := {x_i}_{i=1}^n, parameter s ≥ 0.\n2: output: Optimal DNN weights θ*, φ* and GMM parameters ψ* = {π*_c, μ*_c, Σ*_c}_{c=1}^{|C|}.\n3: initialization Initialize θ, φ, ψ.\n4: repeat\n5: Randomly select b mini-batch samples {x_i}_{i=1}^b from D.\n6: Draw m random i.i.d. samples {z_j}_{j=1}^m from P_Z.\n7: Compute the m samples u_{i,j} = f̃_θ(x_i, z_j).\n8: For the selected mini-batch, compute gradients of the empirical cost in equation 15.\n9: Update θ, φ, ψ using the estimated gradients (e.g., with SGD or Adam).\n10: until convergence of θ, φ, ψ.\nRemark 2. It is worth mentioning that, with the use of the KL approximation in equation 17, our algorithm does not rely on the assumption P_{C|U} = Q_{C|U} (unlike Jiang et al. (2017)), which does not hold at the beginning of the training phase and leads to convergence issues. This assumption is only used in the final assignment after the training phase is over." }, { "heading": "3.3 EFFECT OF THE HYPER-PARAMETER", "text": "As we already mentioned, the hyper-parameter s controls the trade-off between the relevance of the representation U and its complexity. As can be seen from equation 14, for small values of s, the cross-entropy term dominates, i.e., the algorithm trains the parameters so as to reproduce X as accurately as possible. For large values of s, however, it is most important for the NN to produce an encoded version of X whose distribution matches the prior distribution of the latent space, i.e., the term D_{KL}(P_θ(U|X)‖Q_ψ(U)) is nearly zero. At the beginning of the training process, the GMM components are randomly selected, so starting with a large value of the hyper-parameter s is likely to steer the solution towards an irrelevant prior. Hence, for the tuning of the hyper-parameter s in practice, it is more efficient to start with a small value of s and gradually increase it with the number of epochs. This has the advantage of avoiding possible local minima, an aspect that is reminiscent of deterministic annealing (Slonim, 2002), where s plays the role of the temperature parameter. The experiments reported in the next section show that proceeding in the above-described manner for the selection of the parameter s helps to obtain better accuracy and better robustness to the initialization (i.e., no need for strong pretraining). Pseudo-code for the annealing is given in Algorithm 2. We note that tuning s is critical: the step size used in the update of s should be chosen carefully, otherwise phase transitions might be skipped, which would cause a bad ACC score.\nAlgorithm 2 Annealing Algorithm Pseudo-Code\ninput: Dataset D := {x_i}_{i=1}^n, hyper-parameter interval [s_min, s_max].\noutput: Optimal DNN weights θ*, φ*, GMM parameters ψ* = {π*_c, μ*_c, Σ*_c}_{c=1}^{|C|}, assignment probability P_{C|X}.\ninitialization Initialize θ, φ, ψ.\nrepeat\nApply the VIB-GMM algorithm. Update ψ, θ, φ.\nUpdate s, e.g., s = (1 + ε_s) s_old for some small ε_s > 0.\nuntil s exceeds s_max." },
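A sketch of the annealing loop in Algorithm 2; the multiplicative rate eps_s and the per-value epoch budget are illustrative assumptions (Section 4.3 reports concrete (s_min, s_max, n_epoch) triples per dataset).

def anneal_schedule(s_min=1.0, s_max=5.0, n_epoch=500, eps_s=0.05):
    """Yield (epoch, s): train n_epoch epochs per s value, then grow s geometrically."""
    s, epoch = s_min, 0
    while s <= s_max:
        for _ in range(n_epoch):
            yield epoch, s        # run one VIB-GMM training epoch at this value of s
            epoch += 1
        s *= (1.0 + eps_s)        # update s, e.g. s = (1 + eps_s) * s_old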
" }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DESCRIPTION OF USED DATASETS", "text": "In our empirical experiments, we apply our algorithm to the clustering of the following datasets.
MNIST: A dataset of 70000 gray-scale images of handwritten digits, of dimension $28 \times 28$ pixels.
STL-10: A dataset of color images collected from 10 categories. Each category consists of 1300 images of size $96 \times 96$ (pixels) $\times\, 3$ (RGB channels). Hence, the original input dimension $n_x$ is 27648. For this dataset, we use a pretrained convolutional NN model, i.e., ResNet-50 (He et al., 2016), to reduce the dimensionality of the input. This preprocessing reduces the input dimension to 2048. Then, our algorithm and the other baselines are used for clustering.
REUTERS10K: A dataset composed of 810000 English stories labeled with a category tree. As in (Xie et al., 2016), 4 root categories (corporate/industrial, government/social, markets, economics) are selected as labels and all documents with multiple labels are discarded. Then, tf-idf features are computed on the 2000 most frequently occurring words. Finally, 10000 samples are taken randomly, and these are referred to as the REUTERS10K dataset." }, { "heading": "4.2 NETWORK SETTINGS AND OTHER PARAMETERS", "text": "We use the following network architecture: the encoder is modeled with NNs with 3 hidden layers of dimensions $n_x - 500 - 500 - 2000 - n_u$, where $n_x$ is the input dimension and $n_u$ is the dimension of the latent space. The decoder consists of NNs with dimensions $n_u - 2000 - 500 - 500 - n_x$. All layers are fully connected. For comparison purposes, we chose the architecture of the hidden layers, as well as the dimension of the latent space $n_u = 10$, to coincide with those of the DEC algorithm of (Xie et al., 2016) and the VaDE algorithm of (Jiang et al., 2017). All except the last layers of the encoder and decoder are activated with the ReLU function. For the last (i.e., latent) layer of the encoder we use a linear activation; for the last (i.e., output) layer of the decoder we use a sigmoid function for MNIST and a linear activation for the remaining datasets. The batch size is 100 and the variational bound of equation 15 is maximized with the Adam optimizer of (Kingma & Ba, 2015). The learning rate is initialized at 0.002 and decreased gradually every 20 epochs with a decay rate of 0.9 until it reaches a small value (0.0005 in our experiments). The reconstruction loss is calculated with the cross-entropy criterion for MNIST and the mean squared error for the other datasets." }, { "heading": "4.3 CLUSTERING ACCURACY", "text": "We evaluate the performance of our algorithm in terms of the so-called unsupervised clustering accuracy (ACC), a widely used metric in the context of unsupervised learning (Min et al., 2018); a sketch of how ACC is computed is given below. For comparison purposes, we also present the accuracies of algorithms from previous art.
For each of the aforementioned datasets, we run our VIB-GMM algorithm for various values of the hyper-parameter $s$ inside an interval $[s_{\min}, s_{\max}]$, starting from the smallest value $s_1$ and gradually increasing the value of $s$ every $n_{\text{epoch}}$ epochs. For the MNIST dataset, we set $(s_{\min}, s_{\max}, n_{\text{epoch}}) = (1, 5, 500)$; for the STL-10 and REUTERS10K datasets we choose these parameters to be $(1, 20, 500)$ and $(1, 5, 100)$, respectively. The obtained ACC results are reported in Table 1, from which it can be seen that our algorithm significantly outperforms the DEC algorithm of (Xie et al., 2016), the VaDE algorithm of (Jiang et al., 2017), and GMM on the same datasets. It is important to note that, for the MNIST dataset, the reported ACC of 96.2% using our VIB-GMM algorithm is obtained as the best of ten runs, all with random initializations. In particular, we do not use any pretrained values for the initialization of our algorithm, in sharp contrast with the VaDE of (Jiang et al., 2017) and the DEC of (Xie et al., 2016). For the STL-10 dataset, none of the compared algorithms uses a pretrained network except the initial ResNet-50 for dimensionality reduction. For REUTERS10K, we used the same pretrain parameters as DEC and VaDE.
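For reference, ACC maximizes, over all one-to-one mappings between clusters and labels, the fraction of correctly mapped points. A common way to compute it is via the Hungarian algorithm; the following is our own small sketch using SciPy:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """ACC = max over cluster-to-label mappings of the matched fraction."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                       # cluster/label co-occurrence counts
    row, col = linear_sum_assignment(-cost)   # best one-to-one mapping
    return cost[row, col].sum() / len(y_true)
```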
Figure 4 depicts the evolution of ACC with the number of epochs for the four compared algorithms.
Figure 5 shows the evolution of the reconstruction loss of our VIB-GMM algorithm for the STL-10 dataset, as a function of the simultaneously varying value of the hyper-parameter $s$ and the number of epochs (recall that, as per the described methodology, we start with $s = s_1$ and increase its value gradually every $n_{\text{epoch}} = 500$ epochs). As can be seen from the figure, the first few epochs are spent almost entirely on reducing the reconstruction loss (i.e., a fitting phase), and most of the remaining epochs are spent making the found representation more concise (i.e., a smaller KL divergence). This is reminiscent of the two-phase behavior (fitting vs. compression) that was observed for supervised learning using VIB in (Schwartz-Ziv & Tishby, 2017)." }, { "heading": "4.4 VISUALIZATION ON THE LATENT SPACE", "text": "In this section, we investigate the evolution of the unsupervised clustering of the STL-10 dataset in the latent space using our VIB-GMM algorithm. For this purpose, we find it convenient to visualize the latent space through the t-SNE algorithm of (van der Maaten & Hinton, 2008), which generates meaningful representations in a two-dimensional space. Figure 6 shows 4000 randomly chosen latent representations before the start of the training process and after 1, 5 and 500 epochs, respectively. Each shown point ($\cdot$ marker in the figure) is the latent representation of a data sample, and colors are used to distinguish between clusters. Crosses ($\times$ markers in the figure) correspond to the centroids of the clusters. More specifically, Figure 6-(a) shows the initial latent space before the training process. If clustering is performed on the initial representations, it attains an ACC as small as 10%, i.e., as bad as a random assignment. Figure 6-(b) shows the latent space after one epoch, from which a partition of some of the points already starts to be visible. After five epochs, that partitioning is significantly sharper and the associated clusters can be recognized easily. Observe, however, that the cluster centers seem not to have converged yet. After 500 epochs, the ACC of our algorithm reaches 91.6% and the clusters and their centroids are neater, as visible from Figure 6-(d)." }, { "heading": "A THE PROOF OF LEMMA 1", "text": "First, we expand $\tilde{\mathcal{L}}_s(\mathbf{P})$ as follows:
$$\tilde{\mathcal{L}}_s(\mathbf{P}) = -H(X|U) - sI(X;U) = -H(X|U) - s[H(U) - H(U|X)]$$
$$= \iint_{\mathbf{u}\mathbf{x}} p(\mathbf{u},\mathbf{x}) \log p(\mathbf{x}|\mathbf{u}) \, d\mathbf{u} \, d\mathbf{x} + s \int_{\mathbf{u}} p(\mathbf{u}) \log p(\mathbf{u}) \, d\mathbf{u} - s \iint_{\mathbf{u}\mathbf{x}} p(\mathbf{u},\mathbf{x}) \log p(\mathbf{u}|\mathbf{x}) \, d\mathbf{u} \, d\mathbf{x}.$$
Then, $\mathcal{L}_s^{VB}(\mathbf{P},\mathbf{Q})$ is defined as follows:
$$\mathcal{L}_s^{VB}(\mathbf{P},\mathbf{Q}) := \iint_{\mathbf{u}\mathbf{x}} p(\mathbf{u},\mathbf{x}) \log q(\mathbf{x}|\mathbf{u}) \, d\mathbf{u} \, d\mathbf{x} + s \int_{\mathbf{u}} p(\mathbf{u}) \log q(\mathbf{u}) \, d\mathbf{u} - s \iint_{\mathbf{u}\mathbf{x}} p(\mathbf{u},\mathbf{x}) \log p(\mathbf{u}|\mathbf{x}) \, d\mathbf{u} \, d\mathbf{x}. \quad (20)$$
Hence, we have the following relation:
$$\tilde{\mathcal{L}}_s(\mathbf{P}) - \mathcal{L}_s^{VB}(\mathbf{P},\mathbf{Q}) = \mathbb{E}_{P_X}[D_{KL}(P_{X|U} \,\|\, Q_{X|U})] + s \, D_{KL}(P_U \,\|\, Q_U) \ge 0,$$
where equality holds under the equalities $Q_{X|U} = P_{X|U}$ and $Q_U = P_U$. We note that $s \ge 0$.
B THE PROOF OF THE ALTERNATIVE EXPRESSION $\mathcal{L}_s^{VaDE}$
Here we show how we obtained equation 10. To do so,
$$\mathcal{L}_s^{VaDE} = \mathbb{E}_{P_X}\big[ \mathbb{E}_{P_{U|X}}[\log Q_{X|U}] - s D_{KL}(P_{U|X} \,\|\, Q_U) - s \mathbb{E}_{P_{U|X}}\big[ D_{KL}(P_{C|X} \,\|\, Q_{C|U}) \big] \big]$$
$$= \mathbb{E}_{P_X}\big[ \mathbb{E}_{P_{U|X}}[\log Q_{X|U}] \big] - s \int_{\mathbf{x}} p(\mathbf{x}) \int_{\mathbf{u}} p(\mathbf{u}|\mathbf{x}) \log \frac{p(\mathbf{u}|\mathbf{x})}{q(\mathbf{u})} \, d\mathbf{u} \, d\mathbf{x} - s \int_{\mathbf{x}} p(\mathbf{x}) \int_{\mathbf{u}} p(\mathbf{u}|\mathbf{x}) \sum_c p(c|\mathbf{x}) \log \frac{p(c|\mathbf{x})}{q(c|\mathbf{u})} \, d\mathbf{u} \, d\mathbf{x}$$
$$\overset{(a)}{=} \mathbb{E}_{P_X}\big[ \mathbb{E}_{P_{U|X}}[\log Q_{X|U}] \big] - s \iint_{\mathbf{u}\mathbf{x}} p(\mathbf{x}) p(\mathbf{u}|\mathbf{x}) \log \frac{p(\mathbf{u}|\mathbf{x})}{q(\mathbf{u})} \, d\mathbf{u} \, d\mathbf{x} - s \iint_{\mathbf{u}\mathbf{x}} \sum_c p(\mathbf{x}) p(\mathbf{u}|c,\mathbf{x}) p(c|\mathbf{x}) \log \frac{p(c|\mathbf{x})}{q(c|\mathbf{u})} \, d\mathbf{u} \, d\mathbf{x}$$
$$= \mathbb{E}_{P_X}\big[ \mathbb{E}_{P_{U|X}}[\log Q_{X|U}] \big] - s \iint_{\mathbf{u}\mathbf{x}} \sum_c p(\mathbf{u},c,\mathbf{x}) \log \frac{p(\mathbf{u}|\mathbf{x}) p(c|\mathbf{x})}{q(\mathbf{u}) q(c|\mathbf{u})} \, d\mathbf{u} \, d\mathbf{x}$$
$$= \mathbb{E}_{P_X}\big[ \mathbb{E}_{P_{U|X}}[\log Q_{X|U}] \big] - s \iint_{\mathbf{u}\mathbf{x}} \sum_c p(\mathbf{u},c,\mathbf{x}) \log \frac{p(c|\mathbf{x})}{q(c)} \frac{p(\mathbf{u}|\mathbf{x})}{q(\mathbf{u}|c)} \, d\mathbf{u} \, d\mathbf{x}$$
$$= \mathbb{E}_{P_X}\big[ \mathbb{E}_{P_{U|X}}[\log Q_{X|U}] \big] - s \int_{\mathbf{x}} \sum_c p(c,\mathbf{x}) \log \frac{p(c|\mathbf{x})}{q(c)} \, d\mathbf{x} - s \iint_{\mathbf{u}\mathbf{x}} \sum_c p(\mathbf{x}) p(c|\mathbf{x}) p(\mathbf{u}|c,\mathbf{x}) \log \frac{p(\mathbf{u}|\mathbf{x})}{q(\mathbf{u}|c)} \, d\mathbf{u} \, d\mathbf{x}$$
$$\overset{(b)}{=} \mathbb{E}_{P_X}\big[ \mathbb{E}_{P_{U|X}}[\log Q_{X|U}] - s D_{KL}(P_{C|X} \,\|\, Q_C) - s \mathbb{E}_{P_{C|X}}[D_{KL}(P_{U|X} \,\|\, Q_{U|C})] \big]$$
$$\overset{(c)}{=} \mathcal{L}_s^{VB}(\mathbf{P},\mathbf{Q}) - s \mathbb{E}_{P_X}\big[ \mathbb{E}_{P_{U|X}}\big[ D_{KL}(P_{C|X} \,\|\, Q_{C|U}) \big] \big],$$
where (a) and (b) follow from the Markov chain $C - X - U$; and (c) follows from the definition of $\mathcal{L}_s^{VB}(\mathbf{P},\mathbf{Q})$ in equation 6." }, { "heading": "C KL DIVERGENCE BETWEEN MULTIVARIATE GAUSSIAN DISTRIBUTIONS", "text": "The KL divergence between two multivariate Gaussian distributions $P_1 \sim \mathcal{N}(\boldsymbol{\mu}_1, \boldsymbol{\Sigma}_1)$ and $P_2 \sim \mathcal{N}(\boldsymbol{\mu}_2, \boldsymbol{\Sigma}_2)$ in $\mathbb{R}^J$ is
$$D_{KL}(P_1 \,\|\, P_2) = \frac{1}{2}\left( (\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)^T \boldsymbol{\Sigma}_2^{-1} (\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2) + \log|\boldsymbol{\Sigma}_2| - \log|\boldsymbol{\Sigma}_1| - J + \mathrm{tr}(\boldsymbol{\Sigma}_2^{-1}\boldsymbol{\Sigma}_1) \right). \quad (21)$$
For the case in which the covariance matrices $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$ are diagonal, i.e., $\boldsymbol{\Sigma}_1 := \mathrm{diag}(\{\sigma_{1,j}^2\}_{j=1}^J)$ and $\boldsymbol{\Sigma}_2 := \mathrm{diag}(\{\sigma_{2,j}^2\}_{j=1}^J)$, equation 21 boils down to
$$D_{KL}(P_1 \,\|\, P_2) = \frac{1}{2} \sum_{j=1}^{J} \left( \frac{(\mu_{1,j} - \mu_{2,j})^2}{\sigma_{2,j}^2} + \log \frac{\sigma_{2,j}^2}{\sigma_{1,j}^2} - 1 + \frac{\sigma_{1,j}^2}{\sigma_{2,j}^2} \right). \quad (22)$$" }, { "heading": "D KL DIVERGENCE BETWEEN GAUSSIAN MIXTURE MODELS", "text": "An exact closed form for the calculation of the KL divergence between two Gaussian mixture models does not exist. In this paper, we use a variational lower bound approximation for the calculation of the KL divergence between two Gaussian mixture models. Let $f$ and $g$ be GMMs, with the marginal densities of $\mathbf{x}$ under $f$ and $g$ given by
$$f(\mathbf{x}) = \sum_{m=1}^{M} \omega_m \mathcal{N}(\mathbf{x}; \boldsymbol{\mu}_m^f, \boldsymbol{\Sigma}_m^f) = \sum_{m=1}^{M} \omega_m f_m(\mathbf{x}), \qquad g(\mathbf{x}) = \sum_{c=1}^{C} \pi_c \mathcal{N}(\mathbf{x}; \boldsymbol{\mu}_c^g, \boldsymbol{\Sigma}_c^g) = \sum_{c=1}^{C} \pi_c g_c(\mathbf{x}).$$
The KL divergence between the two Gaussian mixtures $f$ and $g$ can be approximated as follows:
$$D_{KL}^v(f \,\|\, g) := \sum_{m=1}^{M} \omega_m \log \frac{\sum_{m'=1}^{M} \omega_{m'} \exp(-D_{KL}(f_m \,\|\, f_{m'}))}{\sum_{c=1}^{C} \pi_c \exp(-D_{KL}(f_m \,\|\, g_c))}. \quad (23)$$
In this paper, we are interested in the particular case $M = 1$. Hence, equation 23 simplifies to
$$D_{KL}^v(f \,\|\, g) = -\log \sum_{c=1}^{C} \pi_c \exp(-D_{KL}(f \,\|\, g_c)), \quad (24)$$
where $D_{KL}(\cdot \,\|\, \cdot)$ is the KL divergence between single-component multivariate Gaussian distributions, defined as in equation 21. (A small numerical sketch of equations 22 and 24 is given below.)" } ]
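As a quick numerical companion to equations 22 and 24, here is a small self-contained NumPy sketch (ours, not from the paper) of the diagonal-Gaussian KL and the variational bound of a single Gaussian against a GMM:

```python
import numpy as np

def kl_diag_gauss(mu1, var1, mu2, var2):
    """KL(N(mu1, diag var1) || N(mu2, diag var2)), eq. (22)."""
    return 0.5 * np.sum((mu1 - mu2) ** 2 / var2
                        + np.log(var2 / var1) - 1.0 + var1 / var2)

def kl_gauss_to_gmm(mu, var, pis, mus, vars):
    """Variational bound of eq. (24): -log sum_c pi_c exp(-KL(f || g_c))."""
    kls = np.array([kl_diag_gauss(mu, var, mus[c], vars[c])
                    for c in range(len(pis))])
    return -np.log(np.sum(pis * np.exp(-kls)))

# sanity check: against a single-component "mixture", eq. (24) equals the plain KL
mu, var = np.zeros(2), np.ones(2)
print(kl_gauss_to_gmm(mu, var, np.array([1.0]),
                      np.array([[1.0, -1.0]]), np.array([[2.0, 0.5]])))
```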
2019
null
SP:61e38c36fc69f1cc6f7867971e75e06f7248283b
[ "This paper proposed a novel architecture to tackle the match prediction problem. There are two/three modules in the architecture, the R/P modules and the G module. R/P modules take the current utility estimates of the individuals in a given group comparison as input and produce the current R/P estimates for the individuals as output. The G module takes the final utility estimates of the individuals in a given group comparison as input and produces the winning probability estimate of one group preferred over the other in the given group comparison as output.", "This paper attempts to solve match prediction problem, i.e., whether a group is preferred over the other. The key challenge is \"consistency\" since it's hard to find the universal pattern over tasks. Instead, this paper propose to learn reward and penalty modules and both vary when the underlying model changes. Experiment results show that the proposed method consistently works the best. " ]
We explore the match prediction problem where one seeks to estimate the likelihood of a group of M items preferred over another, based on partial group comparison data. Challenges arise in practice. As existing state-of-the-art algorithms are tailored to certain statistical models, we have different best algorithms across distinct scenarios. Worse yet, we have no prior knowledge on the underlying model for a given scenario. These call for a unified approach that can be universally applied to a wide range of scenarios and achieve consistently high performances. To this end, we incorporate deep learning architectures so as to reflect the key structural features that most state-of-the-art algorithms, some of which are optimal in certain settings, share in common. This enables us to infer hidden models underlying a given dataset, which govern in-group interactions and statistical patterns of comparisons, and hence to devise the best algorithm tailored to the dataset at hand. Through extensive experiments on synthetic and real-world datasets, we evaluate our framework in comparison to state-of-the-art algorithms. It turns out that our framework consistently leads to the best performance across all datasets in terms of cross entropy loss and prediction accuracy, while the state-of-the-art algorithms suffer from inconsistent performances across different datasets. Furthermore, we show that it can be easily extended to attain satisfactory performances in rank aggregation tasks, suggesting that it can be adaptable for other tasks as well.
[]
[ { "authors": [ "A. Agarwal", "P. Patil", "S. Agarwal" ], "title": "Accelerated spectral ranking", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "R.A. Bradley", "M.E. Terry" ], "title": "Rank analysis of incomplete block designs: I. the method of paired comparisons", "venue": null, "year": 1952 }, { "authors": [ "C. Burges", "T. Shaked", "E. Renshaw", "A. Lazier", "M. Deeds", "N. Hamilton", "G. Hullender" ], "title": "Learning to rank using gradient descent", "venue": "In Proceedings of the 22nd international conference on Machine learning,", "year": 2005 }, { "authors": [ "X. Chen", "S. Gopi", "J. Mao", "J. Schneider" ], "title": "Optimal instance adaptive algorithm for the top-K ranking problem", "venue": "IEEE Transactions on Information Theory,", "year": 2018 }, { "authors": [ "Y. Chen", "C. Suh" ], "title": "Spectral MLE: Top-K rank aggregation from pairwise comparisons", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Y. Chen", "J. Fan", "C. Ma", "K. Wang" ], "title": "Spectral method and regularized MLE are both optimal for top-K ranking", "venue": "arXiv preprint arXiv:1705.09971,", "year": 2017 }, { "authors": [ "Z. Chen", "X. Li", "J. Bruna" ], "title": "Supervised community detection with line graph neural networks", "venue": "arXiv preprint arXiv:1705.08415,", "year": 2018 }, { "authors": [ "O. Delalleau", "E. Contal", "E. Thibodeau-Laufer", "R.C. Ferrari", "Y. Bengio", "F. Zhang" ], "title": "Beyond skill rating: Advanced matchmaking in ghost recon online", "venue": "IEEE Transactions on Computational Intelligence and AI in Games,", "year": 2012 }, { "authors": [ "C. DeLong", "N. Pathak", "K. Erickson", "E. Perrino", "K. Shim", "J. Srivastava" ], "title": "Teamskill: Modeling team chemistry in online multi-player games", "venue": "In Pacific-Asia Conference on Knowledge Discovery and Data Mining,", "year": 2011 }, { "authors": [ "X. Glorot", "Y Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In International Conference on Machine Learning,", "year": 2010 }, { "authors": [ "J. Guo", "Y. Fan", "Q. Ai", "W.B. Croft" ], "title": "A deep relevance matching model for ad-hoc retrieval", "venue": "In ACM International on Conference on Information and Knowledge Management,", "year": 2016 }, { "authors": [ "T.K. Huang", "C.J. Lin", "R.C. Weng" ], "title": "Generalized Bradley-Terry models and multi-class probability estimates", "venue": "Journal of Machine Learning Research,", "year": 2006 }, { "authors": [ "T.K. Huang", "C.J. Lin", "R.C. Weng" ], "title": "Ranking individuals by group comparisons", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "D.R. Hunter" ], "title": "MM algorithms for generalized Bradley-Terry models", "venue": "Annals of Statistics,", "year": 2004 }, { "authors": [ "M. Jang", "S. Kim", "C. Suh", "S. Oh" ], "title": "Optimal sample complexity ofM -wise data for top-K ranking", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "K. Järvelin", "J. Kekäläinen" ], "title": "Cumulated gain-based evaluation of IR techniques", "venue": "ACM Transactions on Information Systems,", "year": 2002 }, { "authors": [ "M.G. Kendall" ], "title": "A new measure of rank correlation", "venue": "Biometrika, 30(1/2):81–93,", "year": 1938 }, { "authors": [ "D.P. Kingma", "J. 
Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Y. Li", "M. Cheng", "K. Fujii", "F. Hsieh", "Cho-Jui Hsieh" ], "title": "Learning from group comparisons: Exploiting higher order interactions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "L. Maystre", "M. Grossglauser" ], "title": "Just sort it! A simple and effective approach to active preference learning", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "J.E. Menke", "T.R. Martinez" ], "title": "A Bradley–Terry artificial neural network model for individual ratings in group competitions", "venue": "Neural computing and Applications,", "year": 2008 }, { "authors": [ "S. Negahban", "S. Oh", "D. Shah" ], "title": "Rank centrality: Ranking from pair-wise comparisons", "venue": "Operations Research,", "year": 2016 }, { "authors": [ "A. Rajkumar", "S. Agarwal" ], "title": "A statistical convergence perspective of algorithms for rank aggregation from pairwise data", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "M. Richardson", "A. Prakash", "E. Brill" ], "title": "Beyond PageRank: Machine learning for static ranking", "venue": "In International conference on World Wide Web,", "year": 2006 }, { "authors": [ "F. Scarselli", "M. Gori", "A.C. Tsoi", "M. Hagenbuchner", "G. Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2009 }, { "authors": [ "M. Schlichtkrull", "T.N. Kipf", "P. Bloem", "R. Van den Berg", "I. Titov", "M. Welling" ], "title": "Modeling relational data with graph convolutional networks", "venue": "In European Semantic Web Conference,", "year": 2018 }, { "authors": [ "N.B. Shah", "M.J. Wainwright" ], "title": "Simple, robust and optimal ranking from pairwise comparisons", "venue": "arXiv preprint arXiv:1512.08949,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "The most elementary form of comparisons is pairwise: we often compare a pair of items and make judgments as to which one is of higher utility or simply preferable over the other. With a large amount of such comparison data, one can consider various interesting tasks. One may wish to predict future outcomes of unseen matches, and also to rank alternatives in order of utility or preference.\nChallenges arise in carrying out these tasks. Almost all existing state-of-the-art algorithms have been developed under the assumption that given a scenario, there exists a certain underlying model which governs statistical patterns of comparison data (see Section 2 for details). As such, we have different best-performing algorithms across distinct scenarios. This traditional approach, which begins by assuming certain models to develop algorithms, comes with limitations in practice.\nFirst, it gives rise to inconsistent performances. No single algorithm can perform consistently well in a wide range of scenarios, since it has been tailored to a specific model. Second, it is hard to know the underlying model without expert domain knowledge. In its absence, we have little choice but to find an appropriate algorithm via trial-and-error. Third, the model can be inherently complex for any existing algorithm to be effective. Sometimes groups of items are compared, thus the effects of interactions among in-group items come into play, further complicating the model.\nIn this work, we propose a unified algorithmic framework aimed to overcome these barriers. We focus on the match prediction problem where one wishes to estimate the likelihood of a group of M items preferred over another, based on partially observed group comparison data among a collection of n items. One can imagine that such group comparison data may bear complex statistical patterns due to a combination of two underlying models: the interaction model which governs the effects of in-group interactions in determining the utility or preference of a group; and the comparison model which governs the statistical patterns of pairwise group comparison data. Hence, designing a novel framework hinges heavily upon accurate inference of these underlying models.\nMain contribution. We incorporate deep learning techniques into our framework design. This enables us to infer the underlying models from real-world data obtained from a given application, and thus to achieve consistently high performances on a variety of datasets from diverse real-world applications where match prediction tasks are of interest.\nTo this end, we build on progress made through analysis in well-defined statistical models. We gain insights instrumental to the progress by looking into existing state-of-the-art algorithms in related and long-studied tasks such as rank aggregation (Negahban et al., 2016; Hunter, 2004; Huang et al., 2006; 2008). We find that most of them share a key element. They all exhibit so-called reward-andpenalty mechanisms in estimating the utilities of individual items.\nTo be more specific, they reward an item more greatly for winning (or being more preferred) in a disadvantageous comparison where its group is weaker than the counterpart. Likewise, they penalize it more greatly for losing (or being less preferred) in an advantageous one. 
In addition, the magnitudes of rewards and penalties are proportional to the contribution of the individual item to its group.
This structural similarity across the state-of-the-art algorithms has attracted our attention. Through some manipulation, we find that they all employ the same basic rule for estimating individual utilities (see (6) in Section 4 for details). The terms corresponding to rewards and penalties turn out to vary as either one of the two underlying models changes. This observation has inspired us to incorporate neural networks into our framework design.
The novelty of our design is salient in an ablation study where we compare it with a simple design. As an initial effort, a single-layer neural network had been employed to predict winning probabilities of unseen group matches (Menke & Martinez, 2008). It showed a promising result, demonstrating improved prediction accuracy on a real-world online game dataset, but it also exhibited a scalability issue. It requires one input node per item, making it prohibitive to extend to real-world applications with a large number of items. Leveraging more advanced architectures (see Figures 1 and 2), motivated by the analysis emphasized above, our design not only addresses this scalability issue by construction, but also outperforms the single-layer neural network. The merits of our design are evaluated against the single-layer neural network and other state-of-the-art algorithms through extensive experiments on a variety of synthetic and real-world datasets (see Section 5).
Using synthetic datasets, we demonstrate that our approach can achieve the performances of the state-of-the-art algorithms in the models for which they have been specifically developed. We investigate four models. Three consider various extensions of the Bradley-Terry-Luce model (Bradley & Terry, 1952) to the group comparison scenario. The other is a generalized version of the Thurstone model (Herbrich et al., 2007) widely used in the skill rating systems of online games. As a result, we show that our framework consistently yields the best performances across all of these datasets (near-best in some cases), while the other state-of-the-art algorithms suffer from inconsistent performances across different models.
Using real-world datasets, we also demonstrate that our framework performs consistently well across diverse real-world applications. We investigate five real-world datasets (sources in Footnote 6). One is a crowd-sourced image classification dataset, another is a collection of movie ratings, and the other three are match records from online games. We consider, in addition to the cross entropy loss, the prediction accuracy as another metric (defined in (9)). As a result, we show that our framework consistently yields almost the best performances across all of these datasets in terms of both metrics.
We also show that our framework can be easily extended to achieve the best performance in rank aggregation tasks, where one seeks to rank items in order of utility or preference. Using a real-world dataset of movie ratings, we demonstrate that our framework yields the best performances in terms of two well-known metrics (see Footnote 10): the Kendall tau distance (Kendall, 1938) and normalized discounted cumulative gain (Järvelin & Kekäläinen, 2002). This result suggests that it can potentially be adaptable for other tasks as well."
}, { "heading": "2 RELATED WORK", "text": "The most related prior works are (Huang et al., 2006; 2008; Li et al., 2018; Herbrich et al., 2007) where plausible statistical models (some long-established and some widely used in practice) for in-\ngroup interactions and group comparisons are assumed, and statistical analysis is carried out to a great extent. We use the algorithms developed in these models and their variants for main baselines.\nThe problem of estimating individual utilities from group comparison data has been investigated in (Huang et al., 2006; 2008). They considered extensions of the BTL model where the group utility is either the sum or the product of individual utilities, and group comparison data follow the BTL model in terms of two group utilities (which we call the BTL-sum and BTL-product models respectively).\nA more advanced in-group interaction model has been explored in (Li et al., 2018). They considered a scenario where a pair of individuals in a group leads to a synergy, which contributes to the group. The group utility is represented as the sum of two quantities (the HOI model): (1) the sum of individual utilities and (2) the sum of the products of all pairs of individual utilities. A general scenario, where any k-tuple of individuals in a group leads to a synergy, has been considered in (DeLong et al., 2011).\nIn (Herbrich et al., 2007), they assumed individual utilities to be centered around a mean following a Gaussian distribution and viewed the group utility as their sum (the Thurstone model). The algorithm therein is widely used in skill rating systems of online games where groups of users compete.\nEmploying neural networks has been considered as an initial effort to predict winning probabilities of unseen group matches (Menke & Martinez, 2008). It has been shown that a single-layer neural network can fit some variants of the BTL model (Huang et al., 2008) and improve prediction accuracy through experiments on a real-world online game dataset.\nSome other works have also employed neural networks to exploit hidden models, for example, in information retrieval (Burges et al., 2005; Richardson et al., 2006; Guo et al., 2016) and community detection (Chen et al., 2018b) in graph domains (Scarselli et al., 2009; Schlichtkrull et al., 2018)." }, { "heading": "3 PROBLEM SETUP", "text": "We investigate the match prediction problem where we seek to predict comparison outcomes for unobserved pairs of groups given a collection of pairwise group comparison data. We consider the setting where each group consists of M individual items.\nWe are given comparison observations between two groups of size M . To be more specific, each comparison consists of (A,B, yAB). A and B are groups of M individuals, respectively. yAB indicates which group wins (or is preferred):\nyAB = { 1 if A B; 0 otherwise, (1)\nwhere A B indicates that group A wins over group B. We denote by Dobs the set of observed comparisons, so all information we are given is {(A,B, yAB)}(A,B)∈Dobs . The unobserved set of comparisons, which we wish to predict, is denoted by Dunobs. We consider the cross entropy loss as our metric, as it serves to quantify the discrepancy between two probability distributions. We define ŷAB as the estimate of Pr [yAB = 1]. Our goal is to minimize\n−1 |Dobs| ∑ (A,B)∈Dobs yAB log ŷAB + (1− yAB) log(1− ŷAB). 
We develop our algorithm primarily based on the cross entropy loss, but for evaluation purposes, we also consider other metrics, such as prediction accuracy, the Kendall tau distance, and normalized discounted cumulative gain (see Footnote 10).
Notation. We denote by $[n] = \{1, 2, \dots, n\}$ the set of all individual items. We use lowercase letters such as $i$ and $j$ to represent individual items, and calligraphic letters such as $A$ and $B$ to represent sets of items. We denote by $\{w_i\}_{i\in A}$ the set of $w_i$'s for all $i \in A$. We denote by $[w_i]_{i\in A}$ a vector of $w_i$'s for all $i \in A$, whose ordering information is provided by the context. We denote by $\hat{y}$ an estimate of $y$. We use subscripts as in $y_{AB}$ when $y$ concerns a comparison between $A$ and $B$. We use superscripts as in $w^{(t)}$ when $w$ is updated iteratively. We use boldface symbols as in $\mathbf{w}$ to represent the set of $w_i$'s for all $i \in [n]$." }, { "heading": "4 PROPOSED ALGORITHM", "text": "[Figure 1: Overall architecture.]
We propose a neural network architecture that learns from observed group comparison data to predict unobserved group comparison outcomes. As presented in Figure 1, it consists of three modules, which we denote by R, P and G. Figure 2 presents the detailed architecture of R and P (left), which have the same structure but are separate modules with different weights, and that of G (right)." }, { "heading": "4.1 MOTIVATION", "text": "Our decision to incorporate the two modules R and P into our architecture has been inspired by state-of-the-art algorithms that have been shown optimal (either achieving the minimal sample complexity or the global minima of the cross entropy loss; details presented soon) under extensions of the well-established BTL model. Our main contribution lies in this design choice. Examining the algorithms in detail, we discover that they all share a similar mechanism in estimating individual utilities:
(a) They all exhibit "reward" and "penalty" terms in the estimation process (details in (3) and (4)). They update an individual's utility estimate by rewarding the individual for contributing to its group's winning, and likewise penalizing it for contributing to its group's losing.
(b) These reward and penalty terms vary in form as the underlying models change.
(c) The magnitudes of rewards and penalties depend on the power dynamics between groups. A greater reward is given to an individual when its group is relatively weaker compared to the opponent group, and likewise a greater penalty is given when its group is relatively stronger.
(d) Their magnitudes also depend on the portion of an individual's contribution within its group. Suppose an individual's group wins (or loses) in a group comparison. The individual is given a greater reward (or penalty) when its contribution among the others in the group is relatively greater.
Our algorithm design principles are centered around these key observations. We introduce two separate modules to represent rewards and penalties, respectively. We employ deep neural networks in the modules so that they can serve to infer the underlying models, which are unknown in practice.
Reward and Penalty in State-of-the-art Algorithms. We present details of the state-of-the-art algorithms developed under extensions of the BTL model. They illustrate all of the key observations.
• To begin with a simple case, Rank Centrality (Negahban et al., 2016) has been developed under the well-known BTL model, where individual items are compared in pairs.
It has been shown to achieve the minimal sample complexity for top-$K$ rank aggregation, whose task is to estimate the set of top-$K$ items, in certain regimes (Jang et al., 2017; Chen et al., 2017). As in (1), we define $y_{ij}$ as 1 if $i \succ j$ and 0 otherwise, given a pair of individual items $i$ and $j$. Then, its individual utility update rule is[1]:
$$w_i^{(t+1)} \leftarrow w_i^{(t)} + \alpha \sum_{j:(i,j)\in D_{\text{obs}}} \left( y_{ij} w_j^{(t)} - (1-y_{ij}) w_i^{(t)} \right). \quad (3)$$
For item $i$, one can consider $w_j^{(t)}$ (next to $y_{ij}$) as the reward, because it increases $w_i^{(t+1)}$ when $i \succ j$ ($y_{ij} = 1$), and $w_i^{(t)}$ (next to $(1-y_{ij})$) as the penalty, because it decreases $w_i^{(t+1)}$ when $i \prec j$ ($y_{ij} = 0$). One can consider $\alpha$ as a step size in the update. Note that the reward is large when the opponent's utility estimate is large, since the item can be considered to have won a tough match. Likewise, the penalty is large when its own utility estimate is large, since it has lost an easy match. We can see the reward and penalty mechanisms, and the magnitudes of their influence based on the dynamics between the two sides. These correspond to observations (a) and (c).
• Majorization-Minimization (MM) for the BTL-sum model has been developed in (Hunter, 2004; Huang et al., 2006). We define $w_A^{(t)} := \sum_{i\in A} w_i^{(t)}$. Then, its individual utility update rule is[2]:
$$w_i^{(t+1)} \leftarrow w_i^{(t)} + \alpha_i \sum_{(A,B)\in D_{\text{obs}},\, i\in A} \left( y_{AB} \cdot R_{AB,i}^{(t)} - (1-y_{AB}) \cdot P_{AB,i}^{(t)} \right) \quad (4)$$
where
$$R_{AB,i}^{(t)} = \frac{w_B^{(t)}}{w_A^{(t)} + w_B^{(t)}} \cdot \frac{w_i^{(t)}}{w_A^{(t)}}, \qquad P_{AB,i}^{(t)} = \frac{w_A^{(t)}}{w_A^{(t)} + w_B^{(t)}} \cdot \frac{w_i^{(t)}}{w_A^{(t)}}. \quad (5)$$
Note that the update rule (4) is similar in form to (3) of Rank Centrality, but the reward and penalty terms (5) are different. The interpretation is similar. The reward for item $i$ is large when the opponent group's utility estimate $w_B^{(t)}$ is large. Note another factor: the larger the contribution of item $i$ within its own group ($w_i^{(t)}/w_A^{(t)}$), the greater the reward. The same holds for the penalty. In addition to observations (a) and (c), we can also see the reward and penalty terms varying due to a different underlying model, and the effect of an individual's contribution within its group on rewards and penalties. These correspond to observations (b) and (d).
• MM for the BTL-product model has been developed in (Huang et al., 2008) and shown to achieve the global minima in terms of cross entropy loss. Its individual utility update rule is described as in (4), but the reward and penalty terms are different[3]. Here, a similar interpretation applies again, and we can also see observations (a)–(d) at play.
Our decision to incorporate another module G is as follows. To perform the match prediction task, we need not only individual utility estimates, but also in-group interaction and group comparison models that use such estimates to determine winning probabilities for pairs of groups. The role of G is to fit these underlying models during training. It takes as input the individual utility estimates from a pair of groups, which R and P help quantify, and predicts as output the probability of one group being preferred over the other. The three modules interact as a whole to perform the task." }, { "heading": "4.2 MODULES R AND P", "text": "The input and output of modules R (or P) are of dimension $2M$ (see Figure 2). $M$ can be an arbitrary integer greater than unity ($M > 1$). We set its value according to the dataset at hand.
[1] As in (Negahban et al., 2016), $\alpha = 1/d_{\max}$, where $d_{\max} := \max_i d_i$ and $d_i$ is the number of distinct items to which item $i$ is compared.
Also, we describe Rank Centrality as an iterative algorithm (one way to obtain the stationary distribution of the empirical pairwise preference matrix) in order to highlight its inherent reward-and-penalty mechanisms.
[2] $\alpha_i = \left( \sum_{(A,B)\in D_{\text{obs}},\, i\in A} \left( w_A^{(t)} + w_B^{(t)} \right)^{-1} \right)^{-1}$.
[3] $w_A^{(t)} := \prod_{i\in A} w_i^{(t)}$, $\alpha_i = \left( \sum_{(A,B)\in D_{\text{obs}},\, i\in A} \frac{w_A^{(t)}}{w_A^{(t)} + w_B^{(t)}} \right)^{-1}$, $R_{AB,i}^{(t)} = \frac{w_B^{(t)}}{w_A^{(t)} + w_B^{(t)}} \cdot w_i^{(t)}$ and $P_{AB,i}^{(t)} = \frac{w_A^{(t)}}{w_A^{(t)} + w_B^{(t)}} \cdot w_i^{(t)}$.
They take the current utility estimates of the individuals in a given group comparison $(A,B)$ as input, and produce the current R (or P) estimates for the individuals as output[4]. Note that the input and output dimensions are independent of the total number of items. Hence, our framework does not suffer from scalability issues, in contrast to the prior approach (Menke & Martinez, 2008), which is also based on neural networks. To make our algorithm robust against arbitrary orderings of the items within a group, we apply data augmentation. Given a sample, we create extra samples which represent the same outcome but have different item orderings. For example, given a sample $(A=(1,2), B=(3,4), y_{AB}=1)$, we create an extra sample such as $(A'=(2,1), B'=(4,3), y_{A'B'}=1)$. We also make our algorithm robust against arbitrary orderings of the two groups. That is, we create extra samples by changing the order of the two sets $A$ and $B$, as well as $A'$ and $B'$. This technique helps train modules R and P in such a way that they become robust against two kinds of arbitrary orderings: item orderings within a group and group orderings.
All layers are fully connected. The activation functions between two layers are rectified linear units. The final activation function is the sigmoid function, whose output ranges between 0 and 1.
Starting with an initial vector $w^{(0)} \in \mathbb{R}^n$ (reflecting the current utility estimates), we finally obtain $w^{(T)} \in \mathbb{R}^n$ by applying R and P repeatedly $T$ times. Each iteration is described as follows:
$$w_i^{(t+1)} \leftarrow w_i^{(t)} + \alpha \sum_{(A,B)\in D_{\text{obs}},\, i\in A} \left( y_{AB} \cdot R_{AB,i}^{(t)} - (1-y_{AB}) \cdot P_{AB,i}^{(t)} \right), \quad (6)$$
where[5] $\alpha = c/\max_i d_i$ and $d_i = |\{(A,B) : i \in A \cup B\}|$.
At the end of each iteration, we transform $w^{(t+1)}$ to be zero-mean, as in $w_i^{(t+1)} \leftarrow w_i^{(t+1)} - \sum_{i=1}^n w_i^{(t+1)}/n$, and unit-norm, as in $w_i^{(t+1)} \leftarrow w_i^{(t+1)}/\|w^{(t+1)}\|_2$. At iteration $t$, given $w^{(t)}$, R and P produce positive real values in $[0,1]$:
$$R_{AB,i}^{(t)} = \mathrm{R}\big( i, [w_j^{(t)}]_{j\in A}, [w_k^{(t)}]_{k\in B} \big), \qquad P_{AB,i}^{(t)} = \mathrm{P}\big( i, [w_j^{(t)}]_{j\in A}, [w_k^{(t)}]_{k\in B} \big). \quad (7)$$
Modules R and P take as input a concatenation of the two vectors $[w_j^{(t)}]_{j\in A}$ and $[w_k^{(t)}]_{k\in B}$. As they are vectors, not sets, ordering matters. Recall that, given a sample, we create multiple additional samples by employing data augmentation techniques. In doing so, we randomly mix the item ordering within a group and also the group ordering between the two groups. We preserve the resulting orderings in the created samples for the input to R and P. Returning to the previous example, if the sample created by data augmentation is $(A'=(2,1), B'=(4,3), y_{A'B'}=1)$, the first element of the input vector for R and P concerns $w_2^{(t)}$, the second $w_1^{(t)}$, the third $w_4^{(t)}$ and the fourth $w_3^{(t)}$. A sketch of one such update iteration is given below.
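To make the update of (6) concrete, here is a minimal NumPy sketch (ours, for illustration only) of one iteration, with the reward and penalty left as pluggable functions. Plugging in the closed-form expressions of (5) recovers the MM-sum special case (shown here, assuming positive utilities as MM does), whereas in the proposed framework R and P would be the neural modules.

```python
import numpy as np

def mm_sum_reward(pos, wA, wB):
    """Closed-form reward of eq. (5) under the BTL-sum model."""
    return wB.sum() / (wA.sum() + wB.sum()) * wA[pos] / wA.sum()

def mm_sum_penalty(pos, wA, wB):
    """Closed-form penalty of eq. (5) under the BTL-sum model."""
    return wA.sum() / (wA.sum() + wB.sum()) * wA[pos] / wA.sum()

def one_iteration(w, D_obs, R, P, c=1.0):
    """One pass of eq. (6). R/P take (position of i in A, [w_j]_A, [w_k]_B)."""
    d = np.zeros(len(w))                       # d_i = |{(A,B) : i in A or B}|
    for A, B, _ in D_obs:
        for i in list(A) + list(B):
            d[i] += 1
    alpha = c / d.max()                        # step size, as in Footnote 5
    new_w = w.copy()
    for A, B, y in D_obs:                      # update members of both groups,
        for grp, opp, win in ((A, B, y), (B, A, 1 - y)):   # using symmetry
            wg, wo = w[list(grp)], w[list(opp)]
            for pos, i in enumerate(grp):
                new_w[i] += alpha * (win * R(pos, wg, wo)
                                     - (1 - win) * P(pos, wg, wo))
    new_w -= new_w.mean()                      # zero-mean and unit-norm
    return new_w / np.linalg.norm(new_w)
```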
" }, { "heading": "4.3 MODULE G", "text": "The input and output of module G are of dimension $2M$ and a scalar, respectively (see Figure 2). As in modules R and P, we set the value of $M$ according to the dataset at hand. Since the dimensions are independent of the total number of items, the module does not suffer from scalability issues. The module takes the final utility estimates of the individuals in a given group comparison $(A,B)$ as input (see Footnote 4), and produces the winning probability estimate of one group being preferred over the other in the given group comparison as output:
$$\hat{y}_{AB} = \mathrm{G}\big( [w_i^{(T)}]_{i\in A}, [w_j^{(T)}]_{j\in B} \big). \quad (8)$$
Similarly as in (7), module G takes as input a concatenation of the two vectors $[w_i^{(T)}]_{i\in A}$ and $[w_j^{(T)}]_{j\in B}$. The item and group orderings used for R and P are preserved through the input to G.
All layers are fully connected. The activation functions between two layers are rectified linear units. The final activation function is the sigmoid function, whose output ranges between 0 and 1.
[4] To be accurate, the produced output is a mix of the individual utility estimates, resulting from being passed through fully-connected layers. Conceptually speaking, we refer to them simply as individual utility estimates.
[5] Prior work (see Footnote 1) motivates the choice of $\alpha$, and hyperparameter tuning determines its scaling $c$ in the numerator.
We now describe our training procedure. We first split the available data $D_{\text{obs}}$ randomly into $D_{\text{train}}$ and $D_{\text{val}}$. We let $D_{\text{val}}$ be a small fraction (1%–2%) of $D_{\text{obs}}$ and use it for validation purposes.
Training Procedure.
(1) Initialize $w^{(0)}$ randomly using a Gaussian distribution with mean 0 and the normalized identity matrix as covariance. Also, initialize the parameters of R, P and G using the Xavier initialization (Glorot & Bengio, 2010).
(2) Obtain $w^{(T)}$ through $T$ iterations, in each of which we use modules R and P, together with the group comparison samples in $D_{\text{train}}$.
(3) Obtain $\{\hat{y}_{AB}\}_{(A,B)\in D_{\text{train}}}$ for each group comparison sample in $D_{\text{train}}$, using the $w^{(T)}$ obtained in (2) above and module G.
(4) Update the parameters of R, P, and G via the Adam optimizer (Kingma & Ba, 2014) to minimize the cross entropy loss in (2), replacing $D_{\text{obs}}$ therein by $D_{\text{train}}$. We apply weight decay regularization with a factor of 0.01.
For each training epoch, we repeat (2)–(4) above. We use 500 epochs, in each of which we calculate a validation loss using $D_{\text{val}}$. We apply early stopping, choosing the model parameters from the epoch with the lowest validation loss. To avoid terminological confusion, we make it clear that we use batch gradient descent. That is, we update the model parameters once at the end of each epoch. Hence, the $T$ iterations in (2) do not denote the number of (mini-)batches per epoch in the conventional sense, in each of which the model parameters would be updated. They are our architectural constructs, intended to obtain well-refined estimates $w^{(T)}$, from which we fit the underlying model. An end-to-end sketch of one training epoch is given below.
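A compact PyTorch-style sketch (ours; layer counts, sizes, and names are illustrative rather than the exact architecture) of one training epoch: T refinement iterations with R and P, prediction with G, and a cross-entropy update of all three modules.

```python
import torch
import torch.nn as nn

M, T, n = 5, 20, 300                       # group size, iterations, #items

def mlp(d_in, d_h, d_out, out_act):
    return nn.Sequential(nn.Linear(d_in, d_h), nn.ReLU(),
                         nn.Linear(d_h, d_h), nn.ReLU(),
                         nn.Linear(d_h, d_out), out_act)

R = mlp(2 * M, 7 * M, M, nn.Sigmoid())     # per-member rewards for the first group
P = mlp(2 * M, 7 * M, M, nn.Sigmoid())     # per-member penalties for the first group
G = mlp(2 * M, 9 * M, 1, nn.Sigmoid())     # winning-probability head
opt = torch.optim.Adam(list(R.parameters()) + list(P.parameters())
                       + list(G.parameters()), lr=1e-2, weight_decay=0.01)

def train_epoch(A_idx, B_idx, y, alpha):
    """A_idx, B_idx: [N, M] long tensors of item indices; y: [N] float in {0, 1}.

    Group-order augmentation (Sec. 4.2) is assumed to have been applied, so each
    comparison appears with both group orderings and only members of the first
    group are updated here. w^{(0)} is re-drawn each epoch for simplicity.
    """
    w = torch.randn(n) / n ** 0.5
    for _ in range(T):                                   # refinement, eq. (6)
        inp = torch.cat([w[A_idx], w[B_idx]], dim=1)     # [N, 2M]
        upd = torch.zeros_like(w).index_add(
            0, A_idx.reshape(-1),
            (y[:, None] * R(inp) - (1 - y)[:, None] * P(inp)).reshape(-1))
        w = w + alpha * upd
        w = (w - w.mean())
        w = w / w.norm()                                 # zero-mean, unit-norm
    y_hat = G(torch.cat([w[A_idx], w[B_idx]], dim=1)).squeeze(1)
    loss = nn.functional.binary_cross_entropy(y_hat, y)  # eq. (2) on D_train
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```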
}, { "heading": "5.1 SYNTHETIC DATA EXPERIMENTS", "text": "We use four synthetic datasets: BTL-sum, BTL-product, HOI and a generalized Thurstone. In all, we set n = 300 and M = 5. In the HOI model, we generate the ground truth utilities and dimension-7 features using Gaussian distributions. In the others, we generate the ground truth utilities uniformly at random. We generate 5n log n distinct paired groups and each pair is compared 10 times.\nWe split generated datasets randomly into Dobs (90%) and Dunobs (10%). All algorithms use Dobs to predict unobserved group comparisons in Dunobs. They use a fraction (1%–2%) of Dobs for validation purposes if necessary. We use T = 20 and the learning rate of 10−2. We use four hidden layers with 7M nodes each for modules R and P, and four hidden layers with 9M nodes each for module G.\nAs our performance metric, we consider the cross entropy loss in (2). MM-sum and MM-prod have been developed to achieve maximum likelihood, which can be shown to be equivalent to minimizing the cross entropy loss, and SGD-HOI is tailored for the cross entropy loss as it can adopt an arbitrary loss function. It may seem somewhat unfair for Rank Centrality and TrueSkill, which have not been developed to minimize the cross entropy loss. In Section 5.2, some of our experiment results on real-world datasets compare the algorithms in terms of other metrics, for which none of them are tailored. They include prediction accuracy, the Kendall tau distance, and normalized discounted cumulative gain (see Footnote 10), which may be considered more relevant for practical use.\nFigure 3 shows our result. The algorithms that underperform by large gaps are not presented.\nBTL-sum model: In most settings where the amount of data samples is sufficient, our algorithm achieves the performance promised by MM-sum, which has been shown in (Huang et al., 2006) to achieve local minima in terms of cross entropy loss.\nBTL-product model: Our algorithm achieves the optimal performance promised by MM-prod, which has been shown in (Huang et al., 2008) to achieve global minima in terms of cross entropy loss.\nHOI model: SGD-HOI performs best in most settings. Our algorithm is second-best with a slight gap to the best performance in those settings, but performs best when the amount of data samples is ample. This is because our algorithm using neural networks is affected by overfitting when the amount of data is insufficient.\nThurstone model: MM-prod and our algorithm perform best. TrueSkill comes next with a gap, but it clearly outperforms the others. It is interesting to observe that TrueSkill, which has been developed specifically for the Thurstone model, does not lead to the best performance. However, this result does not run counter to theory, as its optimality has not been shown in the literature.\nOur algorithm performs consistently best (or near-best with a negligible gap) across all datasets, while the others perform inconsistently across them. Some perform well in one, but poorly in the others (for example, MM-sum performs best only in the BTL-sum model but poorly in all others).\nThis result has an important implication. Our algorithm is shown to achieve consistently the best performances, matching those of the state-of-the-art algorithms specifically developed for the models that underlie the synthetic datasets. This implies that our algorithm can be universally applied to achieve consistently high performances in a wide range of real-world match prediction applications. 
We corroborate its universality further in the following extensive real-world data experiments." }, { "heading": "5.2 REAL-WORLD DATA EXPERIMENTS", "text": "As in Section 5.1, we split the real-world datasets randomly into $D_{\text{obs}}$ (90%) and $D_{\text{unobs}}$ (10%), and use a fraction (1%–2%) of $D_{\text{obs}}$ for validation purposes if necessary. We use five different real-world datasets[6]: GIFGIF, HOTS, DOTA 2, LoL, and IMDb 5000. We use $T = 30, 15, 15, 20, 20$ and learning rates of $10^{-3}, 10^{-3}, 10^{-2}, 10^{-2}, 10^{-2}$, respectively. We use four hidden layers with $7M$ nodes each for modules R and P, and four hidden layers with $9M$ nodes each for module G.
GIFGIF: A crowd-sourcing project[6]. We use the dataset pre-processed in (Maystre & Grossglauser, 2017). A participant is presented with two images and asked to choose the one which better describes a given emotion[7]. This dataset belongs to a special case of our interest, as comparisons of individual items are concerned. We consider the emotion of happiness. We have 6,120 images and 106,886 samples.
HOTS: A collection of HOTS match records from 10/26/17 to 11/26/17, collected by HOTS Logs[6]. Each match consists of two groups with five players each. The players choose heroes (characters) for each match out of a pool of 84. We choose high-quality matches only, where all players are highly skilled according to some available statistics. There are 26,486 match records.
[6] Source: gifgif.media.mit.edu (GIFGIF); hotslogs.com/Info/API (HOTS); kaggle.com/devinanzelmo/dota2-matches (DOTA 2); kaggle.com/chuckephron/leagueoflegends (LoL); kaggle.com/carolzhangdc/imdb-5000movie-dataset (IMDb 5000).
[7] One is allowed to choose "neither", but we exclude such data.
DOTA 2: A collection of DOTA 2 match records[6]. Each match consists of two groups with five players each, and they choose heroes out of a pool of 113. There are 50,000 match records.
LoL: A collection of LoL professional match records[6]. Two groups with five players each compete. The players choose heroes for each match out of a pool of 140. There are 7,610 match records.
IMDb 5000: A collection of meta-data for 5,000 movies[6]. Each movie has a score and is associated with keywords. To fit our purpose, we generate match records for movie pairs from the collection. We consider each movie as a group and its five associated keywords as its items. Given a pair, we declare a win for the one with the higher score. We have 8,021 keywords and 123,420 samples.
In addition to the cross entropy loss, we measure another metric relevant in the real world: prediction accuracy. We declare a win for a group if the estimate of its winning probability is above a certain threshold, which we set as 0.5 (Delalleau et al., 2012). Thus, the prediction accuracy is expressed as follows[8]:
$$\frac{1}{|D_{\text{unobs}}|} \sum_{(A,B)\in D_{\text{unobs}}} y_{AB}\, \mathbb{I}_{\ge 0.5}(\hat{y}_{AB}) + (1-y_{AB})\, \mathbb{I}_{<0.5}(\hat{y}_{AB}). \quad (9)$$
The rationale behind using both the cross entropy loss and the prediction accuracy is that they serve as complementary metrics. To see this, let us consider a toy example. Suppose we have three group comparisons: $(A,B)$, $(B,C)$ and $(C,A)$. The ground truth is assumed to be $(y_{AB}, y_{BC}, y_{CA}) = (0.6, 0.7, 0.8)$. Consider two algorithms: Algorithm 1 estimates $(\hat{y}_{AB}, \hat{y}_{BC}, \hat{y}_{CA}) = (0.55, 0.6, 0.7)$, and Algorithm 2 yields $(\hat{y}_{AB}, \hat{y}_{BC}, \hat{y}_{CA}) = (0.45, 0.7, 0.8)$. Using (2) and (9), we can check that Algorithm 1 achieves a cross entropy loss of 0.6122 and a prediction accuracy of 0.7, while Algorithm 2 achieves 0.6098 and 0.6333; the sketch below reproduces these numbers. Note that better cross entropy losses do not necessarily translate into better prediction accuracies, and vice versa. This is because the cross entropy loss measures the closeness between the estimates and the ground truth, whereas the prediction accuracy measures the frequency at which the estimates predict the same preferred group as the ground truth.
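The toy numbers above can be reproduced directly; here is a small NumPy check (ours) of equations (2) and (9), with the ground-truth probabilities used as soft labels:

```python
import numpy as np

y = np.array([0.6, 0.7, 0.8])                    # ground-truth win probabilities

def ce(y, y_hat):                                 # eq. (2) with soft labels
    return float(np.mean(-(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))))

def acc(y, y_hat):                                # eq. (9) with soft labels
    win = (y_hat >= 0.5).astype(float)
    return float(np.mean(y * win + (1 - y) * (1 - win)))

alg1 = np.array([0.55, 0.6, 0.7])
alg2 = np.array([0.45, 0.7, 0.8])
print(ce(y, alg1), acc(y, alg1))   # -> 0.6122..., 0.7000...
print(ce(y, alg2), acc(y, alg2))   # -> 0.6098..., 0.6333...
```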
We also conduct an ablation study where we consider an algorithm based on a single-layer neural network (Menke & Martinez, 2008). It has a scalability problem, however, as it requires at least one input node per item. This prevents us from measuring its performance in our setup[9] for some datasets with a large number of items, such as GIFGIF and IMDb 5000.
Table 1 shows our results. The best performances are boldfaced, and the second-best are underlined. The numbers in the parentheses indicate the ranks among the algorithms being compared in a given setup of dataset and metric. Our algorithm consistently yields the top performances in all cases. In contrast, the state-of-the-art algorithms suffer from inconsistent performances across different datasets and/or metrics.
Here lies the value of our algorithm. In practice, it is difficult to choose which algorithm to deploy, since we do not know the underlying model for a given scenario without expert domain knowledge. Even when we have a reasonably accurate estimate of the underlying model, multiple algorithms known to perform equally well in one model can lead to noticeably different performances, making it difficult to choose one among multiple alternatives. This is demonstrated in the GIFGIF case, where MM and Rank Centrality, both known to perform well in the BTL model, show different performances. More importantly, models in a variety of real-world applications can potentially be so complicated that all existing algorithms tailored to specific models may perform poorly. As demonstrated in the extensive real-world data experiments, our algorithm has the potential to be universally applicable.
[8] We define $\mathbb{I}_{\ge 0.5}(x)$ as 1 if $x \ge 0.5$ and as 0 otherwise. $\mathbb{I}_{<0.5}(x)$ equals 1 if $x < 0.5$ and 0 otherwise.
[9] Intel Core i7-6850K @ 3.6GHz (CPU) and GeForce GTX 1080 Ti (single GPU).
Extension to Rank Aggregation. We have so far focused on the match prediction problem. However, we believe that our algorithm can be easily extended to other tasks as well. In an attempt to demonstrate its potential flexibility, we present preliminary results on rank aggregation tasks.
In rank aggregation tasks, one seeks to rank all items in order of utility (Rajkumar & Agarwal, 2014; Agarwal et al., 2018), or to identify only a few top-ranked items (Chen & Suh, 2015; Chen et al., 2018a), as in top-$K$ ranking. In both tasks, one needs to obtain a collection of individual utility estimates, inferred from comparison data, that is close to the ground truth one postulates.
We use the IMDb 5000 dataset, as it comes with IMDb movie scores, which can be regarded as the ground truth. The other algorithms produce individual utility estimates and use them to compute group utility estimates based on the models they assume. In extending our algorithm, we let R, P and G stay the same. We use its final winning probabilities for pairs of groups to compute group utility estimates.
In doing so, we adopt the associated scores proposed in (Shah & Wainwright, 2015): a group's score is the probability that it is preferred over another group chosen uniformly at random.
We measure the performance in terms of two well-known metrics[10]: the Kendall tau distance (Kendall, 1938) and normalized discounted cumulative gain (NDCG@K) (Järvelin & Kekäläinen, 2002). Table 2 shows our results. It turns out that our algorithm performs best in both metrics. This result suggests that, with slight adjustments of our framework to fit the purpose, it can potentially lead to satisfactory performances in other related tasks as well." }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "We investigate the match prediction problem, where the task is to predict the preference of one group over the other for an unseen pair of groups, based on observed group comparison data. Facing the real-world challenges that the underlying models governing in-group interactions and group comparisons are unknown and complex, we develop an algorithm that employs deep neural networks to infer such latent models from data specific to a given application. As a result, we show that our algorithm achieves consistently the best prediction performances compared to other state-of-the-art algorithms on multiple datasets across various domains. We also demonstrate that it can be applied to the rank aggregation task, which implies its potentially broader application to other tasks.
In view of this, we consider the following task as one possible direction for future work. The task is to predict whether multiple items that constitute a group would make an effective combination producing positive synergies, and thus lead to a desired outcome. Bundling strategies in e-commerce can be a real-world example: multiple items are bundled as a package and offered to the potential buyer with a discount. The goal is to figure out which set of items would appeal most to the buyer, given past sales data. We expect that our current architecture can be extended to this task. Among a number of items, some will contribute positively to the group (rewards) and some negatively (penalties). Our modules R and P can be applied to measure them. Our module G can be applied to govern how these rewards and penalties manifest collectively as a group outcome. We also expect that our framework can be useful in other tasks where in-group interactions are critically concerned but their statistical patterns are unknown in practice.
[10] The Kendall tau distance is defined as $|\{(i,j) : i < j, (\tau_1(i) < \tau_1(j) \wedge \tau_2(i) > \tau_2(j)) \vee (\tau_1(i) > \tau_1(j) \wedge \tau_2(i) < \tau_2(j))\}|$, where $\tau_1(i)$ and $\tau_2(i)$ are the rankings of item $i$ in $\tau_1$ and $\tau_2$. In words, it counts the number of item pairs that are ranked in reverse order in the two rankings. In NDCG, items are associated with relevance scores. In our case, items ranked higher in the ground-truth ranking have higher scores. Let $rel_i$ be the score of the item ranked $i$-th in a ranking. NDCG discounts $rel_i$ by $\log_2(i+1)$ to penalize a ranking for placing a high-relevance item at a low rank. NDCG@K is normalized DCG@K, where DCG@K is defined as $\sum_{i=1}^K rel_i/\log_2(i+1)$. A small sketch of both metrics is given below." } ]
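For completeness, a small sketch (ours) of both metrics as defined in Footnote 10; a ranking is given as an array where `tau[i]` is the rank of item i, and `rel` lists relevance scores in ranked order:

```python
import numpy as np

def kendall_tau_distance(tau1, tau2):
    """Number of item pairs ordered oppositely by the two rankings."""
    n, d = len(tau1), 0
    for i in range(n):
        for j in range(i + 1, n):
            if (tau1[i] - tau1[j]) * (tau2[i] - tau2[j]) < 0:
                d += 1
    return d

def ndcg_at_k(rel, rel_ideal, k):
    """DCG@K of the ranking divided by DCG@K of the ideal ordering."""
    dcg = lambda r: sum(r[i] / np.log2(i + 2) for i in range(min(k, len(r))))
    return dcg(rel) / dcg(rel_ideal)

print(kendall_tau_distance([1, 2, 3], [2, 1, 3]))                 # 1 discordant pair
print(ndcg_at_k([3, 1, 2], sorted([3, 1, 2], reverse=True), k=3))
```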
2019
null
SP:3c637265c7844256d8e73afb0d1ac811db505c73
[ "The aim of this paper is to provide a theoretical analysis of adversarial training under the linear classification setting. The main result states that, under many technical assumptions, adversarial training using gradient descent may converge to the hard margin SVM classifier with a fast rate. Here \"fast\" is not the standard 1/T fast rates but, rather, a rate of o(1/log T) (in comparison to recent results that looked into the convergence of gradient descent with logistic loss to the hard-margin SVM solution).", "This paper provides some analyses of the difference between adversarial training and standard training for linear classification problem. In particular, it proves that when the data is \\eps linearly separable, adversarial training converges faster than standard trading. It also argues that when the data is not \\eps linearly separable, adversarial training is more robust to outlier. Simulations are constructed to verify the arguments in the paper but there is no experiments on real dataset. " ]
It has been widely shown that adversarial training (Madry et al., 2018) is empirically effective in defending against adversarial attacks. However, the theoretical understanding of the difference between the solution of adversarial training and that of standard training is limited. In this paper, we characterize the solution of adversarial training for the linear classification problem for a full range of adversarial radii ε. Specifically, we show that if the data themselves are "ε-strongly linearly separable", adversarial training with radius smaller than ε converges to the hard-margin solution of SVM at a faster rate than standard training. If the data themselves are not "ε-strongly linearly separable", we show that adversarial training with radius ε is stable to outliers while standard training is not. Moreover, we prove that the classifier returned by adversarial training with a large radius ε has low confidence in each data point. Experiments corroborate our theoretical findings well.
[]
[ { "authors": [ "Sanjeev Arora", "S Simon Hu Wei Du", "Zhiyuan Li", "Ruslan Salakhutdinov", "Ruosong Wang" ], "title": "On exact computation with an infinitely wide neural net", "venue": null, "year": 1904 }, { "authors": [ "Stephen Boyd", "Lieven Vandenberghe" ], "title": "Convex Optimization", "venue": null, "year": 2004 }, { "authors": [ "Chenyi Chen", "Ari Seff", "Alain Kornhauser", "Jianxiong Xiao" ], "title": "Deepdriving: Learning affordance for direct perception in autonomous driving", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Moustapha Cisse", "Piotr Bojanowski", "Edouard Grave", "Yann Dauphin", "Nicolas Usunier" ], "title": "Parseval networks: Improving robustness to adversarial examples", "venue": "arXiv preprint arXiv:1704.08847,", "year": 2017 }, { "authors": [ "Francesco Croce", "Maksym Andriushchenko", "Matthias Hein" ], "title": "Provable robustness of relu networks via maximization of linear regions", "venue": "Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2019 }, { "authors": [ "Simon Du", "Jason Lee", "Haochuan Li", "Liwei Wang", "Xiyu Zhai" ], "title": "Gradient descent finds global minima of deep neural networks", "venue": "Proceedings of the 36th International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "F. Gamaleldin Elsayed", "Dilip Krishnan", "Hossein Mobahi", "Kevin Regan", "Samy Bengio" ], "title": "Large margin deep networks for classification", "venue": "arXiv preprint arXiv:", "year": 2018 }, { "authors": [ "J. Ian Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Suriya Gunasekar", "Jason Lee", "Daniel Soudry", "Nathan Srebro" ], "title": "Characterizing implicit bias in terms of optimization geometry", "venue": "Proceedings of the 35th International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Ren Shaoqing", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "arXiv preprint arXiv:1512.03385,", "year": 2015 }, { "authors": [ "Matthias Hein", "Maksym Andriushchenko" ], "title": "Formal guarantees on the robustness of a classifier against adversarial manipulation", "venue": "arXiv preprint arXiv:1705.08475,", "year": 2017 }, { "authors": [ "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Logan Engstrom", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial examples are not bugs, they are features", "venue": null, "year": 1905 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clement Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Ziwei Ji", "Matus Telgarsky" ], "title": "Risk and parameter convergence of logistic regression", "venue": "arXiv preprint arXiv:1803.07300,", "year": 2018 }, { "authors": [ "A Krizhevsky", "Sutskever I", "E.G. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "Alexey Kurakin", "J. 
Ian Goodfellow", "Samy Bengio" ], "title": "dversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Guang-He Lee", "David Alvarez-Melis", "S. Tommi Jaakkola" ], "title": "Towards robust, locally linear deep networks", "venue": "arXiv preprint arXiv:1907.03207,", "year": 2019 }, { "authors": [ "Jaehoon Lee", "S Samuel Xiao", "Lechao Schoenholz", "Yasaman Bahri", "Jascha Sohl-Dickstein", "Jeffrey Pennington" ], "title": "Wide neural networks of any depth evolve as linear models under gradient descent", "venue": "arXiv preprint arXiv:1902.06720,", "year": 2019 }, { "authors": [ "Kaifeng Lyu", "Jian Li" ], "title": "Gradient descent maximizes the margin of homogeneous neural networks", "venue": "arXiv preprint arXiv:1906.05890,", "year": 2019 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "Proceedings of the International Conference on Representation Learning (ICLR),", "year": 2018 }, { "authors": [ "Mor Nacson", "Shpigel", "Nathan Srebro", "Daniel Soudry" ], "title": "Stochastic gradient descent on separable data: Exact convergence with a fixed learning rate", "venue": "arXiv preprint arXiv:1806.01796,", "year": 2018 }, { "authors": [ "Thiago Serra", "Christian Tjandraatmadja", "Srikumar Ramalingam" ], "title": "Bounding and counting linear regions of deep neural networks", "venue": "In Proceedings of the 35th International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Daniel Soudry", "Elad Hoffer", "Mor Shpigel Nacson", "Suriya Gunasekar", "Nathan Srebro" ], "title": "The implicit bias of gradient descent on separable data", "venue": "arXiv preprint arXiv:1710.10345,", "year": 2017 }, { "authors": [ "Weijie Su", "Stephen Boyd", "Candes J. Emmanuel" ], "title": "A differential equation for modeling nesterov’s accelerated gradient method: Theory and insights", "venue": "arXiv preprint arXiv:1503.01243,", "year": 2015 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2017 }, { "authors": [ "Vladimir Vapnik" ], "title": "Convex Optimization. Springer, Data mining and knowledge discovery", "venue": null, "year": 1995 }, { "authors": [ "Arora" ], "title": "2019) suggest that over-parameterized neural networks of sufficient width", "venue": null, "year": 2019 }, { "authors": [ "Madry" ], "title": "We use PGD to find the arg maxx:‖x−xi‖≤ε `(x,θ) for each xi", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Despite the impressive performance of deep neural networks on various learning tasks, the widely existing adversarial examples (Goodfellow et al., 2014; Szegedy et al., 2017) has thwarted its application in the safety-sensitive scenarios (Kurakin et al., 2016; Chen et al., 2015). A well trained neural network can be vulnerable to certain small adversarial perturbation added to the original data, despite the perturbations is almost imperceptible to human.\nAdversarial training (Madry et al., 2018) is an effective method to train a robust deep neural network that can resist adversarial samples to some extent. However, the theoretical understanding of adversarial training is quite limited. In this paper, we try to unveil the mystery of adversarial training. Specifically, in this paper for adversarial training, we consider the attack in l2 norm, i.e. the following objective\nmin θ\n1\nN N∑ i=1 max ‖x−xi‖2≤ε `(x,θ). (1)\nThis is in contrast with the standard training objective minθ 1N ∑N i=1 `(x,θ). To obtain a clear theoretical characterization, we focus on the linear classifier.\nIn fact, for linearly separable data, Soudry et al. (2017); Ji & Telgarsky (2018) have proven that the linear classifier trained by gradient descent (GD) with logistic loss converges to the hard margin solution of SVM with a rate of O((log t)−1). However, we find things become much different for adversarial training. We first prove that for ”ε-strongly linearly separable” data (Definition 1), GD can find the hard margin classifier with a faster rate of O((log t)−(1+ε\n∗)), with the same exponential tail loss used in Soudry et al. (2017). Here ε∗ = (|S|ε(ε1−ε))/N , where ε1 is the distance between the support vectors and the hard margin solution of SVM, and |S| is the number of support vectors corresponding to hard margin classifier. This result shows that we can find a robust solution much faster by adversarial training if the data themselves are more separable than the adversarial radius.\nWhen the data are not ε-strongly linearly separable, adversarial training gives significantly different solutions from the standard training. To further illustration, we consider the following case, most of the data are ε-strongly linearly separable while there several outliers are not or even not linearly separable. Then the classifier returned by standard training is heavily affected by these outliers, since the classifier returned by standard training converges to the hard margin classifier while it can\nbe sensitive to outliers. However, we can show the stability of adversarial training to outliers, i.e. the classifier returned by adversarial training is slightly affected by outliers. Next, we also show that adversarial training leads to a classifier with relatively lower confidence in each data point than that of standard training. We then give a formal characterization for this phenomenon under the case of a large ε. The low confidence in each training data naturally induces a high training loss. A simple generalization error bound informs the high loss on test set, which interprets the widely observed poor test performance of adversarial training." }, { "heading": "1.1 RELATED WORK", "text": "Plenty of work trying to obtain a large margin solution to promote model robustness. Cisse et al. (2017); Hein & Andriushchenko (2017) regularize the training with the Lipschitz constant of the model to enhance robustness. 
{ "heading": "1.1 RELATED WORK", "text": "Plenty of work has tried to obtain a large margin solution to promote model robustness. Cisse et al. (2017); Hein & Andriushchenko (2017) regularize training with the Lipschitz constant of the model to enhance robustness. Another line of work (Elsayed et al., 2018; Lee et al., 2019a; Serra et al., 2018; Croce et al., 2019) uses a first-order approximation to compute the margin of a deep neural network and then finds a large margin solution by taking the approximation as the optimization objective. However, none of these works comes with a theoretical guarantee.\nOur paper is also related to Soudry et al. (2017); Ji & Telgarsky (2018), who prove that gradient descent with logistic loss converges to the hard margin classifier for linearly separable data at a rate of $O((\log t)^{-1})$, and on non-linearly separable data at a rate of $O(\log\log t/\log t)$. Nacson et al. (2018) extend this result to stochastic gradient descent. Gunasekar et al. (2018) study the convergence under other optimization methods. Lyu & Li (2019) provide a similar result for homogeneous neural networks. However, they all target the standard optimization problem rather than adversarial training.\nIlyas et al. (2019) study adversarial training through the lens of robust/non-robust features. They claim that adversarial attacks are attributed to the presence of non-robust features, and that adversarial training can be viewed as explicitly preventing the classifier from learning useful but non-robust features. Their theory is presented under a parameter estimation framework, which differs from conventional adversarial training. Moreover, they do not discuss the convergence direction of adversarial training." }, { "heading": "2 NOTATIONS AND ASSUMPTIONS", "text": "In this section, we introduce the notations and assumptions used in this paper. We consider a binary classification problem.\nNotations. The dataset is represented as $\{x_i, y_i\}_{i=1}^{N}$, where $x_i \in \mathbb{R}^d$ and $y_i \in \{-1, 1\}$. The loss function is denoted by $\ell(\cdot)$, and $\|\cdot\|$ denotes the l2 norm throughout this paper. The objective of standard training can be written as\n$L_0(w) = \frac{1}{N}\sum_{i=1}^{N} \ell(w^T y_i x_i)$, (2)\nwhere $w \in \mathbb{R}^d$ is a linear classifier. Adversarial training has the objective\n$L(w) = \frac{1}{N}\sum_{i=1}^{N} \max_{\|x - x_i\| \le \varepsilon} \ell(w^T y_i x)$, (3)\nwhere ε is the adversarial radius. An intuitive interpretation of adversarial training is that it requires the classifier to perform well in the ε-ball centered at each data point $x_i$. In this paper, $w_t$ is the iterate at step t of gradient descent (GD) under adversarial training, and we use $\hat{w}$ to denote the solution of SVM (8). We now give the definition of ε-strongly linearly separable data.\nDefinition 1 (ε-strongly linearly separable).¹ The dataset is ε-strongly linearly separable if there exists $w^*$ such that $w^{*T} y_i x_i > 0$ for all $x_i$ and $w^{*T} y_i x_i > \varepsilon\|w^*\|$.\nε-strong linear separability means there exists a linear classifier that not only gives a correct prediction for each data point but also ensures that every data point is farther away from the linear classifier than ε.¹\n¹Please note that ε-strong linear separability is a stronger condition than linear separability. It is also different from separability under a soft margin, which allows some points with $w^{*T} y_i x_i < 0$ as long as the remaining points satisfy $w^{*T} y_i x_i > \varepsilon\|w^*\|$.\n
In the sequel, we use xi to represent yixi for simplify the notations. We introduce the assumptions used in this paper. Assumption 1. The loss function `(·)2 satisfies that ∀u, `(u) > 0, `′(u) < 0, l′′(u) ≥ 0, limu→∞ `(u) = limu→∞ ` ′(u) = 0, and the `(u) has Lipschitz gradient.3 Assumption 2. The l(·) and l′(·) have exponential tail, which means there exists some constant u0 and C1, C2 satisfies that ∀u > u0, C1e−u ≤ l(u) ≤ C2e−u as well as l′(u).4\nAssumption 3. ThewTxi−ε‖w‖ has the range of [c1,∞) for eachxi, where c1 > −∞. ‖w‖ ≥ c2 for some constant c2.5.\nDue to the properties of loss function, the adversarial training objective (3) with radius ε has an explicit formula\nL(w) = 1 N N∑ i=1 ` ( wTxi − ε‖w‖ ) . (4)\nThe adversarial training objective equation 4 is optimized by the gradient descent, i.e.,\nwt+1 = wt − η∇L(wt) = wt − η N∑ i=1 `′ ( wTt xi − ε‖wt‖ )( xi − ε wt ‖wt‖ ) , (5)\nwhere η is the learning rate." }, { "heading": "3 ADVERSARIAL TRAINING CONVERGES FASTER TO HARD MARGIN FOR", "text": "ε-STRONGLY LINEARLY SEPARABLE DATA\nIn this section, we theoretically characterize where adversarial training converges when the data themselves are ε-strongly linearly separable. We first have the following key lemma which ensures the loss of adversarial training with radius ε can converge to zero on ε-strongly linearly separable data. The proof of this lemma is delegated to Appendix A. Lemma 1. The adversarial training objective equation 4 is convex and L-smooth for some positive constant L. Then if the data are ε-strongly linearly separable (1), the iterates {wt} returned by GD under adversarial training satisfy 1): limt→∞ L(wt) = 0, 2): limt→∞ ‖wt‖ = ∞. and 3): limt→∞w T t xi − ε‖wt‖ =∞\nThe distance of xi away from linear classifier wt is |wTt xi|/‖wt‖. Then, the convergence result in the Lemma 1 does not inform us the robustness of trained classifier. Fortunately, the results in Soudry et al. (2017); Ji & Telgarsky (2018) claim that the iterates of standard training updated by GD can converge to hard margin classifier with the rate of O((log t)−1). Now, we give the main result of this section. The conclusion is similar to Theorem 3 in Soudry et al. (2017) while the convergence is much sharper. Theorem 1. For any ε-strongly linearly separable data (Definition 1), and loss function `(·) satisfies Assumption 1 and 2. If the Assumption 3 holds, then the gradient flow iterates w(t) ,\ndw(t)\ndt = −∇L(w(t)) (6)\nsatisfies w(t) = ŵ ·O (log t) + h(t), (7)\n2The loss is set to be positive, differentiable, monotonically decreasing convex function. Lots of general used loss functions are satisfied, such as e−u, log (1 + e−u) etc.\n3It means that |`′(u)− `′(v)| ≤ L|u− v| for some positive constant L. 4Lots of loss functions we used such as log (1 + e−u), e−u are satisfied with this assumption. 5This assumption can be attained by re-scale the norm of w if necessary.\nfor a large t. Here ŵ is the hard margin solution of SVM:\nŵ = arg min w∈Rd\n‖w‖ s.t.wTxi ≥ 1. (8)\n‖h(t)‖ is in the order of o(log t). Then\nlim t→∞ ∥∥∥∥ w(t)‖w(t)‖ − ŵ‖ŵ‖ ∥∥∥∥ ≤ O((log t)−(1+ε∗)), (9)\nwhere ε∗ = |S|ε(ε1−ε)N , {εi} is the sorted distance of data away from the hard margin solution of SVM, and S = {i : ŵTxi = ε1‖ŵ‖}; |S| is the number of elements in set S. Finally, iterates {wt} of adversarial training updated by GD with step size η satisfies\n‖wk −w(kη)‖ ≤ O(η), (10) for k ∈ N+. Then we can conclude\nlim t→∞ ∥∥∥∥ wt‖wt‖ − ŵ‖ŵ‖ ∥∥∥∥ ≤ limt→∞ ∥∥∥∥ w(tη)‖w(tη)‖ − ŵ‖ŵ‖ ∥∥∥∥+O(η). 
We give a brief discussion of the hard margin solution of SVM. Since the dataset is ε-strongly linearly separable, the KKT conditions give $\varepsilon < 1/\|\hat{w}\|$, which tells us that all distances between the $x_i$ and $\hat{w}$ are larger than ε. Hence perturbations smaller than ε can be defended by $\hat{w}$, and the linear classifier $w_t$ becomes robust for large t, since it has the same direction as $\hat{w}$. We conclude that adversarial training on ε-strongly linearly separable data helps the iterates $\{w_t\}$ converge to a robust solution at a fast rate. A detailed proof of this theorem is delegated to Appendix C.\nWe have several remarks about adversarial training on ε-strongly linearly separable data.\n• Adversarial training converges to a robust solution, the same one as standard training, at a faster rate. The acceleration is determined by the constant $\varepsilon^* = |S|\varepsilon(\varepsilon_1-\varepsilon)/N$, which is related to the true margin $\varepsilon_1$ and the number of support vectors $|S|$.\n• A large proportion of support vectors $|S|/N$ and an appropriate choice of ε ($\varepsilon_1/2$ is the best) increase the convergence rate.\nWe next study where adversarial training converges when the data are not ε-strongly linearly separable.\n4 ADVERSARIAL TRAINING WHEN DATA ARE NOT ε-STRONGLY LINEARLY SEPARABLE\nIt is hard to know the exact separability of the data without computing the hard margin solution of SVM before adversarial training. Hence, understanding adversarial training on non-ε-strongly linearly separable data is also crucial." }, { "heading": "4.1 THE INFLUENCE OF OUTLIERS", "text": "We consider the case where the clean data are ε-strongly linearly separable but the whole dataset is not, because of outliers. We investigate how the outliers affect standard training and adversarial training." }, { "heading": "4.1.1 STANDARD TRAINING IS UNSTABLE TO OUTLIERS", "text": "We now illustrate how standard training can be easily altered by outliers, for both linearly and non-linearly separable data.\nAs shown in Soudry et al. (2017); Ji & Telgarsky (2018), gradient descent converges to the hard margin solution for exponential-tail losses and linearly separable data. Although the hard margin solution is believed to be a reasonably good classifier, it can be non-robust if one intentionally inserts outliers that become support vectors. The following proposition shows that the hard margin classifier can indeed be non-robust.\nProposition 1. For two linearly separable datasets $\{x_i^1\}_{i=1}^{N_1}$ and $\{x_i^2\}_{i=1}^{N_2}$, let $\hat{w}_1$ and $\hat{w}_2$ be their hard margin classifiers respectively. Then the hard margin classifier $\hat{w}$ of the union dataset $\{x_i^1\}_{i=1}^{N_1}\cup\{x_i^2\}_{i=1}^{N_2}$ satisfies $\|\hat{w}\| \ge \max\{\|\hat{w}_1\|, \|\hat{w}_2\|\}$.\nThis proposition follows from the definition of the hard margin classifier, and it suggests that adding a few extra outliers to a dataset can easily affect the robustness of the hard margin classifier, since $\|\hat{w}\| \ge \max\{\|\hat{w}_1\|, \|\hat{w}_2\|\}$ and one of the $\hat{w}_i$ can have an extremely large norm. Hence, the non-robustness of the hard margin classifier can be attributed to a few outliers. We use the next example to illustrate this.\nExample 1. Suppose a dataset $\{(x_i^1, 0)\}_{i=1}^{N_1}$ is well linearly separable with a large margin, which means its hard margin classifier $\hat{w}_1$ has a small norm. We insert some outliers of the form $\{(0, x_i^2)\}_{i=1}^{N_2}$ with $N_1 \gg N_2$.
If the outliers are linearly separable with a relatively small margin, with corresponding hard margin solution $\hat{w}_2$, then $\|\hat{w}_1\| \ll \|\hat{w}_2\|$.\nOne can show that the hard margin classifier on the union dataset $\{(x_i^1, 0)\}_{i=1}^{N_1}\cup\{(0, x_i^2)\}_{i=1}^{N_2}$ is $(\hat{w}_1, \hat{w}_2)$. Then for each $(x_i^1, 0)$ we have\n$\frac{|(\hat{w}_1, \hat{w}_2)^T (x_i^1, 0)|}{\|(\hat{w}_1, \hat{w}_2)\|} = \frac{|\hat{w}_1^T x_i^1|}{\sqrt{\|\hat{w}_1\|^2 + \|\hat{w}_2\|^2}} \le \frac{|\hat{w}_1^T x_i^1|}{\|\hat{w}_2\|} \ll \frac{|\hat{w}_1^T x_i^1|}{\|\hat{w}_1\|}$. (12)\nThis shows that the distance between each point of $\{(x_i^1, 0)\}_{i=1}^{N_1}$ and the new hard margin classifier $(\hat{w}_1, \hat{w}_2)$ becomes extremely small due to a few outliers $\{(0, x_i^2)\}_{i=1}^{N_2}$.\nThis indicates that the solution returned by standard training is very sensitive to a few outliers, because outliers can easily alter the hard margin classifier.\nWe now give a more general description of outliers based on the understanding of Example 1. Given two datasets $\{x_i^1\}_{i=1}^{N_1}$ and $\{x_i^2\}_{i=1}^{N_2}$ with $N_1 \gg N_2$, where $\{x_i^1\}_{i=1}^{N_1}$ themselves are ε-strongly linearly separable but $\{x_i^2\}_{i=1}^{N_2}$ are not, the dataset $\{x_i^2\}_{i=1}^{N_2}$ is considered as outliers. The outliers themselves may even be non-linearly separable. We use the next example to illustrate another behavior of standard training under non-linearly separable outliers.\nExample 2. The dataset $\{(x_i^{11}, x_i^{12})\}_{i=1}^{N_1}\in\mathbb{R}^{d_1+d_2}$ is ε-strongly linearly separable, but $\{(0, x_i^{22})\}_{i=1}^{N_2}\in\mathbb{R}^{d_1+d_2}$ is not linearly separable. Let $S = \mathrm{span}\{x_1^{12},\cdots,x_{N_1}^{12}\} = \mathrm{span}\{x_1^{22},\cdots,x_{N_2}^{22}\}$. Without loss of generality, we assume $\{x_i^{11}\}_{i=1}^{N_1}$ themselves are linearly separable, but their hard margin classifier $\hat{w}_{11}$ has a relatively small margin compared to $\{x_i^{12}\}_{i=1}^{N_1}$, i.e., $\|\hat{w}_{11}\| \gg \|\hat{w}_{12}\|$, where $\hat{w}_{12}$ is the hard margin classifier on $\{x_i^{12}\}_{i=1}^{N_1}$.⁶\nThe union dataset $\{(x_i^{11}, x_i^{12})\}_{i=1}^{N_1}\cup\{(0, x_i^{22})\}_{i=1}^{N_2}$ is not linearly separable because of the outlier set $\{(0, x_i^{22})\}_{i=1}^{N_2}$. Even worse, we show that in this case the classifier returned by standard training makes predictions mostly based on the useful but non-robust information $\{x_i^{11}\}_{i=1}^{N_1}$ (Ilyas et al., 2019).\nThe reason is as follows. The outliers $\{(0, x_i^{22})\}_{i=1}^{N_2}$ happen to form the "strongly convex part" as in Theorem 4.1 of Ji & Telgarsky (2018); thus the iterates $\{w_t\}$ of standard training updated by gradient descent converge to the direction of the hard margin classifier $\hat{w}_{11}$ of the dataset $\{\Pi_{S^\perp}(x_i^{11}, x_i^{12})\}_{i=1}^{N_1} = \{(x_i^{11}, 0)\}_{i=1}^{N_1}$, where $\Pi_{S^\perp}(\cdot)$ is the projection operator onto the space $S^\perp$. Hence, the presence of the outliers $\{(0, x_i^{22})\}_{i=1}^{N_2}$ leaves the classifier returned by standard training on the union dataset with poor robustness, whereas the true hard margin classifier on $\{(x_i^{11}, x_i^{12})\}_{i=1}^{N_1}$ is robust.\nThe two examples in which standard training converges to a non-robust solution in the presence of outliers also demonstrate that standard training easily captures "non-robust but useful information". In the first, classifying the outliers $\{(0, x_i^{22})\}_{i=1}^{N_2}$ into the correct category renders the classifier $(\hat{w}_1, \hat{w}_2)$ non-robust. In the second, the outliers mute the information from $\{(0, x_i^{12})\}_{i=1}^{N_1}$ within the dataset $\{(x_i^{11}, x_i^{12})\}_{i=1}^{N_1}$, so the classifier turns into a non-robust but useful one.\n⁶The hard margin classifier $\hat{w}_1$ of $\{(x_i^{11}, x_i^{12})\}_{i=1}^{N_1}$ satisfies $\|\hat{w}_1\| \le \min\{\|\hat{w}_{11}\|, \|\hat{w}_{12}\|\}$, which leads to an even more robust solution.
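Inequality (12) can be checked with a few lines of arithmetic. The numbers below are invented for illustration; w1 and w2 stand in for the hard-margin classifiers ŵ₁ and ŵ₂ of the clean block and the outlier block.

```python
# A few lines of arithmetic (ours; the numbers are invented) illustrating
# inequality (12): embedding a clean point as (x1, 0) in the union space and
# measuring its margin under the union classifier (w1, w2).
import numpy as np

w1 = np.array([0.1, 0.1])      # clean block: large margin, small norm
w2 = np.array([5.0, 5.0])      # outlier block: small margin, large norm
w_union = np.concatenate([w1, w2])

x1 = np.array([10.0, 10.0])    # a clean point, embedded as (x1, 0) in the union
x1_union = np.concatenate([x1, np.zeros(2)])

print(abs(w1 @ x1) / np.linalg.norm(w1))                  # original margin ~ 14.1
print(abs(w_union @ x1_union) / np.linalg.norm(w_union))  # collapses to ~ 0.28
```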
" }, { "heading": "4.1.2 ADVERSARIAL TRAINING IS STABLE TO OUTLIERS", "text": "Now we turn to the behavior of adversarial training on the above two examples. Lemma 1 shows that the iterates $\{w_t\}$ of adversarial training on ε-strongly linearly separable data updated by GD satisfy $\lim_{t\to\infty}\|w_t\| = \infty$. But when the data are not ε-strongly linearly separable, there exists, for each t, some $x_i^k$ such that $w_t^T x_i^k - \varepsilon\|w_t\| \le 0$. Hence $\|w_t\|$ cannot go to infinity, otherwise we would end up with an infinite loss. This is the essential distinction caused by non-ε-strong linear separability. We now give a formal analysis of the stability of adversarial training to outliers.\nBy Lemma 1, the iterates of adversarial training updated by GD converge to a minimum $w^*$. Letting $p_i^k = \ell'\left(w^{*T} x_i^k - \varepsilon\|w^*\|\right)$, we have\n$\nabla L(w^*) = \frac{1}{N_1+N_2}\sum_{k=1}^{2}\sum_{i=1}^{N_k} p_i^k\left(x_i^k - \varepsilon\frac{w^*}{\|w^*\|}\right) = 0$. (13)\nThis gives\n$\varepsilon\frac{w^*}{\|w^*\|}\sum_{k=1}^{2}\sum_{i=1}^{N_k} p_i^k = \sum_{k=1}^{2}\sum_{i=1}^{N_k} p_i^k x_i^k = \sum_{k=1}^{2} X_k p_k$, (14)\nwhere $X_k = (x_1^k,\cdots,x_{N_k}^k)$ and $p_k = (p_1^k,\cdots,p_{N_k}^k)$. This implies that the direction of the vector $w^*$ is decided by a linear combination of the data points, with $X_k p_k$ representing the contribution of $\{x_i^k\}_{i=1}^{N_k}$.\nWhen the data are ε-strongly linearly separable, by Assumption 2 we have $p_i^k/p_j^{k'} = o(1)$ (since $\|w_t\|$ goes to infinity) for any support vector $x_i^k$ and non-support vector $x_j^{k'}$ of $w^*/\|w^*\|$. Hence, the direction of $w^*$ is mostly decided by the support vectors. This is consistent with the fact that the hard margin solution of SVM satisfies (by the KKT conditions) $\hat{w} = \sum_{k=1}^{2}\sum_{i=1}^{N_k}\alpha_i^k x_i^k$ with $\alpha_i^k = 0$ for non-support vectors. The hard margin classifier is thus decided only by a linear combination of support vectors, which explains the potential instability of standard training to outliers, since outliers can easily alter the support vectors.\nHowever, things are different for non-ε-strongly linearly separable data. Since $\|w_t\|$ cannot go to infinity, we have $p_i^k/p_j^{k'} = O(1)$. Then the following inequality\n$\frac{\lambda_{\min}^{1/2}(X_2^T X_2)\,\|p_2\|}{\lambda_{\max}^{1/2}(X_1^T X_1)\,\|p_1\|} \le \frac{\|X_2 p_2\|}{\|X_1 p_1\|} \le \frac{\lambda_{\max}^{1/2}(X_2^T X_2)\,\|p_2\|}{\lambda_{\min}^{1/2}(X_1^T X_1)\,\|p_1\|}$, (15)\nwhere $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ are respectively the smallest and largest eigenvalues of a matrix A, informs us of the stability of adversarial training to outliers. From (13), the direction of $w^*$ can be decided by both datasets $\{x_i^1\}_{i=1}^{N_1}$ and $\{x_i^2\}_{i=1}^{N_2}$. Since $p_i^1$ has the same scale as $p_j^2$ for every i, j, and $N_1 \gg N_2$, we have $\|p_1\| \gg \|p_2\|$ and hence $\|X_1 p_1\| \gg \|X_2 p_2\|$. Thus (15) implies that $X_2 p_2$ contributes little to the direction of $w^*$, so $w^*$ cannot be the hard margin classifier on the union dataset. Instead, $w^*$ is mostly decided by the data $\{x_i^1\}_{i=1}^{N_1}$, which demonstrates the stability of adversarial training to outliers.
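The decomposition (13)-(14) is easy to inspect numerically. The sketch below (ours; the data sizes and step counts are arbitrary) runs GD on objective (4) for a clean-plus-outlier mixture and then compares the block contributions ‖X₁p₁‖ and ‖X₂p₂‖ at the resulting near-stationary point.

```python
# A sketch (ours; sizes and step counts are arbitrary) of the stationarity
# decomposition (13)-(14): after GD on objective (4) settles near a minimum w*,
# compare the block contributions ||X_1 p_1|| (clean) and ||X_2 p_2|| (outliers).
import numpy as np

rng = np.random.default_rng(2)
eps = 1.5

X1 = np.hstack([rng.normal(2.0, 0.3, (2000, 2)), np.zeros((2000, 2))])  # clean block
X2 = np.hstack([np.zeros((20, 2)), rng.normal(0.5, 0.1, (20, 2))])      # outliers
X = np.vstack([X1, X2])

loss_grad = lambda u: -1.0 / (1.0 + np.exp(np.clip(u, -50, 50)))  # log(1+e^{-u})

w = 0.1 * np.ones(4)
for _ in range(20_000):                                           # GD on (4)
    m = X @ w - eps * np.linalg.norm(w)
    w -= 0.05 * (loss_grad(m)[:, None]
                 * (X - eps * w / np.linalg.norm(w))).mean(axis=0)

for name, Xk in [("clean  ||X1 p1||", X1), ("outlier ||X2 p2||", X2)]:
    pk = loss_grad(Xk @ w - eps * np.linalg.norm(w))
    print(name, np.linalg.norm(Xk.T @ pk))   # expect ||X1 p1|| >> ||X2 p2||
```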
We now discuss the stability of adversarial training through the lens of robust/non-robust but useful features proposed by Ilyas et al. (2019). First, we quote the definition of a non-robust but useful feature.\nDefinition 3 (non-robust but useful feature (Ilyas et al., 2019)). For a given distribution of (x, y), if $\mathbb{E}_{(x,y)}[y\cdot f(x)] \ge \rho$ for some $\rho > 0$ but $\mathbb{E}_{(x,y)}[\inf_{\|\delta\|\le\varepsilon} y\cdot f(x+\delta)] \le 0$, then $f(\cdot)$ is a non-robust but useful feature.\nThe definition of a robust feature is obtained analogously for $f(\cdot)$ with $\mathbb{E}_{(x,y)}[\inf_{\|\delta\|\le\varepsilon} y\cdot f(x+\delta)] \ge 0$. A feature is defined by a classifier $f(\cdot)$, which is w in the linear case. Then the hard margin solutions $\hat{w}_1$ and $\hat{w}_2$ of SVM for $\{x_i^1\}_{i=1}^{N_1}$ and $\{x_i^2\}_{i=1}^{N_2}$ in Examples 1 and 2 are respectively a robust feature and a non-robust feature. Furthermore, let $w^*$ be the classifier (feature) captured by adversarial training. That $w^*$ is stable to outliers means adversarial training can prevent the classifier from capturing information from non-robust but useful features, in sharp contrast with standard training. This interprets the conclusion of Ilyas et al. (2019), which they only verified empirically.\nWe can also understand from (15) other scenarios not limited to $N_1 \gg N_2$. If $N_1 \approx N_2$, adversarial training can preserve part of the information from robust features. Moreover, $N_2 \gg N_1$ means that most features are non-robust, in which case adversarial training is also helpless." }, { "heading": "4.2 CONFIDENCE OF ADVERSARIAL TRAINING", "text": "In this subsection, we characterize the confidence of the classifier returned by adversarial training when the data are not ε-strongly linearly separable. As shown in Section 4.1.2, $\|w_t\|$ converges to some constant in this case. According to Definition 2, the confidence of the classifier is decided by $|w_t^T x_i|$ on each $x_i$; a bounded $\|w_t\|$ therefore corresponds to a relatively low confidence on each $x_i$.\nTo give a formal description of the confidence of $w_t$, we consider an extremely large $\varepsilon \ge \frac{1}{2}\max_{i,j}\|x_i - x_j\|$, which means the ε-balls centered at the $x_i$ all overlap with each other. For an $x_i$, a larger $|w^{*T} x_i|$ corresponds to a higher prediction confidence. For a given dataset $\{x_i\}_{i=1}^{N}$, let $N_1, N_2$ be the number of data points in each category, and let $x_{k,i}$ denote a data point from the k-th category. We then have the following theorem describing the confidence of the classifier.\nTheorem 2. Let $\ell(u) = e^{-u}$. For a classifier $w^*$ returned by adversarial training on non-linearly separable data, $|w^{*T} x_{k,i}|$ is either smaller than $\varepsilon\|w^*\|$ or satisfies\n$e^{|w^{*T} x_{k,i}| - \varepsilon\|w^*\|} \le \frac{1}{N_{k'}}\sum_{j=1}^{N_{k'}}\max_{x:\|x - x_{k',j}\|\le\varepsilon} e^{-w^{*T} x + \varepsilon\|w^*\|}$, if $w^{*T} x_{k,i} \ge \varepsilon\|w^*\|$;\n$e^{|w^{*T} x_{k,i}|} \le \frac{1}{N_{k'}}\sum_{j=1}^{N_{k'}} e^{-w^{*T} x_{k',j} - \varepsilon\|w^*\|}$, if $w^{*T} x_{k,i} \le -\varepsilon\|w^*\|$ and $w^{*T} x_{k',j} \ge 0$, (16)\nfor $k' \ne k$.\nOur result also applies to the losses $\ell(\cdot)$ discussed in this paper. A more explicit upper bound for those $w^{*T} x_{k,i} \ge \varepsilon\|w^*\|$ is given in equation (56). Note that $\|w^*\|$ is bounded on non-ε-strongly linearly separable data, so this theorem implies that $|w^{*T} x_i|$ cannot be extremely large for any $x_i$; hence the confidence on each $x_i$ is relatively low. A detailed proof of this theorem is delegated to Appendix D. We use $\varepsilon \ge \frac{1}{2}\max_{i,j}\|x_i - x_j\|$ only for simplicity; any large ε for adversarial training leads to a similar result.\nTheorem 2 shows that the training loss of the classifier $w^*$ stays at a relatively high level. A simple generalization error bound (Vapnik, 1995) then tells us that $w^*$ also ends up with a high error on test data. This interprets why models trained by adversarial training often face poor test accuracy (Madry et al., 2018)." }, { "heading": "5 EMPIRICAL STUDY", "text": "In this section, we compare adversarial training with standard training via a linear classifier in two scenarios, i.e., ε-strongly linearly separable data and non-ε-strongly linearly separable data. All experiments are conducted with the loss function $\ell(u) = \log(1+e^{-u})$, and the update rule is GD with learning rate 0.01.\n5.1 EXPERIMENTS ON ε-STRONGLY LINEARLY SEPARABLE DATA\nWe first conduct a series of simulations to verify our conclusions on ε-strongly linearly separable data. The dataset includes support and non-support vectors. We generate non-support vectors $\{x_i\}_{i=1}^{N}\sim\mathcal{N}(y_i\cdot 2\cdot\mathbf{1}_2,\ 0.3\cdot I_2)$ and support vectors on the hyperplane $\mathbf{1}_2^T x_i = 2 y_i$.
If there exists at least one support vector in each category, the hard margin solution of SVM is $\hat{w} = \mathbf{1}_2$; see Figure 1 for an example. The distance of the support vectors from the hard margin classifier is $\sqrt{2}$. Hence, the data are at most $\sqrt{2}$-strongly linearly separable. We respectively verify the conclusions of Theorem 1 concerning the convergence rate, the number of support vectors $|S|$, and the adversarial radius ε.\nWe conduct adversarial training with different ε, and standard training, on three datasets whose numbers of support and non-support vectors in each category are respectively (5000, 0), (2500, 2500) and (1, 4999) (see Figure 1). The results are shown in Figure 2, and some extra experiments are delegated to Appendix E. We highlight that the direction of $\{w_t\}$ converges at rates $O((\log 0.01\,t)^{-1})$ and $O((\log 0.01\,t)^{-(1+\varepsilon^*)})$ respectively for standard and adversarial training, due to equation (11).\nThe "margin gap" in Figure 2 is the distance between the direction of the iterates $w_t/\|w_t\|$ and the hard margin solution of SVM $\mathbf{1}_2/\sqrt{2}$. In Figure 2, a label such as "Adv: 2500 support 2500 non support: 0.1 varepsilon" means adversarial training with ε = 0.1 on the dataset with 2500 support vectors and 2500 non-support vectors in each category. From the results, we see that adversarial training indeed accelerates the convergence of $w_t/\|w_t\|$. Moreover, both adversarial and standard training benefit from a larger number of support vectors. Finally, as claimed in Theorem 1, the best adversarial radius is $\varepsilon_1/2$, which is $\sqrt{2}/2$ here.\n5.2 EXPERIMENTS ON NON-ε-STRONGLY LINEARLY SEPARABLE DATA\nIn this subsection, we conduct experiments on linearly separable (but non-ε-strongly linearly separable) and non-linearly separable data. More experiments can be found in Appendix E.\nWe first discuss the stability of adversarial training to outliers on linearly separable data, corresponding to Example 1. We generate $\{(x_i^1, 0_2)\}_{i=1}^{10000}\sim(\mathcal{N}(y_i\cdot 2\cdot\mathbf{1}_2,\ 0.3\cdot I_2),\ 0_2)$ and $\{(0_2, x_i^2)\}_{i=1}^{100}\sim(0_2,\ \mathcal{N}(y_i\cdot 0.5\cdot\mathbf{1}_2,\ 0.1\cdot I_2))$; the numbers of points of the two datasets in each category are 5000 and 50. The dataset $\{(0_2, x_i^2)\}_{i=1}^{100}$ is linearly separable but not ε-strongly linearly separable for ε = 1.5, while $\{(x_i^1, 0_2)\}_{i=1}^{10000}$ is ε-strongly linearly separable for ε = 1.5. $\{(0_2, x_i^2)\}_{i=1}^{100}$ can be viewed as outliers. We then compare the stability to outliers of adversarial training and standard training conducted on the union dataset, with adversarial radius ε = 1.5. Detailed results are shown in Figure 3a.\nFor the linearly separable data, the solid lines in Figure 3a are the norms of the iterates returned by adversarial and standard training. The dashed lines in Figure 3a, for example "Adv-sep/whole", show the ratio between the norm of the classifier components determined by the ε-strongly linearly separable part, i.e., the first two dimensions of the iterates, and the whole norm of the iterates ($\|w^1\|/\|w\|$, where $w\in\mathbb{R}^4 = (w^1, w^2)\in\mathbb{R}^{2+2}$). The first and last two dimensions of the iterates correspond to robust and non-robust features respectively. We make two observations for adversarial training: first, the norm of the iterates converges to some constant rather than to infinity; second, the direction of the iterates is mostly decided by $\{(x_i^1, 0_2)\}_{i=1}^{10000}$, since "Adv-sep/whole" converges to 1 as the number of iterations grows. For standard training we observe the opposite, which means the direction of the classifier obtained by standard training is easily affected by a few outliers $\{(0_2, x_i^2)\}_{i=1}^{100}$.
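For reference, here is a compact sketch (ours, with illustrative sizes and step counts) of the "Adv-sep/whole" diagnostic described above: the fraction of the classifier norm carried by the first two (clean) coordinates, for adversarial versus standard training.

```python
# A compact sketch (ours, illustrative sizes) of the "Adv-sep/whole" diagnostic
# from Figure 3a: the fraction of the classifier norm carried by the first two
# (clean) coordinates, for adversarial (eps = 1.5) versus standard (eps = 0) GD.
import numpy as np

rng = np.random.default_rng(3)
X = np.vstack([
    np.hstack([rng.normal(2.0, 0.3, (5000, 2)), np.zeros((5000, 2))]),  # clean
    np.hstack([np.zeros((50, 2)), rng.normal(0.5, 0.1, (50, 2))]),      # outliers
])

def train(eps, steps=30_000, lr=0.05):
    w = 0.1 * np.ones(4)
    for _ in range(steps):
        m = X @ w - eps * np.linalg.norm(w)
        c = -1.0 / (1.0 + np.exp(np.clip(m, -50, 50)))     # loss'(m)
        w -= lr * (c[:, None] * (X - eps * w / np.linalg.norm(w))).mean(axis=0)
    return w

for eps in [1.5, 0.0]:
    w = train(eps)
    ratio = np.linalg.norm(w[:2]) / np.linalg.norm(w)
    print(f"eps={eps}: ||w_sep||/||w|| = {ratio:.3f}")
# Expected: close to 1 for eps = 1.5, noticeably smaller for standard training.
```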
Then, we present the results on non-linearly separable data, corresponding to Example 2. We generate $\{(x_i^{11}, x_i^{12})\}_{i=1}^{10000}\sim(\mathcal{N}(y_i\cdot 0.5\cdot\mathbf{1}_2,\ 0.1\cdot I_2),\ \mathcal{N}(y_i\cdot 2\cdot\mathbf{1}_2,\ 0.3\cdot I_2))$ (5000 in each category) and $\{(0_2, x_i^2)\}_{i=1}^{100}\sim(0_2,\ \mathcal{N}(-y_i\cdot 0.5\cdot\mathbf{1}_2,\ 0.5\cdot I_2))$ (50 in each category). Here $\{(0_2, x_i^2)\}_{i=1}^{100}$ is the source of non-linear separability. $\{x_i^{12}\}_{i=1}^{10000}$ corresponds to a robust hard margin classifier while $\{x_i^{11}\}_{i=1}^{10000}$ does not, and $\{(0_2, x_i^2)\}_{i=1}^{100}$ can be viewed as outliers. We conduct adversarial training with ε = 0.7. The results are shown in Figure 3.\nFigure 3b refers to the non-linearly separable data, where the solid lines are again the norms of the iterates. The first and last two dimensions of the iterates are respectively decided by the "useful but non-robust" part and the "robust" part; the non-linear separability comes from the last two dimensions of the data, brought by the outliers $\{(0_2, x_i^2)\}_{i=1}^{100}$. The dashed lines in Figure 3b, for example "Adv: sep-nrob" and "Adv: non sep-rob", are respectively the proportions in norm of the first and last two dimensions of the iterates ($\|w^1\|/\|w\|$ and $\|w^2\|/\|w\|$, where $w\in\mathbb{R}^4 = (w^1, w^2)\in\mathbb{R}^{2+2}$).\nWe see that standard training easily captures the information from the "useful but non-robust" part (the first two dimensions), while the information from the "robust" part (the last two dimensions) is muted due to the outliers $\{(0_2, x_i^2)\}_{i=1}^{100}$. Adversarial training, on the contrary, captures the information from the mostly robust part.\nFinally, we examine the confidence of the classifier obtained by adversarial training with $\varepsilon \ge \frac{1}{2}\max_{i,j}\|x_i - x_j\|$. We generate 5000 samples $\{x_i\}_{i=1}^{10000}\sim\mathcal{N}(\pm 0.5\cdot\mathbf{1}_2,\ 0.1\cdot I_2)$ in each category. The data are linearly separable but not ε-strongly linearly separable for ε = 1; moreover, $1 \ge \frac{1}{2}\max_{i,j}\|x_i - x_j\|$. We compare the confidence on each data point of the classifiers returned by adversarial training with ε = 1 and by standard training. The distributions of the confidence over the data are shown in Figure 4. The classifier obtained by adversarial training has confidence close to 50% on all data, whereas the confidence of the classifier returned by standard training is almost 100% on all data. Hence, a large ε for adversarial training can hurt the prediction confidence of the classifier." }, { "heading": "6 CONCLUSION", "text": "In this paper, we give a theoretical characterization of adversarial training for linear classifiers under various settings. We conclude that on ε-strongly linearly separable data, adversarial training helps the iterates converge to the hard margin classifier at a faster rate than standard training, which means the iterates of adversarial training become robust within fewer update steps. Furthermore, we characterize adversarial training on non-ε-strongly linearly separable data, and show both theoretically and empirically that adversarial training is more stable to outliers in the dataset than standard training. Finally, we discuss the confidence of the classifier obtained by adversarial training: we prove that when $\varepsilon \ge \frac{1}{2}\max_{i,j}\|x_i - x_j\|$, the confidence of the classifier obtained by adversarial training stays at a low level. This reveals that a large ε for adversarial training is not a wise choice." }, { "heading": "A PROOF OF LEMMA 1", "text": "Proof. We first establish the convexity of the adversarial training objective. 
For any two w1,w2, and 0 ≤ λ ≤ 1, by the convexity and monotone decreasing property of `(·), we have\n1\nN N∑ i=1 ` ( λwT1 xi + (1− λ)wT2 xi − ε‖λw1 + (1− λ)w2‖ ) ≤ 1 N N∑ i=1 ` ( λwT1 xi + (1− λ)wT2 xi − ελ‖w1‖ − (1− λ)‖w2‖\n) ≤ 1 N N∑ i=1 λ` ( wT1 xi − ε‖w1‖ ) + (1− λ)` ( wT2 xi − ε‖w2‖ ) ,\n(17)\nwhich implies the convexity of adversarial training objective. On the other hand, if uT∇2L(w)u can be bounded by L‖u‖2 for some L, then L(w) is L-smooth. We see that\n∇2L(w) = 1 N N∑ i=1 `′′(wTxi − ε‖w‖) ( xi − ε w ‖w‖ )( xi − ε w ‖w‖ )T − ε‖w‖3 ` ′(wTxi − ε‖w‖)(‖w‖2I−wwT ). (18)\nThen, uT∇2L(w)u can be divided into two parts, we respectively compute them∣∣∣∣∣ 1N N∑ i=1 l′′(wTxi − ε‖w‖) ( (uTxi) 2 − 2 ε‖w‖u Txiw Tu+ ε2 ‖w‖2 (u Tw)2 )∣∣∣∣∣ ≤ L0 N ‖u‖2 ( λmax ( XTX ) + 2ε N∑ i=1 ‖xi‖+ 2ε2 ) .\n(19)\nHere X is matrix (xT1 , · · · ,xTN )T and λmax ( XTX ) is the largest eigenvalue of XTX. On the other hand, according to Assumption 3,\nuT ε ‖w‖3 ` ′(wTxi − ε‖w‖)(‖w‖2I−wwT )u ≤ 2ε C1 c2 ‖u‖2. (20)\nCombining the two above equations, we conclude that largest eigenvalue of∇2L(w) is bounded for some constant, which result in the L-smoothness of adversarial training objective. It’s a well known (Boyd & Vandenberghe, 2004) that for a convex and L-smooth function f(·), GD with step size η = 1L will ensure the iterates {xt} satisfies f(xt)− f(x∗) ≤ O ( 1 t ) . Since the adversarial training loss goes to zero only if wTt xi − ε‖wt‖ goes to infinity for each xi. It reveals our two conclusions limt→∞ ‖wt‖ =∞ and limt→∞wTt xi − ε‖wt‖ =∞." }, { "heading": "B ORDER OF NORM", "text": "In this section we give the order of ‖w(t)‖ on ε-strongly linearly separable data, which is highly related to the convergence rate of ∥∥∥ w(t)‖w(t)‖ − ŵ‖ŵ‖∥∥∥. Our proof is based on the gradient flow, we first bound the difference between flow iterates w(t) and wt.\nLemma 2. Let wt be the iterates updated by GD equation 5, then for dw(t)dt = −∇L(w(t)), we have\n‖w(kη)−wk‖ ≤ O(η). (21)\nProof. Let w̄(kη) = w((k − 1)η)− η∇L(w((k − 1)η)). (22)\nThen by Lipschitz gradient of L(·), we see\n‖w(kη)− w̄(kη)‖ = ∥∥∥∥∥ ∫ kη (k−1)η ∇L(w(u))−∇L(w((k − 1)η))du ∥∥∥∥∥ ≤ ∫ kη (k−1)η L ‖w(u)−w((k − 1)η)‖ du\n= ∫ kη (k−1)η L ∥∥∥∥∥ ∫ u (k−1)η ∇L(w(s))ds ∥∥∥∥∥ du = O(η2).\n(23)\nThen we have\n‖w(kη)−wk‖ ≤ ‖wk − w̄(kη)‖+ ‖w(kη)− w̄(kη)‖ ≤ ‖w((k − 1)η)−wk−1‖+ η ‖∇L(w((k − 1)η))−∇L(wk−1)‖+O(η2) ≤ (1 + ηL) ‖w((k − 1)η)−wk−1‖+O(η2)\n≤ · · · ≤ (1 + ηL)k ‖w(0)−w0‖+O(η) = O(η),\n(24)\nfor w(0) = w0, which results in the conclusion.\nBefore proving the Theorem 1, we give a lemma to illustrate thatw(t) will converge to the direction of ŵ.\nLemma 3. There exists a t0, for t > t0, we have\nw(t) = ρ(t)ŵ + h(t), (25)\nfor some ρ(t) and h(t). Here ρ(t) goes to infinity and ‖h(t)‖ is in the order of o(ρ(t)).\nProof. Let\nr(t) = w(t)− ‖w(t)‖‖ŵ‖ ŵ, (26)\n‖r(t)‖ = o (‖w(t)‖) will implies the conclusion. Since `′(·) has exponential tail Assumption 2, there exists a t0, for t > t0, we have\nd dt ‖r(t)‖ = ṙ\nT (t)r(t)\n‖r(t)‖ = − 1 ‖r(t)‖\n( 1− w(t) T ŵ\n‖w(t)‖‖ŵ‖\n) ∇L(w(t))Tr(t)\n≤ C N‖r(t)‖ (1− cos(w(t), ŵ)) N∑ i=1 exp ( −w(t)Txi + ε‖w(t)‖ )( xi − ε w(t) ‖w(t)‖ )T r(t).\n(27) Then, we see N∑ i=1 exp ( −w(t)Txi + ε‖w(t)‖ )( xi − ε w(t) ‖w(t)‖ )T r(t)\n= N∑ i=1 exp ( −r(t)Txi − ‖w(t)‖ŵTxi ‖ŵ‖ + ε‖w(t)‖ )( xi − ε w(t) ‖w(t)‖ )T r(t) ≤ N∑ i=1 exp ( −‖w(t)‖ŵ Txi ‖ŵ‖ + ε‖w(t)‖ ) − ε exp ( −r(t)Txi − ‖w(t)‖ŵTxi ‖ŵ‖ + ε‖w(t)‖ ) w(t)T ‖w(t)‖r(t)\n≤ N exp (−(ε1 − ε)‖w(t)‖) , (28)\nwhere ŵ is the hard margin classifier and ε is the distance between support vectors and ŵ. 
Here we use the relationship that ze−z ≤ 1 for any z and w(t)Tr(t) ≥ 0. Since\n(1− cos(w(t), ŵ)) = − r(t) T ŵ\n‖w(t)‖‖ŵ‖ , (29)\nand r(t)T ŵ ≤ 0, we have\nd dt ‖r(t)‖ ≤ −CN r(t)\nT ŵ\n‖r(t)‖‖w(t)‖‖ŵ‖ exp (−(ε1 − ε)‖w(t)‖) ≤ CN ‖w(t)‖ exp (−(ε1 − ε)‖w(t)‖) . (30)\nSimilar to Theorem 3 in Su et al. (2015), by exponential tail of l(·) Assumption 2, we have\nC1 exp ( −w(t)Txi + ε‖w(t)‖ ) ≤ ` ( w(t)Txi − ε‖w(t)‖ ) ≤ N‖ŵ −w0‖\n2t , (31)\nfor some constant C1 and any xi. Then we see\nlog 2C1\nN‖ŵ −w0‖ + log t ≤ w(t)Txi − ε‖w(t)‖. (32)\nOn the other hand, by the definition of hard margin classifier and Lemma 1, there exists xi such that w(t)Txi ≤ ε1‖w(t)‖. Combining this and equation 32, we have\n1\nε1 − ε\n( log\nC1 N‖ŵ −w0‖ + log t\n) ≤ ‖w(t)‖. (33)\nPlugging this into equation 30, and we see that there exists a t1 such that 12 log t ≥ log N‖ŵ−w0‖ C1 , then\n‖r(t)‖ ≤ ‖r(t0 ∧ t1)‖+ (ε1 − ε)N2‖ŵ −w0‖\nCC1\n∫ t t0∧t1 1 t log t dt = O(log log t), (34)\nfor t ≥ t0 ∧ t1. Combining these, we conclude that ‖r(t)‖ is in the order of o(‖w(t)‖).\nNext, we use a lemma to illustrate the explicit order of ρ(t)\nLemma 4. ρ(t) in Lemma 3 is on the scale of log tε1−ε , where {εi} is the sorted distance of {xi} N i=1 to hard margin solution of SVM ŵ.\nProof. From Lemma 3 and 3, we have\nw(t) = ρ(t) ŵ\n‖ŵ‖ + h(t), (35)\nwhere ρ(t)→∞ and ‖h(t)‖ = o(ρ(t)). Specifically, the h(t) can be chosen to be orthogonal with ŵ, otherwise we can use a decomposition in a direct sum. For ‖w(t)‖, we have\n‖w(t)‖ = √ ρ(t)2 + ‖h(t)‖2\n= √ ρ(t)2 + ‖h(t)‖2 − ρ(t) + ρ(t)\n=\n(√ ρ(t)2 + ‖h(t)‖2 + ρ(t) )(√ ρ(t)2 + ‖h(t)‖2 − ρ(t) ) √ ρ(t)2 + ‖h(t)‖2 + ρ(t) + ρ(t)\n= ‖h(t)‖2 ρ(t) + √ ρ(t)2 + ‖h(t)‖2 + ρ(t).\n(36)\nOn the other hand, for t large enough,\nd dt ‖w(t)‖ = −∇L(w(t))T w(t)‖w(t)‖\n≤ C1 N N∑ i=1 exp ( −w(t)Txi + ε‖w(t)‖ )( xi − ε w(t) ‖w(t)‖ )T w(t) ‖w(t)‖\n= C1 N N∑ i=1 exp\n( −ρ(t) ŵ\nTxi ‖ŵ‖ + h(t) Txi + ερ(t) + ε ‖h(t)‖2 ρ(t) + √ ρ(t)2 + ‖h(t)‖2\n)( ŵTxi − ε‖ŵ‖\n‖ŵ‖\n)\n≤ C2 N N∑ i=1 exp (−(εi − ε)ρ(t)) (εi − ε).\n(37)\nfor some constant C1, C2. Since ρ(t) goes to infinity and ‖h(t)‖ = o(ρ(t)), the derivation of ‖h(t)‖2 ρ(t)+ √ ρ(t)2+‖h(t)‖2\nwill o(ρ′(t)). Similar to equation 37, we can get the lower bound of ddt‖w(t)‖. In summary, we have\nC3 N exp (−(ε1 − ε)ρ(t)) (ε1 − ε) ≤ ρ′(t) ≤ C2 exp (−(ε1 − ε)ρ(t)) (εN − ε) (38)\nfor constant some C3. Hence we can conclude that\nlog t\nε1 − ε + C5 ≤ ρ(t) ≤\nlog t\nε1 − ε + C4, (39)\nfor some constant C4, C5. It results in ρ(t) = O(log t).\nWe have proven that w(t) will converge to the direction of ŵ. Now, we will show w(t) will have the same support vector with ŵ when t is large. It’s a key fact of proving Theorem 1. Lemma 5. There exists t0 such that w(t) will have the same support vectors with ŵ for t > t0.\nProof. Let S = {i : ŵTxi = ε1‖ŵ‖}, for i ∈ S, j /∈ S, we have\nw(t)T ‖w(t)‖ (xi − xj) =\n( ρ(t) ŵ‖ŵ‖ + h(t) )T ‖w(t)‖ (xi − xj)\n≤ 1‖w(t)‖\n( ρ(t) ŵT ‖ŵ‖ (xi − xj) + ‖h(t)‖‖xi − xj‖ ) (40)\nSince xi is the support vector of ŵ, ŵ T\n‖ŵ‖ (xi − xj) > 0. Then, ‖h(t)‖ = o(ρ(t)) shows that w(t)T ‖w(t)‖ (xi − xj) ≤ 0 for a large t. Then we get the conclusion." }, { "heading": "C PROOF OF THEOREM 1", "text": "In this section, we give a fully characterization to the proof of Theorem 1.\nRestate of Theorem 1. For any ε-strongly linearly separable data (Definition 1), and loss function `(·) satisfies Assumption 1 and 2. If the Assumption 3 holds, then the gradient flow iterates w(t) ,\ndw(t)\ndt = −∇L(w(t)) (41)\nsatisfies w(t) = ŵ ·O (log t) + h(t), (42)\nfor a large t. 
Here ŵ is the hard margin solution of SVM:\nŵ = arg min w∈Rd\n‖w‖ s.t.wTxi ≥ 1. (43)\n‖h(t)‖ is in the order of o(log t). Then\nlim t→∞ ∥∥∥∥ w(t)‖w(t)‖ − ŵ‖ŵ‖ ∥∥∥∥ ≤ O((log t)−(1+ε∗)), (44)\nwhere ε∗ = |S|ε(ε1−ε)N , {εi} is the sorted distance of data away from the hard margin solution of SVM, and S = {i : ŵTxi = ε1‖ŵ‖}; |S| is the number of elements in set S. Finally, iterates {wt} of adversarial training updated by GD with step size η will satisfy\n‖wk −w(kη)‖ ≤ O(η), (45)\nfor k ∈ N+. Then we can conclude\nlim t→∞ ∥∥∥∥ wt‖wt‖ − ŵ‖ŵ‖ ∥∥∥∥ ≤ limt→∞ ∥∥∥∥ w(tη)‖w(tη)‖ − ŵ‖ŵ‖ ∥∥∥∥+O(η). (46)\nProof. Let w(t) = ρ(t) ŵ‖ŵ‖ + h(t). Denoting the support vectors set by S, S = {i : ŵ Txi = ε1‖ŵ‖}. By the exponential tail of `′(u) (Assumption 2), there exists a t0, for t > t0, we have\n1\n2\nd dt ‖h(t)‖2\n= − ( ∇L(w(t)) + ρ′(t) ŵ‖ŵ‖ )T h(t) = −∇L(w(t))Th(t)\n≤ C N (∑ i∈S + ∑ i/∈S ) exp (−w(t)xi + ε‖w(t)‖) ( xi − ε w(t) ‖w(t)‖ )T h(t)\n= C\nN (∑ i∈S + ∑ i/∈S ) exp ( −ρ(t) ŵ Txi ‖ŵ‖ − h(t) Txi + ερ(t) + ‖h(t)‖2 ρ(t) + √ ρ(t)2 + ‖h(t)‖2 )( xi − ε w(t) ‖w(t)‖ )T h(t)\n≤ C N ∑ i∈S exp ( −ρ(t) ŵ Txi ‖ŵ‖ − h(t) Txi + ερ(t) )( xi − ε w(t) ‖w(t)‖ )T h(t)\n≤ − ε N ∑ i∈S exp (−‖h(t)‖‖xi‖ − (εi − ε)ρ(t)) ‖h(t)‖2 ρ(t) ,\n(47) 7Here we use a fact that xTi h(t) ≤ 0 for i ∈ S, due to ŵ is the hard margin classifier. In addition,\n1 + ‖h(t)‖2 ρ(t) + √ ρ(t)2 + ‖h(t)‖2 ≤ exp\n( ‖h(t)‖2\nρ(t) + √ ρ(t)2 + ‖h(t)‖2\n) . (48)\nWith this, we can choose a t0 such that\n∑ i∈S exp ( −ρ(t) ŵ Txi ‖ŵ‖ − h(t) Txi + ερ(t) )( xi − ε w(t) ‖w(t)‖ )T h(t)\n‖h(t)‖2 ρ(t) + √ ρ(t)2 + ‖h(t)‖2\n+ ∑ i/∈S exp (−w(t)xi + ε‖w(t)‖) ( xi − ε w(t) ‖w(t)‖ )T h(t) ≤ 0,\n(49)\nfor t > t0. This concludes the equation equation 47. Let ‖h(t)‖‖xi‖ be C(t), we can first derive that ‖h(t)‖ will converge to zero, then C(t) can be close to zero. By Gronwall’s inequality, we have\n‖h(t)‖2 ≤ ‖h(t0)‖2 exp (∫ t t0 −εe −C(u) Nρ(u) ∑ i∈S exp (−(εi − ε)ρ(u)) du ) , (50)\nC(t) can be arbitrary small when t > t0. Plugging this into equation 50, and combining Lemma 4, we have\n‖h(t)‖2 ≤ ‖h(t0)‖2 exp (∫ t t0 −ε(ε1 − ε) N log u ∑ i∈S u − ( εi−ε ε1−ε ) du )\n= ‖h(t0)‖2 exp ( − ∑ i∈S ε(ε1 − ε) N ∫ t t0 1 u log u du )\n= ‖h(t0)‖2 exp ( −|S|ε(ε1 − ε)\nN log log t ) = O(log t)−ε ∗ ,\n(51)\nwhere ε∗ = |S|ε(ε1−ε)N . Since we have\nw(t) ‖w(t)‖ = ρ(t)\nρ(t) + ‖h(t)‖ 2\nρ(t)\nŵ ‖ŵ‖ + h(t)\nρ(t) + ‖h(t)‖ 2\nρ(t)\n, (52)\n7Here we hide the constant C in the last inequality, which is decided by loss function l(·). C will usually smaller than 1 in fact.\nthen we see ∥∥∥∥ w(t)‖w(t)‖ − ŵ‖ŵ‖ ∥∥∥∥ ≤ 1− ρ(t) ρ(t) + ‖h(t)‖ 2\nρ(t)\n+ ‖h(t)‖ ρ(t) + ‖h(t)‖ 2\nρ(t)\n= ‖h(t)‖2 ρ(t)2 + ‖h(t)‖2 + ‖h(t)‖\nρ(t) + ‖h(t)‖ 2\nρ(t)\n= O (log)−(1+ε ∗)\n(53)\nwhen t goes to infinity. Since h(t) ≤ O(log t)−ε∗ , for ε∗ = |S|ε(ε1−ε)N . Combining Lemma 2, we can get the conclusion." }, { "heading": "D PROOF OF THEOREM 2", "text": "Proof. The xk,i will either satisfies |w∗Txk,i| ≤ ε‖w∗‖ nor |w∗Txk,i| ≥ ε‖w∗‖. We know the first inequality represents the distance between xk,i and w∗ is smaller than ε. Then for w∗Txk,i ≥ ε‖w∗‖, due to ε ≥ 12 maxi,j ‖xk,i − xk′,j‖ for any i, j, there exists x ′ k′,j ∈\nB2(xk′,j , ε) ⋂ B2(xk,i, ε) with k′, k = 1, 2 and k′ 6= k. By triangle inequality, the distance between x′k′,j andw ∗ is smaller than w ∗Txk,i ‖w∗‖ − ε. Since xk′,j locates in different category with xk,i, by the monotone decreasing property of `(·), we have\n`(−w∗Txk,i + ε‖w∗‖) ≤ `(x′k′j) ≤ max ‖x−xk′,j‖≤ε `(w∗Tx). (54)\nHence we have\n`(−w∗Txk,i + ε‖w∗‖) ≤ 1\nNk′ Nk′∑ j=1 max ‖x−xk′,j‖≤ε `(w∗Tx− ε‖w∗‖). 
(55)\nThen we have\n$N_2\,\ell\left(-|w^{*T} x_{1,i}| + \varepsilon\|w^*\|\right) + N_1\,\ell\left(-|w^{*T} x_{2,i}| + \varepsilon\|w^*\|\right) \le (N_1+N_2)\,\ell(0)$ (56)\nfor those $w^{*T} x_{k,i} \ge \varepsilon\|w^*\|$, provided $w^*$ correctly predicts, in each category, a point farther from the margin than ε; this holds because $w^*$ is a minimum, so $L(w^*) \le L(0)$. On the other hand, for $w^{*T} x_{k,i} \le -\varepsilon\|w^*\|$ we can immediately derive\n$\ell(w^{*T} x_{k,i}) \le \ell(-w^{*T} x_{k',j} - \varepsilon\|w^*\|)$ (57)\nby the triangle inequality, where $w^{*T} x_{k',j} \ge 0$. Then we get the conclusion." }, { "heading": "E EXTRA EXPERIMENTS", "text": "E.1 ADVERSARIAL TRAINING IN lp SPACE\nOur conclusions are obtained for adversarial training in l2 space, while adversarial training can be conducted in a more general lp space (Goodfellow et al., 2014; Madry et al., 2018). It is meaningful to verify whether these conclusions still hold in lp space. Hence, we extend our experiments to lp space.\nThe adversarial training objective with a linear classifier can be formulated in lp space as\n$L(w) = \frac{1}{N}\sum_{i=1}^{N}\max_{\|x - x_i\|_p \le \varepsilon}\ell(w^T x)$, (58)\nwhere $\|\cdot\|_p$ is the lp norm. It has the explicit formulation\n$L(w) = \frac{1}{N}\sum_{i=1}^{N}\ell\left(w^T x_i - \varepsilon\|w\|_q\right)$, (59)\nwhere $1/p + 1/q = 1$. We conduct experiments for p = 4 and p = ∞.\nWe empirically verify that adversarial training in lp space accelerates the convergence to the hard margin solution of SVM on ε-strongly linearly separable data, and that adversarial training benefits from more support vectors and an appropriate adversarial radius ε. For non-ε-strongly linearly separable data, we validate that adversarial training in lp space is stable to outliers while producing a classifier with low confidence on each data point. All the experiments here follow the settings in Section 5, except that p is chosen as 4 and ∞ respectively. In addition, note that the distance between the support vectors on $\mathbf{1}_d^T x = \pm d$ and the hard margin classifier $\mathbf{1}_d$ is $d/\|\mathbf{1}_d\|_q$ in lp space, so we should adjust the adversarial radius ε for different p.\nWe first verify the conclusions for ε-strongly linearly separable data. The results for p = 4 and p = ∞ are shown in Figures 5 and 6 respectively. Since the distance between the support vectors and the hard margin solution of SVM is $d/\|\mathbf{1}_d\|_q = d^{1/p}$, for p = 4, p = ∞ and d = 2 we have $2^{1/4}\approx 1.18$ and $2^0 = 1$. From the results, we see that an appropriate adversarial radius (ε = ε1/2) and more support vectors are helpful for adversarial training even in lp space.\nWe now turn to non-ε-strongly linearly separable data and verify our conclusions about the stability and confidence of adversarial training. The experimental settings are again the same as in Section 5, except for the choice of p. The results are shown in Figures 7 and 8. Our conclusions still hold in lp space, i.e., the classifier obtained by adversarial training is stable to outliers, and its confidence stays at a low level for a large ε.\nAs a matter of fact, we can generalize our theoretical results to lp space by following the methods in this paper. The inner product $\langle\cdot,\cdot\rangle$ used for the linear classifier is derived from $\|\cdot\|_2$; l2 is a Hilbert space, but lp space is not, so this inner product does not match the lp geometry. Hence, we need to add some extra boundedness conditions to extend our conclusions to lp space.
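The dual-norm closed form (59) can be sanity-checked numerically. The snippet below (ours; the sample counts are arbitrary) compares it against a random search over the l_p ball; the random search only lower-bounds the maximum, so it should never exceed the closed form.

```python
# A numeric sanity check (ours) of the dual-norm closed form (59): for the l_p
# attack on a linear score, the worst case is loss(w^T x_i - eps * ||w||_q) with
# 1/p + 1/q = 1 (labels absorbed into x_i). Random search in the l_p ball can
# only lower-bound the maximum, so it should never exceed the closed form.
import numpy as np

rng = np.random.default_rng(4)
loss = lambda u: np.log1p(np.exp(-u))
w, x_i, eps = rng.normal(size=6), rng.normal(size=6), 0.4

for p, q in [(2.0, 2.0), (4.0, 4.0 / 3.0), (np.inf, 1.0)]:
    wq = np.sum(np.abs(w) ** q) ** (1.0 / q)          # dual norm ||w||_q
    closed = loss(w @ x_i - eps * wq)
    best = -np.inf
    for _ in range(50_000):
        d = rng.normal(size=6)
        dp = np.abs(d).max() if np.isinf(p) else np.sum(np.abs(d) ** p) ** (1 / p)
        best = max(best, loss(w @ (x_i + eps * d / dp)))
    print(f"p={p}: closed form {closed:.4f} >= random search {best:.4f}")
```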
E.2 ADVERSARIAL TRAINING FOR NEURAL NETWORKS\nOur conclusions are derived for the linear classifier, but Du et al. (2019); Jacot et al. (2018); Lee et al. (2019b); Arora et al. (2019) suggest that over-parameterized neural networks of sufficient width (or infinite width) evolve as linear models with the Neural Tangent Kernel (NTK). Hence our conclusions can, to some extent, reflect the behavior of adversarial training for neural network models. In this subsection, we empirically verify our conclusions about adversarial training with a neural network model.\nWe use CIFAR10 (Krizhevsky et al., 2012) for our experiments. CIFAR10 is a dataset with 10 categories; to simplify the experiments, we keep only the first two categories, which turns the task into a binary classification problem. The model used for adversarial training is ResNet20 (He et al., 2015), a CNN with 20 layers. The experiments are conducted in l2 and l∞ space.\nSince we cannot compute the exact distance between the data and the classifier, we use the loss on perturbed training data⁸ to represent the robustness of the trained model. The perturbations are found by running 10 steps of projected gradient descent (PGD). We perform adversarial training following the settings in Madry et al. (2018): we use PGD with 10 steps to find $\arg\max_{x:\|x - x_i\|\le\varepsilon}\ell(x, \theta)$ for each $x_i$. All models are trained by stochastic gradient descent with learning rate 0.1 and momentum 0.9 for 100 epochs, and the loss function is cross entropy.\nWe first validate our conclusions for ε-strongly linearly separable data. Since a neural network is not a linear model, it is hard to construct extra support vectors as in Section 5; hence we only verify the conclusion that the iterates of adversarial training converge faster to a robust solution and benefit from an appropriate choice of the adversarial radius ε. To compare the influence of ε, we choose ε = 0.25, 1.0, 2.0 in l2 space and ε = 2/255, 8/255, 16/255 in l∞ space; the perturbation sizes follow Madry et al. (2018). The perturbations on the training set⁹ used to evaluate the robustness of the trained model are 1.0 and 8/255 in l2 and l∞ space respectively. The experimental results in l2 and l∞ space are shown in Figures 9a¹⁰ and 10a. A curve label such as "Adv: 0.25 varepsilon" means adversarial training with ε = 0.25. From the figures, we see that the models returned by adversarial training with ε = 1 and ε = 8/255 are the most robust in l2 and l∞ space respectively. Hence, the conclusion that adversarial training benefits from an appropriate choice of ε also holds for neural networks.\nThen, for non-ε-strongly linearly separable data, we discuss the influence of outliers and of a large ε. First, we construct 50 outliers $\{x_i\}_{i=1}^{100}\sim\mathcal{N}(\pm 0.078125\cdot\mathbf{1}_{32\times 32\times 3},\ 0.045\cdot I_{32\times 32\times 3})$ in each category, in both l2 and l∞ space. The influence of the outliers on adversarial training in l2 and l∞ space is shown in Figures 9a and 10a; "Adv with outliers: 0.25 varepsilon" means adversarial training with the outliers added to the dataset and ε = 0.25. From the results, we see that the outliers barely affect the performance of adversarial training.\n⁸We focus only on the training data because we do not discuss the generalization of the model obtained by adversarial training. ⁹The perturbations are recalculated after each epoch of training. ¹⁰We only list the results of adversarial training, because the loss on perturbed data under standard training does not converge (it stays larger than 2.0).
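As a reference for the PGD setup described above, here is a minimal PyTorch sketch (ours, not the authors' code) of the 10-step inner maximization and one adversarial training step, written for the l∞ case; an l2 variant would normalize the gradient instead of taking its sign.

```python
# A minimal PyTorch sketch (ours, not the authors' code) of the 10-step PGD
# inner maximization and one adversarial training step, for the l_inf case.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, step=2 / 255, n_steps=10):
    """Approximate argmax_{||x'-x||_inf <= eps} loss(x', theta) by projected ascent."""
    x_adv = x.detach() + torch.empty_like(x).uniform_(-eps, eps)  # random start
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()                          # ascent step
            x_adv = x.detach() + (x_adv - x.detach()).clamp(-eps, eps)  # project
            x_adv = x_adv.clamp(0.0, 1.0)                               # valid range
    return x_adv.detach()

def adv_train_step(model, optimizer, x, y, eps):
    """One step of objective (1): descend the loss evaluated at the PGD point."""
    x_adv = pgd_attack(model, x, y, eps=eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```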
Hence, we can conclude that adversarial training is still stable to outliers, even when the model is a neural network. On the other hand, we choose ε equal to 4 and 32/255 in l2 and l∞ space respectively to examine the confidence of the classifier returned by adversarial training. The distributions of confidence under adversarial training in l2 and l∞ space are shown in Figures 9b and 10b respectively. The results reveal that the confidence of a neural network trained by adversarial training with a large ε stays at a fairly low level on every data point.\nIn summary, although our conclusions about adversarial training are derived for linear classifiers, the empirical results suggest that they can, to some extent, be extended to neural networks. Thus, a theoretical exploration for neural networks is an important direction for future work." } ]
2019
null
SP:8939f2377046904b82dd2b219dc2df9b008078b4
[ "The authors point out that CNNs can develop collapsed channels that limit their capacity. They propose to remedy this with batch decorrelation (BD), which focuses on ensuring that channels play an equal role in the feature map and are less likely to collapse. The claim is supported with experiments on CIFAR10, ImageNet and COCO.", "This paper studies the channel-collapsed problem in CNNs using 'BN+ReLU' . The Channel Equilibrium block which consists of batch decorrelation branch and adaptive instance inverse branch are proposed to reduce the channel-level sparsity. Experiments on ImageNet and COCO demonstrate that the proposed CE block can achieve higher performance than the conventional CNNs by introducing little computational complexity. The author also discuss the relationship between the proposed method and Nash Equilibrium." ]
Convolutional Neural Networks (CNNs) typically treat normalization methods such as batch normalization (BN) and rectified-linear-like activation functions (e.g. ReLU) as building blocks. Previous work pointed out that learning feature channels with equal magnitudes is important for a CNN to achieve good generalization ability. However, the above "Norm+ReLU-like" basic block often learns inhibited channels that have small magnitudes (i.e. contribute little to the feature representation), impeding both the learning and the generalization ability of CNNs. This problem is seldom explored in the literature. To mitigate inhibited channels and encourage channels to contribute equally to the feature representation, we propose a new building block, Channel Equilibrium (CE), which is able to prevent inhibited channels both in experiments and in theory. CE has several appealing properties. First, CE can be stacked after many different normalization methods such as BN and Group Normalization (GN), as well as integrated into many advanced CNN architectures such as ResNet and MobileNet V2 to form a series of CE networks (CENets), outperforming existing network architectures. Second, CE has an interesting connection with the Nash Equilibrium, a well-known solution of a non-cooperative game. Third, extensive experiments show that CE achieves state-of-the-art results on various challenging benchmarks such as ImageNet and COCO. The models and codes will be released.
[]
[ { "authors": [ "Horace B Barlow" ], "title": "Possible principles underlying the transformation of sensory messages", "venue": "Sensory communication,", "year": 1961 }, { "authors": [ "Yoshua Bengio", "James S Bergstra" ], "title": "Slow, decorrelated features for pretraining complex cell-like networks. In Advances in neural information processing", "venue": null, "year": 2009 }, { "authors": [ "Dario A Bini", "Nicholas J Higham", "Beatrice Meini" ], "title": "Algorithms for the matrix pth root", "venue": "Numerical Algorithms,", "year": 2005 }, { "authors": [ "Yue Cao", "Jiarui Xu", "Stephen Lin", "Fangyun Wei", "Han Hu" ], "title": "Gcnet: Non-local networks meet squeeze-excitation networks and beyond", "venue": null, "year": 1904 }, { "authors": [ "Kai Chen", "Jiaqi Wang", "Jiangmiao Pang", "Yuhang Cao", "Yu Xiong", "Xiaoxiao Li", "Shuyang Sun", "Wansen Feng", "Ziwei Liu", "Jiarui Xu" ], "title": "Mmdetection: Open mmlab detection toolbox and benchmark", "venue": null, "year": 1906 }, { "authors": [ "Djork-Arné Clevert", "Thomas Unterthiner", "Sepp Hochreiter" ], "title": "Fast and accurate deep network learning by exponential linear units (elus)", "venue": "arXiv preprint arXiv:1511.07289,", "year": 2015 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "arXiv preprint arXiv:1803.03635,", "year": 2018 }, { "authors": [ "Xavier Glorot", "Antoine Bordes", "Yoshua Bengio" ], "title": "Deep sparse rectifier neural networks", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Kaiming He", "Georgia Gkioxari", "Piotr Dollár", "Ross Girshick" ], "title": "Mask r-cnn", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Yihui He", "Xiangyu Zhang", "Jian Sun" ], "title": "Channel pruning for accelerating very deep neural networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Nicholas J Higham" ], "title": "Newton’s method for the matrix square root", "venue": "Mathematics of Computation,", "year": 1986 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Lei Huang", "Dawei Yang", "Bo Lang", "Jia Deng" ], "title": "Decorrelated batch normalization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Lei Huang", "Yi Zhou", "Fan Zhu", "Li Liu", "Ling Shao" ], "title": "Iterative normalization: Beyond standardization towards efficient whitening", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Amir Laufer", "Amir Leshem", "Hagit Messer" ], "title": "Game theoretic aspects of distributed spectral coordination with application to dsl networks", "venue": "arXiv preprint cs/0602014,", "year": 2006 }, { "authors": [ "Amir Leshem", "Ephraim Zehavi" ], "title": "Game theory and the frequency selective interference channel", "venue": "IEEE Signal 
Processing Magazine,", "year": 2009 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Zhuang Liu", "Jianguo Li", "Zhiqiang Shen", "Gao Huang", "Shoumeng Yan", "Changshui Zhang" ], "title": "Learning efficient convolutional networks through network slimming", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Lu Lu", "Yeonjong Shin", "Yanhui Su", "George Em Karniadakis" ], "title": "Dying relu and initialization: Theory and numerical examples", "venue": null, "year": 1903 }, { "authors": [ "Ping Luo", "Jiamin Ren", "Zhanglin Peng", "Ruimao Zhang", "Jingyu Li" ], "title": "Differentiable learning-tonormalize via switchable normalization", "venue": "arXiv preprint arXiv:1806.10779,", "year": 2018 }, { "authors": [ "Andrew L Maas", "Awni Y Hannun", "Andrew Y Ng" ], "title": "Rectifier nonlinearities improve neural network acoustic models", "venue": "In Proc. icml,", "year": 2013 }, { "authors": [ "Dushyant Mehta", "Kwang In Kim", "Christian Theobalt" ], "title": "On implicit filter level sparsity in convolutional neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Poorya Mianjy", "Raman Arora", "Rene Vidal" ], "title": "On the implicit bias of dropout", "venue": "arXiv preprint arXiv:1806.09777,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "arXiv preprint arXiv:1802.05957,", "year": 2018 }, { "authors": [ "Ari S Morcos", "David GT Barrett", "Neil C Rabinowitz", "Matthew Botvinick" ], "title": "On the importance of single directions for generalization", "venue": "arXiv preprint arXiv:1803.06959,", "year": 2018 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In Proceedings of the 27th international conference on machine learning", "year": 2010 }, { "authors": [ "Xingang Pan", "Xiaohang Zhan", "Jianping Shi", "Xiaoou Tang", "Ping Luo" ], "title": "Switchable whitening for deep representation learning", "venue": "Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Josip Pečarić" ], "title": "Power matrix means and related inequalities", "venue": "Mathematical Communications,", "year": 1996 }, { "authors": [ "Ramprasaath R Selvaraju", "Michael Cogswell", "Abhishek Das", "Ramakrishna Vedantam", "Devi Parikh", "Dhruv Batra" ], "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. 
{ "authors": [ "Yi Sun", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deeply learned face representations are sparse, selective, and robust", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Instance normalization: The missing ingredient for fast stylization", "venue": "arXiv preprint arXiv:1607.08022,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Normalization is an important technique for a wide range of tasks such as image classification (Ioffe & Szegedy, 2015), object detection (He et al., 2017a; Wu & He, 2018), and image generation (Miyato et al., 2018). In recent years, a lot of work improved normalization methods, such as batch normalization (BN) (Ioffe & Szegedy, 2015), layer normalization (LN) (Ba et al., 2016) and switchable normalization (SN) (Luo et al., 2018). These methods are often used together with the ReLU-like activation functions such as ReLU (Glorot et al., 2011; Nair & Hinton, 2010), ELU (Clevert et al., 2015) and Leaky ReLU (LReLU) (Maas et al., 2013), making the “Norm+ReLU-like” module become one of the most widely-used building blocks of modern CNNs. This work investigates and alleviates the inhibited channels emerged in the “Norm+ReLU-like” building block, which consists of a normalization layer and a ReLU-like activation function given by\nyncij = g(x̃ncij), x̃ncij = γcx̄ncij + βc, (1)\nwhere subscript n, c, i, and j denote indices of a sample, a channel, height and width of a feature channel respectively. For instance, yncij indicates output value of the location (i, j) in the c-th channel of the n-th sample. And x̃ncij and x̄ncij represent normalized channel features and standardized channel features respectively. γ and β are two vectors of parameters, where each element re-scales and re-shifts the standardized features for each channel c. Moreover, g(·) denotes a ReLU-like activation function.\nAs pointed out in (Morcos et al., 2018), a CNN that generalizes well would have its channels contributed equally in its feed-forward computations. We term this desired property “channel equalization”. However, a recent study disclosed a critical problem of the “Norm+ReLU-like” builiding block, known as “channel collapse”, where certain channels always produce small output values given any input. For example, the lottery hypothesis (Frankle & Carbin, 2018) claimed that one over-parameterized CNN always contains unimportant channels that contribute little to the network’s prediction. This paper shows that these unimportant channels are inhibited channels, which usually\nassociated with small values of γ in Eqn.(1) or small values of the channel features (In this paper, inhibited channel is defined as channel whose all feature values are less than 1e− 2). For example, as shown in Fig.1(a,b), the inhibited channels exist in many “Norm+ReLU-like” basic block such as ‘LN+ReLU’, ‘BN+ELU’ and ‘BN+ReLU’. This phenomenon has motivated many investigations to directly remove these inhibited channels, such as network slimming (Liu et al., 2017; Yu et al., 2018) and channel pruning (He et al., 2017b). However, disequilibrium among channels caused by the inhibited channels would do harm to generalization ability of the network and simply removing the inhibited channels that are inactive during training does not help improve learning capacity.\nInstead of removing the inhibited channels, we present an alternative perspective by proposing a novel building block, named Channel Equilibrium (CE), to recover and equalize the inhibited channels by encouraging channels to contribute equally in the feature representation learning process, thus enhancing the representation and generalization of CNNs. To this end, a key observation from Eqn.(1) is that the dependency (covariance) matrix of all the feature channels after normalization is scaled by γγT. Let this covariance matrix be Σ. 
Instead of removing the inhibited channels, we present an alternative perspective by proposing a novel building block, named Channel Equilibrium (CE), to recover and equalize the inhibited channels by encouraging all channels to contribute equally to the feature representation learning process, thus enhancing the representation and generalization of CNNs. To this end, a key observation from Eqn.(1) is that the dependency (covariance) matrix of all the feature channels after normalization is scaled by $\gamma\gamma^{\mathsf{T}}$. Let this covariance matrix be $\Sigma$. We will see that by applying a decorrelation operator, i.e. $\Sigma^{-\frac{1}{2}}$, we can not only effectively eliminate the correlation between the channel features and the magnitude of $\gamma$, but also equalize the channels’ magnitudes (Barlow et al., 1961; Bengio & Bergstra, 2009). This operator enables all the channels to play an equal role in the computations of a CNN, improving its generalization ability. For example, as shown in Fig.1, the VGGNet (Simonyan & Zisserman, 2014) equipped with CE effectively prevents inhibited channels and achieves channel equalization in various “Norm+ReLU-like” blocks, consistently improving their recognition performance.

The main contributions of this work are three-fold. First, we introduce an efficient and effective building block, Channel Equilibrium (CE), which encourages channel equalization and enhances representation learning in CNNs. Second, CE blocks can be stacked after common normalizers and plugged into various advanced architectures, consistently improving their performance by a large margin. For example, CE can be integrated into ResNet and MobileNet V2, forming a series of CE-Networks by replacing the ordinary ‘BN-ReLU’ block with a ‘BN-CE-ReLU’ block, which introduces only a subtle extra computational cost. As a result, CE-ResNet50 and CE-MobileNet V2 outperform their counterparts by 1.7% and 2.1% top-1 accuracy with nearly the same FLOPs. We also show that combining CE with synchronization across GPUs increases the AP metric on the MS-COCO dataset to 42.0, surpassing its counterpart by 3.4. Third, the equalized feature representations learned by CENets transfer better to other tasks such as object detection and segmentation." }, { "heading": "2 RELATED WORK", "text": "Channel equalization. Channel equalization means that the channels in a layer of a CNN contribute equally to the network’s computation. The success of two of the most commonly used regularization techniques, i.e. BN (Ioffe & Szegedy, 2015) and Dropout (Srivastava et al., 2014), is attributed to channel or neuron equalization. For example, Mianjy et al. (2018) showed that Dropout makes the norms of the incoming/outgoing weight vectors of all the hidden nodes equal, indicating a kind of equalization between neurons. Moreover, Morcos et al. (2018) pointed out that BN implicitly discourages reliance on single directions, indicating that equalizing different channels can enhance the generalization of learned feature representations. Note that the squeeze-and-excitation (SE) network (Hu et al., 2018) is a pioneering work that explicitly models interdependencies among channels through network design. However, SE selectively emphasizes informative channels and suppresses less useful ones. In contrast, the proposed CE block encourages all the channels to play an equal role in the network’s computation, which, as will be shown, can be linked with Nash Equilibrium. More related work on sparsity in ReLU and on normalization methods is provided in Sec.A of the Appendix." }, { "heading": "3 METHOD", "text": "In this section, we first review normalization methods and then introduce the proposed Channel Equilibrium (CE) block. CE contains two complementary branches, i.e. batch decorrelation (BD) and adaptive instance inverse (AII). We show how BD and AII benefit from each other through the parameter $\gamma$ and how CE is linked with Nash Equilibrium.
Notations. For CNNs, we use $x \in \mathbb{R}^{N \times C \times H \times W}$ to represent the features in a layer; specifically, $x_{ncij}$ denotes the pixel $(i, j)$ in the $c$-th channel of the $n$-th sample. Sometimes we omit the subscript $n$ and write $x_{cij}$ for clarity of notation. $x_{nij} \in \mathbb{R}^C$ is obtained by stacking the elements of all channels at location $(i, j)$ of sample $n$ into a column vector. $\mathrm{Diag}(\cdot)$ returns a matrix with the given diagonal and zero off-diagonal entries, and $\mathrm{diag}(\cdot)$ extracts the diagonal of the given matrix. $\gamma, \beta \in \mathbb{R}^C$ are the normalization parameters." }, { "heading": "3.1 OVERVIEW OF NORMALIZATION", "text": "Normalization is usually employed after convolution layers to stabilize the training of CNNs. Given a hidden feature $x \in \mathbb{R}^{N \times C \times H \times W}$, a normalizer first standardizes it to $\bar{x}$ and then maps it to $\tilde{x}$ by an affine transformation, written as

$$\tilde{x}_{ncij} = \gamma_c \bar{x}_{ncij} + \beta_c, \qquad \bar{x}_{ncij} = (x_{ncij} - \mu_s)/\sigma_s \qquad (2)$$

where $s \in \Omega = \{\mathrm{IN}, \mathrm{BN}, \mathrm{LN}, \cdots\}$ indicates a normalizer and $\mu_s, \sigma_s$ are the mean and standard deviation of the given normalizer. For simplicity, the $\epsilon$ in the original formulation is omitted (Ioffe & Szegedy, 2015). From Eqn.(2), we claim that normalization leads to an unequal feature representation on a per-channel basis, based on the fact that commonly used normalizers like IN and BN are performed channel-wise. It is known that the importance of a channel is quantified by the magnitude of the learned parameter $\gamma$, since the channel features are scaled by $\gamma$ channel by channel. Previous work (Frankle & Carbin, 2018; Mehta et al., 2019) revealed that inhibited channels emerge when the associated $\gamma$ or feature map becomes small (Mehta et al., 2019; Lu et al., 2019). Obviously, the inhibited channels cause disequilibrium among channels, resulting in limited generalization ability. To alleviate such disequilibrium, the Channel Equilibrium (CE) block is proposed in the following section." }, { "heading": "3.2 CHANNEL EQUILIBRIUM (CE) BLOCK", "text": "A Channel Equilibrium (CE) block is a computational unit that aims to equalize the feature representation capacity among channels. To this end, a decorrelation method is adopted. Different from previous methods (Huang et al., 2018; 2019), which decorrelate features by a single batch-estimated covariance matrix $\Sigma$, the proposed method brings in an adaptive instance variance, $S_n$, on the diagonal of the covariance matrix $\Sigma$, considering that channel dependency is specific to each input (Hu et al., 2018), as formulated in the following:

$$D_n = \lambda \Sigma + (1-\lambda)\,\mathrm{Diag}(S_n), \qquad S_n = F(\sigma^2(\tilde{x}_n)), \qquad (3)$$

where the subscript $n$ is the sample index, $\lambda \in (0, 1)$ is a trainable ratio used to switch between batch and instance statistics, $F: \mathbb{R}^C \to \mathbb{R}^C$ is a transformation conditioned on the current input $\tilde{x}$, and $\sigma^2(\tilde{x}_n)$ computes the instance variance of $\tilde{x}_n$ within each channel. To address channel disequilibrium, the CE block decorrelates the feature maps after normalization using $D_n^{-\frac{1}{2}}$. Further, the Jensen inequality for matrix functions (Pečarić, 1996) can be employed to obtain a relaxed decorrelation operator $D_n^{-\frac{1}{2}}$:

$$D_n^{-\frac{1}{2}} = \big[\lambda \Sigma + (1-\lambda)\,\mathrm{Diag}(S_n)\big]^{-\frac{1}{2}} \preceq \lambda \Sigma^{-\frac{1}{2}} + (1-\lambda)\,[\mathrm{Diag}(S_n)]^{-\frac{1}{2}}, \qquad (4)$$

where $A \preceq B$ indicates that $B - A$ is positive semi-definite. We introduce this relaxation for two reasons. (1) Computation reduction: it requires less computation in each training step, since the relaxed form only needs to calculate the inverse square root $\Sigma^{-\frac{1}{2}}$ once, and the other branch, $[\mathrm{Diag}(S_n)]^{-\frac{1}{2}}$, is easy to compute. (2) Inference acceleration: $\Sigma^{-\frac{1}{2}}$ is a moving-average statistic at inference time, which can be absorbed into the previous layer, thereby enabling fast inference. Note that Eqn.(4) transforms the combination of the covariance and the adaptive instance variance into the combination of their inverse square roots.

In the following, we refer to $\Sigma^{-\frac{1}{2}}$ in Eqn.(4) as batch decorrelation (BD) and to $[\mathrm{Diag}(S_n)]^{-\frac{1}{2}}$ as adaptive instance inverse (AII). The former decorrelates channels by a batch covariance, while the latter adjusts the extent of the inverse for each channel and instance in an adaptive manner. Integrating both of them yields the forward representation of the CE block:

$$p_{nij} = D_n^{-\frac{1}{2}}\big(\mathrm{Diag}(\gamma)\bar{x}_{nij} + \beta\big) \qquad (5)$$

where $p_{nij} \in \mathbb{R}^C$ denotes the output of CE, as illustrated in Fig.2(b). Since CE is performed after the normalization layer, BN is taken as an example to introduce these two branches in the following sections.
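As an illustration of Eqn.(4) and Eqn.(5), the following PyTorch sketch combines the two branches; it is a hedged reconstruction rather than the authors' implementation, and the tensors `inv_sqrt_cov` and `inv_sqrt_inst` are assumed to come from the BD and AII branches sketched in the next two subsections.

```python
import torch

def ce_forward(x_tilde, inv_sqrt_cov, inv_sqrt_inst, lam):
    """Channel Equilibrium forward pass, Eqn.(5) under the relaxation of Eqn.(4).

    x_tilde:       (N, C, H, W) output of the preceding normalization layer
    inv_sqrt_cov:  (C, C) batch statistic Sigma^{-1/2} from the BD branch
    inv_sqrt_inst: (N, C) diagonal of [Diag(S_n)]^{-1/2} from the AII branch
    lam:           scalar tensor, trainable ratio in (0, 1)
    """
    lam = lam.clamp(0.0, 1.0)
    n, c, h, w = x_tilde.shape
    # BD branch: one shared linear map over the channel dimension
    bd = torch.einsum('dc,nchw->ndhw', inv_sqrt_cov, x_tilde)
    # AII branch: a per-sample, per-channel rescaling
    aii = inv_sqrt_inst.view(n, c, 1, 1) * x_tilde
    return lam * bd + (1.0 - lam) * aii
```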
" }, { "heading": "3.2.1 BATCH DECORRELATION (BD)", "text": "Although much previous work (Huang et al., 2018; 2019; Pan et al., 2019) has investigated whitening methods based on the covariance matrix, all of them are applied after the convolution layer. Thus, inhibited channels still exist, since the whitened channel features are also scaled by $\gamma$. Instead, the decorrelation in CE is applied after the normalization layer to equalize the magnitudes of all channels. Consider a tensor $\tilde{x}$ after a BN layer; it can be reshaped as $\tilde{x} \in \mathbb{R}^{C \times M}$ with $M = N \cdot H \cdot W$. Then the covariance matrix $\Sigma$ of $\tilde{x}$ can be written as (details are presented in Sec.B of the Appendix)

$$\Sigma = \gamma\gamma^{\mathsf{T}} \odot \frac{1}{M}\bar{x}\bar{x}^{\mathsf{T}} \qquad (6)$$

where $\bar{x}$ is the standardized feature with zero mean and unit variance and $\odot$ indicates element-wise multiplication. Eqn.(6) implies that the covariance matrix $\Sigma$ of $\tilde{x}$ can be decomposed into two parts. The first part depends on the normalization parameter $\gamma$, and the second part is the correlation matrix of $\tilde{x}$. It is observed that $\Sigma_{ij}$, which represents the dependency between the $i$-th channel and the $j$-th channel, is scaled by $\gamma_i\gamma_j$ after BN is applied.

The Batch Decorrelation (BD) branch requires computing $\Sigma^{-\frac{1}{2}}$, which usually involves eigen-decomposition or SVD and hence heavy computation (Huang et al., 2018). Instead, we adopt an efficient approach, Newton's iteration, to obtain $\Sigma^{-\frac{1}{2}}$ (Bini et al., 2005; Higham, 1986). Given the covariance matrix $\Sigma$, Newton's iteration calculates $\Sigma^{-\frac{1}{2}}$ by the following iterations:

$$\Sigma_0 = I, \qquad \Sigma_k = \frac{1}{2}\big(3\Sigma_{k-1} - \Sigma_{k-1}^3\Sigma\big), \quad k = 1, 2, \cdots, T, \qquad (7)$$

where $T$ is the iteration number ($T = 3$ in our experiments). Note that the convergence of Eqn.(7) is guaranteed if $\|\Sigma\|_2 < 1$ (Bini et al., 2005). To this end, $\Sigma$ is normalized as $\Sigma/\mathrm{tr}(\Sigma)$, where $\mathrm{tr}(\cdot)$ is the trace operator (Huang et al., 2019). In this way, the normalized covariance matrix is written as $\Sigma = \frac{\gamma\gamma^{\mathsf{T}}}{\|\gamma\|_2^2} \odot \frac{1}{M}\bar{x}\bar{x}^{\mathsf{T}}$. To sum up, the batch decorrelation branch first calculates a normalized covariance matrix and then applies Newton's iteration to obtain its inverse square root, which avoids much of the computational cost of SVD in the training stage. Furthermore, the BD branch can be merged into the convolutional layers in the inference stage, adding only marginal extra computation.
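A minimal PyTorch sketch of this branch is given below; it assumes the input is the output of a BN layer with shape $(N, C, H, W)$ and mirrors Eqns.(6)-(7), including the trace normalization, but it is an illustration rather than the authors' code.

```python
import torch

def bd_inverse_sqrt(x_tilde: torch.Tensor, n_iter: int = 3, eps: float = 1e-5):
    """Batch decorrelation branch: Newton's iteration of Eqn.(7).

    Returns the inverse square root of the trace-normalized covariance of
    the (already normalized) features, as described in Section 3.2.1.
    """
    n, c, h, w = x_tilde.shape
    flat = x_tilde.permute(1, 0, 2, 3).reshape(c, -1)   # (C, M), M = N*H*W
    flat = flat - flat.mean(dim=1, keepdim=True)
    sigma = flat @ flat.t() / flat.shape[1]             # covariance of x_tilde, Eqn.(6)
    sigma = sigma / (torch.trace(sigma) + eps)          # ensures convergence of Eqn.(7)
    p = torch.eye(c, device=x_tilde.device, dtype=x_tilde.dtype)
    for _ in range(n_iter):
        # Sigma_k = (3 Sigma_{k-1} - Sigma_{k-1}^3 Sigma) / 2
        p = 0.5 * (3.0 * p - p @ p @ p @ sigma)
    return p
```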
" }, { "heading": "3.2.2 ADAPTIVE INSTANCE INVERSE (AII)", "text": "Channel dependencies are specific to each sample. Consequently, a conditional decorrelation is desired for each sample. The adaptive instance inverse (AII) branch only uses the diagonal entries to model channel dependencies, as shown in Eqn.(3). Since a diagonal matrix can be inverted easily, this approach avoids the heavy computation in Eqn.(4).

To construct the AII branch, we analyze its input (the output of a BN layer), which is formulated as $\tilde{x}_{ncij} = \gamma_c \bar{x}_{ncij} + \beta_c$. The input of AII is the instance variance of each channel (details are provided in Appendix Sec.B),

$$\sigma^2_{nc} = \gamma_c^2 \frac{(\sigma^2_{\mathrm{IN}})_{nc}}{(\sigma^2_{\mathrm{BN}})_c} \qquad (8)$$

where $\sigma^2_{\mathrm{IN}}$ and $\sigma^2_{\mathrm{BN}}$ represent the variances in IN and BN, respectively. Their ratio measures the relative fluctuation, i.e. how much the instance statistic deviates from the batch-estimated statistic. Similar to Eqn.(6), the input of AII is also scaled by $\gamma_c^2$.

The AII branch takes $\sigma^2_{nc}$ as input and computes an adaptive instance inverse, i.e. $[\mathrm{Diag}(S_n)]^{-\frac{1}{2}}$, which needs to satisfy two requirements. First, dependencies among channels should be embedded in $[\mathrm{Diag}(S_n)]^{-\frac{1}{2}}$ for each sample. Second, the output of AII should follow the same philosophy as the inverse square root of the variance or covariance in the BD branch. To achieve this, a reparameterization trick is employed to generate the adaptive instance inverse. Let $s$ be an estimate of the variance; the AII branch can then be reparameterized as

$$[\mathrm{Diag}(S_n)]^{-\frac{1}{2}} = \mathrm{Diag}(\tilde{F}(\sigma^2_n)) \cdot s^{-\frac{1}{2}}, \qquad (9)$$

$$\tilde{F}(\sigma^2_n) = \delta_2\big(W_2\,\delta_1(\mathrm{LN}(W_1\sigma^2_n))\big), \qquad s = \sigma^2(\tilde{x}), \qquad (10)$$

where $\delta_1$ and $\delta_2$ are the ReLU and sigmoid activation functions respectively, $W_1 \in \mathbb{R}^{\frac{C}{r} \times C}$ and $W_2 \in \mathbb{R}^{C \times \frac{C}{r}}$, and $r$ is a reduction ratio. $s \in \mathbb{R}$ denotes the variance of all elements in $\tilde{x}$, which is a batch statistic during training and is obtained by moving average for inference. $\tilde{F}(\sigma^2_n) \in (0, 1)^C$ acts as a gating mechanism that controls the strength of the instance inverse for each channel. Following best practice in characterizing channel relationships (Hu et al. (2018); Cao et al. (2019)), $\tilde{F}$ is expressed by a bottleneck architecture that is able to model channel dependencies while limiting model complexity. Layer normalization (LN) is used inside the bottleneck transform (before ReLU) to ease optimization. As seen from Eqn.(9), $s^{-\frac{1}{2}}$ represents the inverse square root of the variance, and $\tilde{F}(\sigma^2_n)$ regulates the extent of the variance inverse. $\tilde{F}$ maps the instance variance to a set of channel weights. In this sense, the AII branch intrinsically introduces channel dependencies conditioned on each input.
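The AII branch of Eqns.(8)-(10) can be sketched as a small PyTorch module as follows; the module and argument names are ours, and the per-channel instance variances of Eqn.(8) as well as the global variance $s$ are assumed to be computed outside the module.

```python
import torch
import torch.nn as nn

class AdaptiveInstanceInverse(nn.Module):
    """AII branch (Eqns.(8)-(10)): a bottleneck gate over per-instance variances.

    A sketch under the stated assumptions: `var_inst` holds the per-sample,
    per-channel variances of the normalized features (Eqn.(8)), and
    `var_all` is the scalar variance s of all elements (Eqn.(10)).
    """

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = channels // reduction
        self.fc1 = nn.Linear(channels, hidden)
        self.ln = nn.LayerNorm(hidden)   # applied before ReLU, as in Eqn.(10)
        self.fc2 = nn.Linear(hidden, channels)

    def forward(self, var_inst: torch.Tensor, var_all: torch.Tensor) -> torch.Tensor:
        # F~(sigma_n^2): sigmoid-gated bottleneck, one gate per channel
        gate = torch.sigmoid(self.fc2(torch.relu(self.ln(self.fc1(var_inst)))))
        # [Diag(S_n)]^{-1/2} = Diag(F~(sigma_n^2)) * s^{-1/2}   (Eqn.(9))
        return gate * var_all.clamp_min(1e-5).rsqrt()
```

The returned $(N, C)$ tensor matches the `inv_sqrt_inst` argument assumed in the `ce_forward` sketch above.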
" }, { "heading": "3.3 DISCUSSIONS", "text": "Instantiations. Our CE block can be integrated into various advanced architectures, such as ResNet, VGGNet, ShuffleNet V2 and MobileNet V2, by inserting it into the ‘Norm+ReLU-like’ building block. The CE block is described in Fig.2(b). As discussed earlier, CE processes the incoming features after the normalization layer by combining two branches, i.e. batch decorrelation (BD) and adaptive instance inverse (AII). Compared with the SE block in Fig.2(a), the proposed CE block combines both instance and batch statistics, and it can consequently model dependencies among channels better.

A series of CENets can be constructed by integrating the CE block into various advanced CNN architectures. For example, consider the residual networks (ResNet). The core unit of a ResNet is the residual block, which consists of ‘1×1’, ‘3×3’ and ‘1×1’ convolution layers applied sequentially. Since the CE block is expected to help channel equalization, it benefits from a larger number of channels. Therefore, we employ the CE block in the last ‘1×1’ convolution layer by plugging the CE module in before the ReLU non-linearity, as shown in Fig.2(c). As for CE-MobileNet V2, since the last ‘1×1’ convolution layer in the bottleneck is not followed by a ReLU activation, we insert CE into the ‘3×3’ convolution layer, which also has the largest number of channels in the bottleneck. Following a similar strategy, CE is further integrated into ShuffleNet V2 to construct CE-ShuffleNet V2. We provide extensive experiments evaluating all these CENets in Sec.4 and computational details for training and inference in Sec.F of the Appendix.

Equivalent $\gamma$. Here we show how BD and AII benefit from each other through the parameter $\gamma$. First, we disclose the mechanism behind the BD branch that prevents inhibited channels. Previous work (He et al., 2017b; Yu et al., 2018; Frankle & Carbin, 2018) revealed that $\gamma$ in BN can be used to prune less important channels, implying that the representational power of a feature map largely depends on the magnitude of $\gamma$. Combining Eqn.(4) and Eqn.(5), the output of BD can be expressed as $p^{\mathrm{BD}}_{nij} = \mathrm{Diag}(\Sigma^{-\frac{1}{2}}\gamma)\bar{x}_{nij} + \Sigma^{-\frac{1}{2}}\beta$. Comparing with Eqn.(2), an equivalent $\gamma$ can be defined as $\hat{\gamma} = \Sigma^{-\frac{1}{2}}\gamma$ for the BD branch. Proposition 1 shows that BD explicitly increases the magnitude of $\hat{\gamma}$ in a feed-forward way, encouraging all channels to contribute to the feature learning process. The representational power is thus boosted channel by channel. We provide the proof of Proposition 1 in Sec.C of the Appendix.

Furthermore, the original $\gamma$ in BN is also implicitly enlarged. As can be seen from Eqn.(6), a sufficiently small $\gamma_c$ can cause degradation of the covariance matrix, in which case the convergence of Newton's iteration (Bini et al., 2005) cannot be guaranteed. As a result, once the network converges, $\gamma_c$ is not expected to degrade. This in turn benefits the AII branch: Eqn.(8) shows that the input of AII is proportional to $\gamma$, meaning that the features fed into the AII branch are enlarged as $\gamma$ increases. In this way, the bottleneck architecture in AII can learn more compact global information and model channel dependencies better.

Proposition 1. Let $\Sigma$ be the covariance matrix of the feature maps after batch normalization. Assume that $\Sigma_k = \Sigma^{-\frac{1}{2}}, \forall k = 2, 3, \cdots, T$; then $\|\hat{\gamma}\|_1 > \|\gamma\|_1$. In particular, we have $|\hat{\gamma}_i| > |\gamma_i|$.

Connection with Nash Equilibrium. We show an interesting connection between the proposed CE block and the well-known Nash Equilibrium in game theory (Leshem & Zehavi, 2009). Specifically, we bring novel insights on normalization from an optimization perspective. Suppose each channel obtains its output by maximizing the capacity available to itself under some constraints. In particular, we require that each channel has a maximum budget and that all the outputs are non-negative. Further, if we consider dependencies among channels, the channels can be thought of as playing a non-cooperative game, the Gaussian interference game, which admits a unique Nash Equilibrium solution (Laufer et al., 2006). In Sec.D of the Appendix, we present the detailed construction of the Gaussian interference game in the context of CNNs. It is worth noting that when all the outputs are activated (larger than 0), this Nash Equilibrium solution has an explicit expression. Under some mild approximations, it can be shown that this explicit Nash Equilibrium solution surprisingly matches the representation of CE in Eqn.(5). This shows that decorrelating features after the normalization layer can be connected with Nash Equilibrium, implying that the proposed CE block indeed encourages every channel to contribute to the network’s computation. A detailed explanation of the connection between CE and Nash Equilibrium is given in Sec.D of the Appendix.
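Proposition 1 can also be checked numerically; the following small script (a sanity check under a random $\gamma$ and a sample correlation matrix, not a proof) applies the first Newton step to $\gamma$ and verifies that every entry grows in magnitude.

```python
import torch

torch.manual_seed(0)
c = 16
gamma = torch.randn(c)
x = torch.randn(c, 1024)
x = (x - x.mean(1, keepdim=True)) / x.std(1, keepdim=True)     # standardized features
rho = x @ x.t() / x.shape[1]                                   # correlation matrix
sigma = torch.outer(gamma, gamma) / gamma.norm() ** 2 * rho    # trace-normalized covariance, Eqn.(6)
gamma_hat = 0.5 * (3 * torch.eye(c) - sigma) @ gamma           # first Newton step applied to gamma
assert (gamma_hat.abs() > gamma.abs()).all()
print(gamma_hat.abs().sum().item(), '>', gamma.abs().sum().item())
```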
" }, { "heading": "4 EXPERIMENTS", "text": "We evaluate our method on two basic vision tasks, image classification on ImageNet and object detection/segmentation on COCO, and demonstrate the effectiveness of the CE block." }, { "heading": "4.1 IMAGE CLASSIFICATION ON IMAGENET", "text": "We first evaluate CE on the ImageNet benchmark. The training details are given in Sec.F of the Appendix.

Performance comparison on ResNet. We evaluate on representative residual network structures, including ResNet18, ResNet50 and ResNet101. The CE-ResNets are compared with the baselines (plain ResNets) and SE-ResNets. For fair comparisons, we use publicly available code and re-implement the baseline models and SE modules with their respective best settings in a unified PyTorch framework. To save computation, the CE blocks are selectively inserted into the last normalization layer of each residual block. Specifically, for ResNet18 we plug the CE block into every residual block. For ResNet50, CE is inserted into all residual blocks except those layers with 2048 channels. For ResNet101, the CE blocks are employed in the first seven residual blocks.

As shown in Table 1, our proposed CE outperforms the BN baseline and the SE block by a large margin with little increase in GFLOPs. Concretely, CE-ResNet18, CE-ResNet50 and CE-ResNet101 obtain top-1 accuracy increases of 1.5%, 1.7% and 1.0% over the corresponding plain ResNet architectures. CE-ResNet50 even outperforms the plain ResNet101 (78.0). We plot the training and validation loss during training for ResNet50, SE-ResNet50 and CE-ResNet50 in Sec.E of the Appendix.

We also analyze the complexity of BN, SE, and CE in terms of GFLOPs and GPU and CPU running time. We evaluate the inference time with a mini-batch of 32 (the CPU is an Intel Xeon E5-2682 v4, the GPU an NVIDIA GTX 1080 Ti; the implementation is based on PyTorch). In terms of GFLOPs, the CE networks show a relative increase of only 0.241%–0.242% compared with the plain ResNets. Additionally, the CPU and GPU inference times of the CENets are nearly the same as those of the SENets.

Performance comparison on light-weight networks. We further verify the effectiveness of our proposed CE on two representative light-weight networks, MobileNet V2 and ShuffleNet V2. The comparison results are listed in Table 2. CE blocks bring conspicuous improvements in performance at a minimal increase in computational burden in mobile settings. For MobileNet V2, CE blocks improve the top-1 accuracy of the baseline by 2.1%.

Other normalizers. In addition to BN, CE is also effective for other normalization techniques, since inhibited channels emerge under many well-known normalizers, as shown in Fig.1. To verify this, we conduct experiments using ResNet50 under different normalizers, including batch normalization (BN), group normalization (GN), instance normalization (IN), and layer normalization (LN). In these experiments, we stack the CE block after the above normalizers to see whether CE helps other normalization methods. As shown in Table 3, our CE generalizes well across different normalization techniques, improving performance by 0.6–1.8 top-1 accuracy.
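For concreteness, a ‘BN-CE-ReLU’ unit as used in the CE networks above could be sketched as follows; `ChannelEquilibrium` is a hypothetical stand-in (here an identity stub) for the BD+AII block of Section 3, not a module from an existing library.

```python
import torch.nn as nn

class ChannelEquilibrium(nn.Module):
    """Hypothetical stand-in for the CE block (BD + AII) of Section 3."""
    def __init__(self, channels: int):
        super().__init__()
        self.channels = channels
    def forward(self, x):
        # the real block would apply lambda*BD + (1-lambda)*AII here
        return x

class BNCEReLU(nn.Module):
    """A 'BN-CE-ReLU' unit, replacing the ordinary 'BN-ReLU' block (a sketch)."""
    def __init__(self, channels: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels)
        self.ce = ChannelEquilibrium(channels)
        self.act = nn.ReLU(inplace=True)
    def forward(self, x):
        return self.act(self.ce(self.bn(x)))
```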
}, { "heading": "4.2 ANALYSIS OF CE", "text": "In this section, we first demonstrate that CE is able to equalize the importance of all channels and then analyze the effects of BD and AII branches separately on CIFAR10 and ImageNet datasets. More discussions about CE are provided in Sec.E of Appendix.\nCE is effective in channel equalization. In Fig.1, we have demonstrated that CE is able to alleviate inhibited channels, which is a necessary condition of channel equalization. Here, we further show that the ability that CE can prevent inhibited channels is robust to a wide range of strength of weight decay. As shown in Fig.3(c,d), CE prevents inhibited channels and retains higher performance under different strengths of weight decay.\nNext, we verify whether CE can help channel equalization by an ablation approach used in Morcos et al. (2018). Typically, the importance of a single channel to the network’s computation can be measured by the relative performance drop once that channel is removed (clamping activity a feature map to zero). In this regard, the more reliant a network is on a small set of channels, the more quickly the accuracy will drop if those channels are ablated. On the contrary, if the importance of channels to the network’s computation are more equal, the accuracy will drop more gently. With this powerful technique, we see how ResNet50 and MobileNet V2 with CE blocks respond to cumulative random ablation of channels. We plot the ablation ratio versus the top-1 accuracy in Fig.3(a,b). As we can see, our CE block is able to resist the cumulative random ablation of channels on both ResNet50 and MobileNet V2, showing that CE can effectively equalize the importance of channels. For example, the top-1 accuracy of our CE-ResNet50 is 1.7 higher\nthan the original ResNet50 if no channels are ablated, but when 70% channels are ablated, CEResNet50 still obtain 23.0 top-1 accuracy, while the original ResNet50 gets only 4.6 top-1 accuracy.\nBD is able to mitigate inhibited channels. As proved in Proposition 1, equivalent γ in the BD branch is explicitly enlarged, leading to the expansion of representational power of all channels. Here we investigate this property experimentally with a single BD branch. The inhibited channels ratio are measured by the percentage of feature maps whose values are less than 1e-2. Fig.3(d) shows inhibited channels ratio under a wide range of weight decay for BN, BD and CE. It is observed that the top-1 accuracy of VGGNet with BN drops significantly as the weight decay increases, but BD can reduce accuracy drop. For example, when the weight decay is 1e-3, the top-1 accuracy of BD is only 0.1 higher than BN, but when the weight decay reaches to 1e-2, BD is 4.95 higher. Moreover, the inhibited channels ratio of CE is even lower than BD while the top-1 accuracy is higher, which can demonstrate that CE can strengthen the effect of alleviation of inhibited channels compared with single BD.\nAII helps CE learn preciser feature representation. First, as discussed in Sec.3.3, AII benefits from BD such that the features fed into AII branch are more informative. To see this, we train ResNet50 with a single AII or CE branch, termed AII-ResNet50 or CE-ResNet50. We do principal component analysis (PCA) on the inputs of AII branch in AII-ResNet50 and the counterpart in CEResNet50 and plot the box chart of principal components. 
BD is able to mitigate inhibited channels. As proved in Proposition 1, the equivalent $\gamma$ in the BD branch is explicitly enlarged, expanding the representational power of all channels. Here we investigate this property experimentally with a single BD branch. The inhibited-channel ratio is measured as the percentage of feature maps whose values are all less than 1e-2. Fig.3(d) shows the inhibited-channel ratio under a wide range of weight decays for BN, BD and CE. The top-1 accuracy of VGGNet with BN drops significantly as the weight decay increases, while BD reduces this accuracy drop. For example, when the weight decay is 1e-3, the top-1 accuracy of BD is only 0.1 higher than BN, but when the weight decay reaches 1e-2, BD is 4.95 higher. Moreover, the inhibited-channel ratio of CE is even lower than that of BD while its top-1 accuracy is higher, demonstrating that CE strengthens the alleviation of inhibited channels compared with BD alone.

AII helps CE learn more precise feature representations. First, as discussed in Sec.3.3, AII benefits from BD in that the features fed into the AII branch become more informative. To see this, we train ResNet50 with a single AII branch or the full CE block, termed AII-ResNet50 and CE-ResNet50 respectively. We perform principal component analysis (PCA) on the inputs of the AII branch in AII-ResNet50 and on its counterpart in CE-ResNet50 and plot the box chart of the principal components. As shown in Fig.4, the input of the AII branch in CE-ResNet50 has much lower means and variances, meaning that the input features have a more valid basis and are thus more informative. From the Grad-CAM visualization provided in Sec.E of the Appendix, we find that AII helps CE learn more precise feature representations.

BD and AII are complementary. Here, we verify that BD and AII are complementary to each other. We train the plain ResNet50, BD-ResNet50, AII-ResNet50, and CE-ResNet50 for comparison. The top-1 accuracy is reported in Table 4. BD-ResNet50 and AII-ResNet50 are 0.4 and 0.7 higher than the plain ResNet50, respectively. When they are combined, however, the top-1 accuracy improves by 1.7, more than the sum of the individual gains (1.1), which demonstrates that they benefit from each other." }, { "heading": "4.3 OBJECT DETECTION AND INSTANCE SEGMENTATION ON COCO", "text": "We assess the generalization of our CE block on the detection/segmentation track using the COCO2017 dataset (Lin et al. (2014)). We train our models on the union of 80k training images and 35k validation images and report performance on the mini-val set of 5k images. Mask R-CNN is used as the base detection/segmentation framework. The standard COCO metrics of Average Precision (AP) for bounding-box detection (APbb) and instance segmentation (APm) are used to evaluate our methods. In addition, we adopt two common training settings for our models: (1) freezing the vanilla batch normalization and channel equilibrium layers and (2) updating parameters with the synchronized version. For the vanilla BN and CE layers, all the gamma and beta parameters and the tracked running statistics are frozen. In contrast, for the synchronized version, the running mean and variance for batch normalization and the covariance for the CE layers are computed across multiple GPUs. The gamma and beta parameters are updated during training, while $\tilde{F}$ and $\lambda$ are frozen to prevent over-fitting. We use the MMDetection training framework with ResNet50/ResNet101 as the basic backbones, and all the hyper-parameters are the same as in Chen et al. (2019). Table 5 shows the detection and segmentation results. Compared with vanilla BN, our CE block consistently improves performance. For example, our fine-tuned CE-ResNet50 is 2.2 AP higher in detection and 2.7 AP higher in segmentation. For the sync-BD version, CE-ResNet50 achieves 42.0 AP in detection and 37.5 AP in segmentation, which, to the best of our knowledge, is the best performance for a ResNet50 backbone. In summary, these experiments demonstrate the generalization ability of CE blocks to other tasks." }, { "heading": "5 CONCLUSION", "text": "In this paper, we presented a novel network block, termed Channel Equilibrium (CE). The CE block conditionally decorrelates the feature maps after the normalization layer by switching between a batch decorrelation branch and an adaptive instance inverse branch. We show that CE explicitly alleviates inhibited channels and helps channel equalization, enhancing the representational power of a neural network in a feed-forward way. Specifically, CE can be stacked between the normalization layer and the ReLU function, making it flexible enough to be integrated into many advanced CNN architectures. The superiority of CE blocks has been demonstrated on the tasks of image classification and instance segmentation. We hope that the analysis of channel equalization in CE can bring a new perspective to future work on architecture design."
}, { "heading": "A MORE RELATED WORK", "text": "Sparsity in ReLU. An attractive property of ReLU (Sun et al., 2015; Nair & Hinton, 2010) is sparsity, which brings potential advantages such as information disentangling and linear separability. However, Lu et al. (2019) and Mehta et al. (2019) pointed out that some ReLU neurons may become inactive and output 0 values for any input. Previous work tackled this issue by designing new activation functions, such as ELU (Clevert et al., 2015) and Leaky ReLU (Maas et al., 2013). Recently, Lu et al. (2019) also tried to solve this problem by modifying initialization scheme. Different from these work, we focus on explicitly preventing inhibited channel in a feed-forward way by the proposed CE blocks.\nNormalization and decorrelation. There are many practices on normalizer development, such as Batch Normalization (BN) (Ioffe & Szegedy, 2015), Group normalization (GN) (Wu & He, 2018) and Switchable Normalization (Luo et al., 2018). A normalization scheme is typically applied after a convolution layer and contains two stages: standardization and rescaling. Another type of normalization methods not only standardizes but also decorrelates features, like DBN (Huang et al., 2018), IterNorm (Huang et al., 2019) and switchable whitening (Pan et al., 2019). Despite their success in performance improvement, little is explored about relation between those methods and inhibited channels. Fig.1 shows that inhibited channels emerges in VGGNet where ‘BN+ReLU’ or ‘LN+ReLU’ are used. Unlike previous decorrelated normalizations where decorrelation operation is applied after a convolution layer, our CE explicitly decorrelates features after normalization." }, { "heading": "B COMPUTATION DETAILS IN ’BN-CE-RELU’ BLOCK", "text": "As discussed before, CE processes incoming features after normalization layer by combining two branches, i.e. batch decorrelation and adaptive instance inverse. The former computes a covariance matrix and the latter calculates instance variance. We now take ’BN-CE-ReLU’ block as an example to show the computation details of statistics in it. Given a tensor x ∈ RN×C×H×W , the mean and variance in IN (Ulyanov et al., 2016) are calculated as:\nµncIN = 1\nHW H,W∑ i,j xncij , (σ 2 IN) nc = 1 HW H,W∑ i,j (xncij − µncIN)2 (11)\nHence, we have µIN, σ2IN ∈ RN×C . Then, the statistics in BN can be reformulated as follows:\nµcBN = 1\nNHW N,H,W∑ n,i,j xncij = 1 N N∑ i 1 HW H,W∑ i,j xncij\n(σ2BN) c =\n1\nNHW N,H,W∑ n,i,j (xncij − µcBN)2\n= 1\nN N∑ n 1 HW H,W∑ i,j (xncij − µncIN + µncIN − µcBN)2\n= 1\nN N∑ n ( 1 HW H,W∑ i,j (xncij − µncIN)2 + (µncIN − µcBN)2)\n= 1\nN N∑ n (σ2IN) nc + 1 N N∑ n (µncIN − µcBN)2\n(12)\nThen, we have µBN = E[µIN] and σ2BN = E[σ2IN] + D[µIN], where E[·] and D[·] denote expectation and variance operators over N samples. Further, the input of AII is instance variance of features\nafter BN, which can be calculated as follows:\nσ2nc = 1\nHW H,W∑ i,j [ (γc xncij − µcBN σcBN + βc)− (γc µncIN − µcBN σcBN + βc) ]2\n= γ2c\n(σ2BN) c\n1\nHW H,W∑ i,j (xncij − µncIN)2\n= γ2c (σ 2 IN) nc\n(σ2BN) c\n(13)\nAt last, the output of BN is x̃ncij = γcx̄ncij + βc, then the entry in c-th row and d-th column of covariance matrix Σ of x̃ is calculated as follows:\nΣcd = 1\nNHW N,H,W∑ n,i,j (γcx̄ncij)(γdx̄ndij) = γcγdρcd (14)\nwhere ρcd is the element in c-th row and j-th column of correlation matrix of x̄. Thus, we can write Σ into vector form: Σ = γγT 1M x̄x̄ T if we reshape x̃ to x̃ ∈ RC×M and M = N ·H ·W ." 
}, { "heading": "C PROOF OF PROPOSITION 1", "text": "Proposition 1. Let Σ be covariance matrix of feature maps after batch normalization. Assume that Σk = Σ − 12 , ∀k = 2, 3, · · · , T , then ‖γ̂‖1 > ‖γ‖1. Especially, we have |γ̂i| > |γi|\nProof. Since Σk = Σ− 1 2 , ∀k = 2, 3, · · · , T , we have Σkγ = 12Σk−1(3I − Σ 2 k−1Σ)γ = Σk−1γ. Therefore, we only need to show ‖γ̂‖1 = ‖ΣT γ‖1 = · · · = ‖Σ2γ‖1 > ‖γ‖1. Now, we show that for k = 2 we have ∥∥ 1 2 (3I − Σ)γ ∥∥ 1 > ‖γ‖1. From Eqn.(6), we know that Σ = γγT ‖γ‖22 ρ where ρ is the correlation matrix of x̃ and −1 ≤ ρij ≤ 1, ∀i, j ∈ [C]. Then, we have\n1 2 (3I − Σ)γ = 1 2 (3I − γγ\nT\n‖γ‖22 ρ)γ\n= 1 2 (3γ − ( γγ\nT\n‖γ‖22 ρ)γ)\n= 1 2 (3γ − 1\n‖γ‖22 C∑ j γ1γjρ1jγj , C∑ j γ2γjρ2jγj , · · · , C∑ j γCγjρCjγj T) = 1\n2 (3γ − 1\n‖γ‖22 C∑ j γ1γjρ1jγj , C∑ j γ2γjρ2jγj , · · · , C∑ j γCγjρCjγj T) = 1\n2 (3− C∑ j γ2j ρ1j ‖γ‖22 )γ1, (3− C∑ j γ2j ρ2j ‖γ‖22 )γ2, · · · , (3− C∑ j γ2j ρCj ‖γ‖22 )γC T\n(15)\nNote that |3− ∑C j γ2j ρij\n‖γ‖22 | ≥ 3− |\n∑C j γ2j ρij\n‖γ‖22 | ≥ 3−\n∑C j γ2j ‖γ‖22\n= 2, where the last equality holds iff ρij = 1, ∀i, j ∈ [C]. However, this is not the case in practice. Hence we have∣∣∣∣[12(3I − Σ)γ ] i ∣∣∣∣ = ∣∣∣∣∣∣12(3− C∑ j γ2j ρij ‖γ‖22 )γi\n∣∣∣∣∣∣ > |γi| (16) Therefore, we have ‖γ̂‖1 > ‖γ‖1. Here completes the proof." }, { "heading": "D CONNECTION BETWEEN CE BLOCK AND NASH EQUILIBRIUM", "text": "We first introduce the definition of Gaussian interference game in context of CNN and then build the connection between a CE block and Nash Equilibrium. For clarity of notation, we omit the subscript n for a concrete sample.\nLet the channels 1, 2, · · · , C operate over H ×W pixels. Assume that the C channels have dependencies G = {gcd(i, j)}C,Cc,d=1. Each pixel is characterized by a power gain hcij ≥ 0 and channel noise strength σc > 0. In context of normalization, we suppose hcij = x̄cij + δ where x̄cij is standardized pixel in Eqn.(2) and δ is sufficiently large to guarantee a non-negative power gain. Assume that c-th channel is allowed to transmit a total power of Pc and we have ∑H,W i,j=1 pcij = Pc. Besides, each channel can transmit a power vector pc = (pc11, · · · , pcHW ). Since normalization layer is often followed by a ReLU activation, we restrict pcij ≥ 0. What we want to maximize the capacity transmitted over the c-th channel, ∀c ∈ [C], then the maximization problem is given by:\nmax Cc(p1, p2, · · · , pC) = h,W∑ i,j=1 ln\n( 1 +\ngccpcij∑ d 6=c gcdpdij + σc/hcij\n)\ns.t.\n{ ∑H,W i,j=1 pcij = Pc,\npcij ≥ 0, ∀i ∈ [H], j ∈ [W ]\n(17)\nwhere Cc is the capacity available to the c-th channel given power distributions p1, p2, · · · , pC . In game theory, C channels and solution space of {pcij}C,H,Wc,i,j=1 together with pay-off vector C = (C1, C2, · · · , CC) form a Gaussian interference game G. Different from basic settings in G, here we do not restrict dependencies gcd to (0, 1). It is known that G has a unique Nash Equilibrium point whose definition is given as below,\nDefinition 1. An C-tuple of strategies (p1, p2, · · · , pC) for channels 1, 2, · · · , C respectively is called a Nash equilibrium iff for all c and for all p (p a strategy for channel c)\nCc(p1, · · · , pc−1, p, pc+1, · · · , pC) ≤ Cc(p1, p2, · · · , pC) (18)\ni.e., given that all other channels d 6= c use strategies pd, channel c best response is pc. Since C1, C2, · · · , CC are concave in p1, p2, · · · , pC respectively, KKT conditions imply the following theorem.\nTheorem 1. 
Theorem 1. Given the pay-off in Eqn.(17), $(p^*_1, \cdots, p^*_C)$ is a Nash Equilibrium point if and only if there exist $v_0 = (v^1_0, \cdots, v^C_0)$ (Lagrange multipliers) such that for all $i \in [H]$ and $j \in [W]$,

$$\frac{g_{cc}}{\sum_d g_{cd}\,p^*_{dij} + \sigma_c/h_{cij}} \;\begin{cases} = v^c_0 & \text{for } p^*_{cij} > 0 \\ \leq v^c_0 & \text{for } p^*_{cij} = 0 \end{cases} \qquad (19)$$

Proof. The Lagrangian corresponding to the minimization of $-C_c$ subject to the equality constraint and the non-negativity constraints on $p_{cij}$ is given by

$$L_c = -\sum_{i,j=1}^{H,W} \ln\Big(1 + \frac{g_{cc}\,p_{cij}}{\sum_{d \neq c} g_{cd}\,p_{dij} + \sigma_c/h_{cij}}\Big) + v^c_0\Big(\sum_{i,j=1}^{H,W} p_{cij} - P_c\Big) + \sum_{i,j=1}^{H,W} v^{cij}_1(-p_{cij}). \qquad (20)$$

Differentiating the Lagrangian with respect to $p_{cij}$ and setting the derivative to zero, we obtain

$$\frac{g_{cc}}{\sum_d g_{cd}\,p_{cij} + \sigma_c/h_{cij}} + v^{cij}_1 = v^c_0 \qquad (21)$$

Using the complementary slackness condition $v^{cij}_1 p_{cij} = 0$ with $v^{cij}_1 \geq 0$, we obtain condition (19). This completes the proof.

By Theorem 1, when $p^*_{cij} > 0$ the unique Nash Equilibrium point can be written explicitly as

$$p^*_{ij} = G^{-1}\big(\mathrm{Diag}(v_0)^{-1}\mathrm{diag}(G) - \mathrm{Diag}(h_{ij})^{-1}\sigma\big) \qquad (22)$$

where $p^*_{ij}, h_{ij}, \sigma \in \mathbb{R}^C$, and $v_0 \in \mathbb{R}^C$ collects the Lagrange multipliers of the equality constraints. Note that an approximation can be made using a Taylor expansion: $-\frac{\sigma_c}{h_{cij}} = \sigma_c\big(2 + h_{cij} + O((1 + h_{cij})^2)\big)$. Thus, a linear proxy of Eqn.(22) can be written as

$$p^*_{ij} = G^{-1}\big(\mathrm{Diag}(\sigma)\bar{x}_{ij} + \mathrm{Diag}(v_0)^{-1}\mathrm{diag}(G) + (2 + \delta)\sigma\big) \qquad (23)$$

Let $G = [D_n]^{\frac{1}{2}}$, $\gamma = \sigma$ and $\beta = \mathrm{Diag}(v_0)^{-1}\mathrm{diag}(G) + (2 + \delta)\sigma$; then Eqn.(23) surprisingly matches the CE unit in Eqn.(5), implying that the proposed CE block indeed performs a mechanism of channel equalization. In the Gaussian interference game, $\sigma$ is known and $v_0$ can be determined once the budgets $P_c$ are given. In deep neural networks, however, $\gamma$ and $\beta$ are learned by SGD." }, { "heading": "E EXPERIMENTS", "text": "E.1 TRAINING AND VALIDATION CURVES

We plot the training and validation loss during training for ResNet50, SE-ResNet50 and CE-ResNet50 in Fig.5. We observe that CE-ResNet50 consistently has lower training and validation errors over the whole training period, indicating that CE improves both learning capacity and generalization ability.

E.2 MORE DISCUSSION ABOUT CE

As discussed in the related work in Sec.A, many methods have been proposed to improve normalizers and the ReLU activation. The ablation approach of Morcos et al. (2018) is used to see whether and how these methods help channel equalization. We demonstrate the effectiveness of CE by answering the following questions.

Do other ReLU-like activation functions help channel equalization? Two representative improvements over the ReLU function, i.e. ELU (Clevert et al., 2015) and LReLU (Maas et al., 2013), are employed to see whether other ReLU-like activation functions help channel equalization. We plot the cumulative ablation curve of ablation ratio versus top-1 accuracy on the CIFAR10 dataset in Fig.6(a). The baseline curve is ‘BN+ReLU’. The top-1 accuracy curve of ‘BN+LReLU’ drops more gently, implying that LReLU helps channel equalization, whereas ‘BN+ELU’ has a worse cumulative ablation curve than ‘BN+ReLU’. By contrast, the proposed CE block improves the recognition performance of ‘BN+ReLU’ (higher top-1 accuracy) and promotes channel equalization the most (the most gentle cumulative ablation curve).

Do adaptive normalizers help channel equalization? We experiment with a representative adaptive normalization method, SN, to see whether it helps channel equalization. SN learns to select an appropriate normalizer from IN, BN and LN for each channel.
The cumulative ablation curves are plotted on the ImageNet dataset with ResNet50 under the blocks ‘BN+ReLU’, ‘SN+ReLU’ and ‘BN+CE+ReLU’. As shown in Fig.6(b), SN even harms channel equalization when it is used to replace BN. By contrast, ‘BN+CE+ReLU’ shows the most gentle cumulative ablation curve, indicating the effectiveness of the CE block for channel equalization. Compared with SN, ResNet50 with a CE block also achieves better top-1 accuracy (78.3 vs 76.9), showing that channel equalization is important for block design in a CNN.

Integration strategy of the CE block. We put CE at different positions in a bottleneck of ResNet50, which consists of three ‘Conv-BN-ReLU’ basic blocks; the third block has 4 times as many channels as the second one. We compare the performance of CE-ResNet50 when putting CE in the second block (CE2-ResNet50) or the third block (CE3-ResNet50). As shown in Table 6, the top-1 accuracy of CE3-ResNet50 outperforms CE2-ResNet50 by 0.4, which indicates that our CE block benefits from a larger number of channels.

E.3 GRAD-CAM VISUALIZATION

We claim that AII learns an adaptive inverse of the variance for each channel in a self-attention manner. Fed with more informative input, AII is expected to make the network respond to different inputs in a highly class-specific manner. In this way, it helps CE learn more precise feature representations. To verify this, we employ an off-the-shelf tool to visualize the class activation maps (CAM) (Selvaraju et al., 2017). We use ResNet50, BD-ResNet50, and CE-ResNet50 trained on ImageNet for comparison. As shown in Fig.7, the heat maps extracted from CAM for CE-ResNet50 cover more of the object region and less of the background region. This shows that the AII branch helps CE learn more precise information from the images." }, { "heading": "F TRAINING AND INFERENCE", "text": "Moving average in inference. Unlike previous manually designed architecture components that do not depend on batch-estimated statistics, the proposed CE block requires computing the inverse square root of a batch covariance matrix $\Sigma$ and a global variance scale $s$ in each training step. To make the output depend only on the input, deterministically, at inference time, we use a moving average to calculate the population estimates $\hat{\Sigma}^{-\frac{1}{2}}$ and $\hat{s}^{-\frac{1}{2}}$ with the following update rules:

$$\hat{\Sigma}^{-\frac{1}{2}} = (1-m)\hat{\Sigma}^{-\frac{1}{2}} + m\,\Sigma^{-\frac{1}{2}}, \qquad \hat{s}^{-\frac{1}{2}} = (1-m)\hat{s}^{-\frac{1}{2}} + m \cdot s^{-\frac{1}{2}} \qquad (24)$$

where $s$ and $\Sigma$ are the variance scale and the covariance calculated within each mini-batch during training, and $m$ denotes the momentum of the moving average. It is worth noting that since $\hat{\Sigma}^{-\frac{1}{2}}$ is fixed during inference, the BD branch introduces no extra memory or computation cost beyond a simple linear transformation ($\hat{\Sigma}^{-\frac{1}{2}}\tilde{x}$).
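The update rule of Eqn.(24) amounts to two in-place buffer updates; a minimal sketch (with an assumed default momentum value) is given below.

```python
import torch

def update_running_stats(running_inv_sqrt_cov, running_inv_sqrt_s,
                         inv_sqrt_cov, inv_sqrt_s, momentum=0.1):
    """Moving-average update of Eqn.(24); the momentum value is an assumption.

    The running buffers are updated in place during training and reused,
    frozen, at inference time, so the BD branch reduces to a fixed linear map.
    """
    running_inv_sqrt_cov.mul_(1.0 - momentum).add_(inv_sqrt_cov, alpha=momentum)
    running_inv_sqrt_s.mul_(1.0 - momentum).add_(inv_sqrt_s, alpha=momentum)
```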
Model and computational complexity. The main computation of CE involves calculating the covariance and its inverse square root in the BD branch and computing the two FC layers in the AII branch. There is considerable room to reduce the computational cost of CE. For the BD branch, given an internal feature $x \in \mathbb{R}^{N \times C \times H \times W}$, the cost of calculating a covariance matrix is $2NHWC^2$, which is comparable to the cost of the convolution operation. A pooling operation can be employed to down-sample the feature map when $H$ and $W$ are too large; in this way, the complexity is reduced to $2NHWC^2/k^2 + CHW$, where $k$ is the kernel size of the pooling window. Further, we can use group-wise whitening to improve efficiency, reducing the cost of computing $\Sigma^{-\frac{1}{2}}$ from $TC^3$ to $TCg^2$ ($g$ is the group size). For the AII branch, we focus on the additional parameters introduced by the two FC layers. In fact, the reduction ratio $r$ can be chosen appropriately to balance model complexity and representational power. Besides, the majority of these parameters come from the final blocks of the network; for example, a single AII in the final block of ResNet50 has $2 \cdot 2048^2/r$ parameters. In practice, the CE blocks in the final stages of the networks are removed to reduce the additional parameters. We provide measurements of computational burden and FLOPs in Table 1.

ResNet training setting. All networks are trained using 8 GPUs with a mini-batch of 32 per GPU. We train all architectures from scratch for 100 epochs using stochastic gradient descent (SGD) with momentum 0.9 and weight decay 1e-4. The base learning rate is set to 0.1 and is multiplied by 0.1 after 30, 60 and 90 epochs. The covariance matrix in the BD branch is calculated within each GPU. Since the computation of the covariance matrix is heavy when the feature map is large, a 2×2 maximum pooling is adopted to down-sample the feature map after the first batch normalization layer. As in (Huang et al., 2019), we also use group-wise decorrelation with group size 16 across the network to improve the efficiency of the BD branch. By default, the reduction ratio $r$ in the AII branch is set to 4.

MobileNet V2 training setting. All networks are trained using 8 GPUs with a mini-batch of 32 per GPU for 150 epochs with a cosine learning rate. The base learning rate is set to 0.05 and the weight decay is 4e-5.

ShuffleNet V2 training setting. All networks are trained using 8 GPUs with a mini-batch of 128 per GPU for 240 epochs with a poly learning rate. The base learning rate is set to 0.5 and the weight decay is 4e-5. We also adopt warm-up and label-smoothing tricks.

VGG networks on CIFAR10 training setting. For CIFAR10, we train VGG networks with a batch size of 256 on a single GPU for 160 epochs. The initial learning rate is 0.1 and is decreased by a factor of 10 every 60 epochs. The inhibited-channel ratios in Fig.1 and Fig.3(c) are measured as the average ratio over the first six layers, since the bottom layers extract rich feature representations and sparsity is not desired there. For the inference-drop experiments in Fig.1(c), we randomly drop channels in the third layer with different dropout ratios. For each ratio, we run the experiment 5 times and average the top-1 accuracy.

Mask R-CNN training setting on COCO. We fine-tune the ImageNet pre-trained models on COCO for 24 epochs with a base learning rate of 0.02, multiplied by 0.1 after 16 and 22 epochs. All models are trained using 8 GPUs with a mini-batch of 2 images. The basic backbone structure is adopted from the ResNet50/ResNet101 trained on ImageNet." } ]
2019
null
SP:71b7633050462a3cfc3d64e81ae3f6cec758f068
[ "In this paper the authors adopt prior work in image inpainting to the problem of 2d fluid velocity field inpainting by extending the network architecture and using additional loss functions. Specifically, the U-net network is extended with a DenseBlock in the middle, and a separate branch of the network is added which predicts the stream function (a different representation of the velocity field which guarantees incompressibility). The additional losses are L1 for various derivatives of the flow field (Jacobian, divergence, vorticity). Experiments presented in the paper show that these new elements improve the flow field error compared to a baseline model originally developed for image inpainting. The suggested application for this model is filling gaps in experimental measurements that are missing or impossible to obtain, and where such a model could be computationally cheaper than an actual fluid solver.", "I am not an expert in recent Navier-Stokes approaches, but note that there is a lot of recent work in physics aware modeling. Specifically the sections on e.g. loss seem to have a lot of prior work. It’s difficult for me to judge the exact amount of novelty in this paper with respect to the physics. With respects to the DL part it looks like it’s mainly minor modifications to the known U-net architecture. " ]
In this paper we propose a physics-aware neural network for inpainting fluid flow data. We consider that flow field data inherently follows the solution of the Navier-Stokes equations, and hence our network is designed to capture physical laws. We use a DenseBlock U-Net architecture combined with a stream function formulation to inpaint missing velocity data. Our loss functions represent the relevant physical quantities: velocity, velocity Jacobian, vorticity and divergence. Obstacles are treated as known priors, and each layer of the network receives the relevant information through concatenation with the previous layer's output. Our results demonstrate the network's capability for physics-aware completion tasks, and the presented ablation studies show the effectiveness of each proposed component.
[]
[ { "authors": [ "Steven L. Brunton", "Bernd R. Noack", "Petros Koumoutsakos" ], "title": "Machine learning for fluid mechanics", "venue": "Annual Review of Fluid Mechanics,", "year": 2020 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Yuxiao Guo", "Xin Tong" ], "title": "View-volume network for semantic scene completion from a single depth", "venue": "image. pp. 726–732,", "year": 2018 }, { "authors": [ "Xiaoguang Han", "Zhaoxuan Zhang", "Dong Du", "Mingdai Yang", "Jingming Yu", "Pan Pan", "Xin Yang", "Ligang Liu", "Zixiang Xiong", "Shuguang Cui" ], "title": "Deep reinforcement learning of volume-guided progressive view inpainting for 3d point scene completion from a single depth", "venue": null, "year": 2019 }, { "authors": [ "J.E. Higham", "W. Brevis", "C.J. Keylock" ], "title": "A rapid non-iterative proper orthogonal decomposition based outlier detection and correction for PIV data", "venue": "Measurement Science and Technology,", "year": 2016 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q. Weinberger" ], "title": "Densely connected convolutional networks", "venue": "Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Satoshi Iizuka", "Edgar Simo-Serra", "Hiroshi Ishikawa" ], "title": "Globally and locally consistent image completion", "venue": "ACM Transactions on Graphics,", "year": 2017 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A. Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Byungsoo Kim", "Vinicius Azevedo", "Markus Gross", "Barbara Solenthaler" ], "title": "Transport-based neural style transfer for smoke simulations", "venue": "ACM Trans. Graph.,", "year": 2019 }, { "authors": [ "Byungsoo Kim", "Vinicius C. Azevedo", "Nils Thuerey", "Theodore Kim", "Markus Gross", "Barbara Solenthaler" ], "title": "Deep Fluids: A Generative Network for Parameterized Fluid Simulations", "venue": "Computer Graphics Forum,", "year": 2019 }, { "authors": [ "Guilin Liu", "Fitsum A. Reda", "Kevin J. Shih", "Ting Chun Wang", "Andrew Tao", "Bryan Catanzaro" ], "title": "Image Inpainting for Irregular Holes Using Partial Convolutions", "venue": "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics),", "year": 2018 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Jeff Donahue", "Trevor Darrell", "Alexei A. Efros" ], "title": "Context Encoders: Feature Learning by Inpainting", "venue": "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Maziar Raissi", "Alireza Yazdani", "George E. Karniadakis" ], "title": "Hidden fluid mechanics: A navierstokes informed deep learning framework for assimilating flow visualization data", "venue": null, "year": 2018 }, { "authors": [ "Prajit Ramachandran", "Barret Zoph", "Quoc V. 
Le" ], "title": "Searching for Activation Functions", "venue": null, "year": 2017 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture", "venue": "Notes in Bioinformatics),", "year": 2015 }, { "authors": [ "Pankaj Saini", "Christoph Arndt", "Adam Steinberg" ], "title": "Development and evaluation of gappy-pod as a data reconstruction technique for noisy piv measurements in gas turbine combustors", "venue": "Experiments in Fluids,", "year": 2016 }, { "authors": [ "Andrea Sciacchitano", "Fulvio Scarano", "Bernhard Wieneke" ], "title": "Multi-frame pyramid correlation for time-resolved PIV", "venue": "Experiments in Fluids,", "year": 2012 }, { "authors": [ "Shuran Song", "Fisher Yu", "Andy Zeng", "Angel Chang", "Manolis Savva", "Thomas Funkhouser" ], "title": "Semantic scene completion from a single depth", "venue": "image. pp. 190–198,", "year": 2017 }, { "authors": [ "Nobuyuki Umetani", "Bernd Bickel" ], "title": "Learning three-dimensional flow for interactive aerodynamic design", "venue": "ACM Trans. Graph.,", "year": 2018 }, { "authors": [ "Daniele Venturi", "George Em Karniadakis" ], "title": "Gappy data and reconstruction procedures for flow past a cylinder", "venue": "Journal of Fluid Mechanics,", "year": 2004 }, { "authors": [ "Steffen Wiewel", "Moritz Becher", "Nils" ], "title": "Thuerey. Latent-space Physics: Towards Learning the Temporal Evolution of Fluid Flow", "venue": "Computer Graphics Forum,", "year": 2019 }, { "authors": [ "Kai Xu", "Yifei Shi", "Lintao Zheng", "Junyu Zhang", "Min Liu", "Hui Huang", "Hao Su", "Daniel Cohen-Or", "Baoquan Chen" ], "title": "3d attention-driven depth acquisition for object identification", "venue": "ACM Trans. Graph.,", "year": 2016 }, { "authors": [ "Jiahui Yu", "Zhe Lin", "Jimei Yang", "Xiaohui Shen", "Xin Lu", "Thomas S. Huang" ], "title": "Generative Image Inpainting with Contextual Attention", "venue": "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Jiahui Yu", "Zhe Lin", "Jimei Yang", "Xiaohui Shen", "Xin Lu", "Thomas S. Huang" ], "title": "Free-Form Image Inpainting with Gated Convolution", "venue": "2018b. URL http://arxiv.org/abs/1806.03589", "year": 2018 }, { "authors": [ "Yanhong Zeng", "Jianlong Fu", "Hongyang Chao", "Baining Guo" ], "title": "Learning Pyramid-Context Encoder Network for High-Quality Image Inpainting", "venue": "pp. 1486–1494,", "year": 2019 }, { "authors": [ "Yinda Zhang", "Thomas A. Funkhouser" ], "title": "Deep depth completion of a single RGB-D image", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Realistically modeling and predicting fluid phenomena is important to a large number of applications, which may range from optimizing objects’ aerodynamic properties to creating special effects in movies. Fluids are commonly modelled by numerically solving the Navier-Stokes equations, however computer generated solutions might be discrepant from real phenomena. This happens possibly due to mismatches of the mathematical model, incorrect numerical discretization, poor discrete resolution, or errors on the estimation of parameters. Thus, approaches such as Particle Image Velocimetry or Doppler flow measurements directly measure fluid quantities in real-world settings. Often though, these measurements cannot be performed densely due to missing sensors, under-sampled domains or occluded and unreachable areas.\nThus, methods for prediction and augmentation of measured flow data are actively researched. Previous approaches are either based on a low-dimensional analysis of the flow field based on Principal Component Analysis (PCA), e.g., (Saini et al., 2016), or are based on physically reconstructing missing areas by solving the unsteady incompressible Navier-Stokes equations, e.g., (Sciacchitano et al., 2012). A main challenge with these traditional techniques is to predict data in large occluded or empty areas, e.g., where more than 50% of the data has to be predicted. In such challenging scenarios, already approximate predictions are useful as they could, for example, optimize strategies for sensor placement, guide procedures for human-based scanning, or improve workflows for digital prototyping.\nThis goal of estimating missing flow field data has many similarities with image inpainting, as it is essentially a scene completion process using partial observations. The recent success of data-driven image inpainting algorithms (Pathak et al., 2016; Iizuka et al., 2017; Liu et al., 2018; Yu et al., 2018a;b) demonstrates the capability of deep neural networks to complete large missing regions in natural images in a plausible fashion. The major difference between flow field inpainting and image inpainting lies in the fact that flow field data inherently follows the solution of the NavierStokes equations, and hence existing image inpainting algorithms can easily fail in physics-aware completion tasks as they never aim to capture the physical laws.\nIn this paper, we formulate the problem of flow data completion in large empty areas as an inpainting problem, but consider the mathematical equations that model the underlying fluid phenomena in the design of the network architecture and loss functions. By synergistically combining deep learning with fluid dynamics, we are able to inpaint data in large and irregular areas. We evaluate our proposed architectures and loss functions using thorough ablation studies both quantitatively and qualitatively. The contributions of this paper can be summarized as:\n• A DenseBlocks U-Net network architecture based on a stream function formulation to inpaint velocity values;\n• A set of physics-derived loss functions representing velocity, velocity gradient, divergence and vorticity;\n• A simple but effective way of handling solid obstacles in the learning process." }, { "heading": "2 RELATED WORK", "text": "Predicting missing data for incompressible Navier-Stokes equations has been studied in the Computational Fluid Dynamics (CFD) field. Sciacchitano et al. 
(2012) solves the unsteady Navier-Stokes equations locally, and dimensionality reduction approaches such as proper orthogonal decomposition are applied (Venturi & Karniadakis, 2004; Higham et al., 2016; Saini et al., 2016). Data completion of large occluded and empty areas is difficult to handle with these methods, and their running time might become prohibitively expensive depending on the application.\nWe adapt the idea of image inpainting, which has been intensively studied in the field of learning, to reconstruct missing flow data. Pathak et al. (2016) used Context Encoders as one of the first attempts at filling missing image data with a deep convolutional neural network. CNN-based methods are attractive due to their ability to reconstruct complex functions from only a few sparse samples while being highly efficient. The follow-up work by Iizuka et al. (2017) proposes a fully convolutional network to complete rectangular missing data regions. The approach, however, still relies on Poisson image blending as a post-processing step. Yu et al. (2018b) introduces a contextual attention layer to model long-range dependencies in images and a refinement network for post-processing, enabling end-to-end training. Zeng et al. (2019) extends previous work by extracting context attention maps in different layers of the encoder and skip-connecting the attention maps to the decoder. These approaches all include adversarial losses computed from a discriminator (Goodfellow et al., 2014) in order to better reconstruct visually appealing high-frequency details. However, high-frequency details from adversarial losses can result in mismatches from the ground-truth data (Huang et al., 2017), potentially yielding predicted data that diverge from physical laws. Liu et al. (2018) designs partial convolution operations for image inpainting, so that the prediction of the missing pixels is only conditioned on the valid pixels in the original image. The operation enables high-quality inpainting results without an adversarial loss.\nInpainting approaches have also been successfully used for scene completion and view path planning using data from sparse input views. Song et al. (2017) uses an end-to-end network, SSCNet, for scene completion, and Guo & Tong (2018) a view-volume CNN that extracts geometric features from 2D depth images. Zhang & Funkhouser (2018) presents an end-to-end architecture for depth inpainting, and Han et al. (2019) uses multi-view depth completion to predict point cloud representations. A 3D recurrent network has been used to integrate information from only a few input views (Choy et al., 2016), and Xu et al. (2016) uses the spatial and temporal structure of sequential observations to predict a view sequence.\nNeural networks have also recently been applied to fluid simulations. Applications include prediction of the entire dynamics (Wiewel et al., 2019), reconstruction of simulations from a set of input parameters (Kim et al., 2019b), interactive shape design (Umetani & Bickel, 2018), inferring hidden physics quantities (Raissi et al., 2018), and artistic control for visual effects (Kim et al., 2019a). A complete overview of machine learning for fluid dynamics can be found in Brunton et al. (2020)." }, { "heading": "3 LEARNING FLOW DATA", "text": "Our model is inspired by the image inpainting approach of Liu et al. (2018). The inpainting task in image space can benefit from the capability of deep neural networks to learn semantic priors in an end-to-end fashion. 
To reconstruct missing flow data, however, inherent laws of fluid dynamics should be considered by the neural network. In this section we detail the proposed physically-derived architecture and loss functions incorporated into our model, which result in improved fluid reconstructions." }, { "heading": "3.1 PHYSICS-AWARE NETWORK", "text": "Our goal is to train a network that can fill empty regions of incompressible velocity fields. The input scheme is similar to standard image inpainting tasks. Given a 2D velocity field uin with a missing fluid data region represented by a binary mask M (0 for empty and 1 for known regions), the network predicts a velocity field uout with the same dimensionality as the input. We adapt the U-Net (Ronneberger et al., 2015) by adding Dense Blocks (Huang et al., 2017) at the bottleneck and replacing the normal convolution operations with modified partial convolutions (Liu et al., 2018). We modified the original scaling factor of these partial convolutions to the mean of the binary mask M, which leads to sharper velocity profiles in the reconstructed field. The modified partial convolution at every location is defined as\nx′ = W⊤(X ⊙ M)/M̄ + b if ∑(M) > 0, and x′ = 0 otherwise, (1)\nwhere W are the convolution filter weights, X are the feature values for the current convolution window, b are biases, ⊙ denotes element-wise multiplication, and M̄ denotes the mean of the mask values within the window. In addition to the velocity prediction network (velocity branch), we implement a second network that directly predicts stream function values (stream function branch). This formulation can help to enforce incompressibility of the predicted velocity field and thus guide the network to predict velocities in a physically-aware manner, since ∇ · (∇ × Ψ) = 0. The curl operator is also fully differentiable. This formulation is therefore suitable to become an output layer for reconstructing incompressible flow data. Thus, the stream function branch takes feature maps from the last and second-last layers of the velocity branch as well as the inpainting mask and passes them through 4 densely connected convolution layers with swish activation functions (Ramachandran et al., 2017), outputting a stream function Ψ(x, y) field. The velocity field can then be reconstructed from the predicted stream function by applying the curl operator:\nu = ∇ × Ψ. (2)\nIn 2D, the stream function becomes a scalar field, and the resulting velocity components are:\nux = ∂ψ/∂y, uy = −∂ψ/∂x. (3)\nThe velocities from the velocity branch and the stream function branch are concatenated together and passed through a final prediction layer along with the inpainting mask. The final prediction layer is a single convolution layer that mixes velocity predictions from both branches. A detailed illustration of our architecture is shown in Figure 1. The exact parameters of the network can be found in Appendix A." }, { "heading": "3.2 LOSS FUNCTIONS", "text": "For incompressible flow data it is important to define a new set of supervised loss functions that model physical properties and constraints. For detailing the employed loss functions, we use û for predicted and u for ground-truth velocities. The L1 velocity loss efficiently reconstructs low-frequency information and is defined as\nLvel = ||û − u||1. (4)\nInspired by Kim et al. (2019b), we additionally minimize the difference of the velocity field Jacobian between ground-truth and predicted velocity fields. 
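As an aside, the curl-based reconstruction of Eqs. (2)-(3) is straightforward to sketch with finite differences. The following is a minimal PyTorch illustration, not the paper's actual implementation; the grid spacing dx and the (batch, channel, H, W) tensor layout are our assumptions:

```python
import torch

def velocity_from_stream(psi, dx=1.0):
    # u = curl(psi) on a grid (Eqs. 2-3): u_x = dpsi/dy, u_y = -dpsi/dx.
    # psi: (batch, 1, H, W); dim 2 is the y-axis, dim 3 is the x-axis.
    dpsi_dy = torch.gradient(psi, spacing=dx, dim=2)[0]
    dpsi_dx = torch.gradient(psi, spacing=dx, dim=3)[0]
    return torch.cat([dpsi_dy, -dpsi_dx], dim=1)   # (batch, 2, H, W)

def divergence(u, dx=1.0):
    # div(u) = du_x/dx + du_y/dy.
    dux_dx = torch.gradient(u[:, 0:1], spacing=dx, dim=3)[0]
    duy_dy = torch.gradient(u[:, 1:2], spacing=dx, dim=2)[0]
    return dux_dx + duy_dy

psi = torch.randn(1, 1, 96, 128)        # an arbitrary stream function field
u = velocity_from_stream(psi)
print(divergence(u).abs().max())        # zero up to floating-point error
```

Because the x- and y-difference operators act on independent axes and therefore commute, the divergence of a curl-derived field vanishes discretely, mirroring the incompressibility argument above.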
With a sufficiently smooth flow-field dataset, high-frequency features of the CNN are potentially in the null space of the L1 distance minimization (Kim et al., 2019b). Thus, matching the Jacobians helps the network to recover high-frequency spectral information, while it also regularizes the reconstructed velocity to match ground-truth derivatives. The velocity Jacobian J(u) is defined in 2D as\nJ(u) = [∂ux/∂x, ∂ux/∂y; ∂uy/∂x, ∂uy/∂y], (5)\nand the corresponding loss function is simply given as the L1 norm of the vectorized Jacobian difference between predicted and ground-truth velocities:\nLjacobian = ||J(û) − J(u)||1. (6)\nAdditionally, we compute a loss function that matches the vorticity of predicted and ground-truth velocities. The vorticity field describes the local spinning motion of the velocity field. Similarly to the Jacobian loss, our vorticity loss acts as a directional high-frequency filter that helps to match shearing derivatives of the original data, enhancing the capability of the model to properly match the underlying fluid dynamics. The vorticity loss is given by\nLvort = ||∇ × û − ∇ × u||1. (7)\nIncompressible flows should have zero divergence; however, numerical simulations often produce results that are not strictly divergence-free due to discretization errors. As we combine predictions from the velocity and stream function branches, we are able to match the divergence of the original and predicted fields by minimizing\nLdiv = ||∇ · û − ∇ · u||1. (8)\nSimilarly to previous works, each loss function is applied both on the known and unknown regions with potentially different weights. The weight selection is illustrated in Appendix A. We exclude the perceptual Lperceptual, style Lstyle and total variation Ltv losses from the image inpainting model of Liu et al. (2018). Although these losses successfully improve the visual quality of predicted images, they are not suited for completing flow-field data, since they match pre-learned filters from image classification architectures." }, { "heading": "3.3 ENCODING OBSTACLES", "text": "The interaction between fluid and solid obstacles is crucial for fluid dynamics applications. To incorporate solid obstacle information as prior knowledge into the network, we concatenate a binary mask O, indicating whether a solid obstacle occupies a cell, as an extra input channel. In order to properly propagate the obstacle information to all network layers, the obstacle occupancy information is concatenated to the previous layer’s output. To accomplish that, we downsample the obstacle map O to match the dimensions of the respective layer, similarly to the empty-region mask M." }, { "heading": "4 EXPERIMENTS AND RESULTS", "text": "" }, { "heading": "4.1 DATA GENERATION", "text": "Due to the lack of publicly available flow datasets captured from real-world experiments, we trained our model on synthetic data. We generated fluid velocity fields with a numerical flow solver for incompressible fluids (Mantaflow (Thuerey & Pfaff, 2018)). Each training data sample consists of a 2-dimensional vector field containing the velocity components u and v, and the empty-region and obstacle masks M and O. During training, different types of empty-region masks are generated on the fly, with empty-to-filled area ratios that vary randomly from 25 to 99 percent. 
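To make the loss terms concrete, the sketch below assembles Eqs. (4)-(8) with finite differences. The per-region weights are placeholder values, not those of Table 5, and the (batch, 2, H, W) layout is our assumption:

```python
import torch

def grad_x(f, dx=1.0):   # d/dx along the width axis
    return torch.gradient(f, spacing=dx, dim=3)[0]

def grad_y(f, dx=1.0):   # d/dy along the height axis
    return torch.gradient(f, spacing=dx, dim=2)[0]

def physics_losses(u_hat, u, mask, w_known=1.0, w_empty=6.0):
    # u_hat, u: (B, 2, H, W); mask: (B, 1, H, W) with 1 = known, 0 = empty.
    w = w_known * mask + w_empty * (1.0 - mask)

    def weighted_l1(a, b):            # region-weighted L1 for Eqs. (4), (6)
        return (w * (a - b).abs()).mean()

    jac = lambda v: torch.cat([grad_x(v), grad_y(v)], dim=1)   # Eq. (5)
    vort = lambda v: grad_x(v[:, 1:2]) - grad_y(v[:, 0:1])     # 2D curl
    div = lambda v: grad_x(v[:, 0:1]) + grad_y(v[:, 1:2])

    l_vel = weighted_l1(u_hat, u)                              # Eq. (4)
    l_jac = weighted_l1(jac(u_hat), jac(u))                    # Eq. (6)
    l_vort = (vort(u_hat) - vort(u)).abs().mean()              # Eq. (7)
    l_div = (div(u_hat) - div(u)).abs().mean()                 # Eq. (8)
    return l_vel + l_jac + l_vort + l_div
```

Note that Lvort and Ldiv are applied with uniform weights here, matching the remark in Appendix A.2 that only the remaining losses are weighted per region.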
We generate three different types of masks: uniform random noise masks mimic possible sampling noise from real-world velocity measurements; scan path masks simulate the paths of a velocity probe; and large region masks model large occluded areas that are not reachable by probes or measurement devices. Illustrations of the mask types can be seen in the leftmost column of Figure 3.\nTo evaluate the proposed architecture and loss functions, we applied our model to two different datasets, both computed on grid resolutions of 128 × 96. The first wind tunnel dataset implements a scene with turbulent flow around obstacles. We define inflow velocities at the bottom and top regions of the domain, while the remaining two sides (left and right) are set as free-flow (open) boundary conditions. The inflow speed is set to random values, and obstacles (6 spheres, 6 rectangles) are randomly positioned, yielding a total of 32,000 simulation frames. The second simple plume dataset implements smoke rising from a source at the bottom of a fully enclosed box, which represents a common setup in graphics applications. In this dataset, no solid obstacles are included, and the source position and size are the parameters that vary across different simulations. In total, 21,000 simulation frames are present in the simple plume dataset." }, { "heading": "4.2 ABLATION STUDIES", "text": "To investigate the effects of the various components introduced in previous sections, we performed a series of ablation studies, with results shown in Table 1. We train and evaluate the architectures of different configurations on the wind tunnel dataset, selectively deactivating the DenseBlocks, as well as the stream function and velocity branches. We also compare the effect of the proposed loss functions by progressively adding them to different architecture configurations. Besides evaluating Mean Absolute Error (MAE) over the whole dataset, we also evaluate the model capabilities under varying masking occlusion levels in Figure 2.\nOur ablation study shows that the quality of inpainting results can be significantly improved by Dense Blocks (entries (a-c) and (e-g)) when compared with architectures without DenseBlocks (entries (d) and (h-k)). We also tested adding loss functions progressively to architectures with (entries (a) and (e-g)) and without DenseBlocks (entries (h-k)) to evaluate whether those effects are cumulative. Combining the proposed physically-inspired losses yields better results also in this case, which strongly indicates that they contribute to better results even across distinct architectures.\nThe model that employs only a stream function branch (b) performs similarly to our full model (a). Note that since model (b) reconstructs velocities by taking the curl of stream function predictions, incompressibility is guaranteed and thus no divergence loss is used. However, the synthetic velocity field data has discretization errors and is not truly divergence-free. Therefore, the approach with a single stream function branch cannot capture the divergent modes present in the original data, yielding a higher MAE than the combined-branch approach. We hypothesize that a pure stream function architecture would fare better with velocity fields obtained from real-world experiments or highly accurate flow solvers, where the divergence is closer to zero.\nWe notice that the model with only the velocity branch (c) has a lower MAE over the whole dataset. 
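The exact mask generators are not spelled out in the paper, but the three families described in Section 4.1 could plausibly be constructed as below. This is a hedged numpy sketch (1 = known, 0 = empty, matching the convention of Section 3.1; all parameters are illustrative choices):

```python
import numpy as np

def random_noise_mask(h, w, occlusion):
    # Uniform random noise: each cell is dropped independently.
    return (np.random.rand(h, w) >= occlusion).astype(np.float32)

def scan_path_mask(h, w, n_sweeps=6):
    # Scan path: keep a few straight probe sweeps, drop everything else.
    m = np.zeros((h, w), np.float32)
    for _ in range(n_sweeps):
        if np.random.rand() < 0.5:
            m[np.random.randint(h), :] = 1.0   # horizontal sweep
        else:
            m[:, np.random.randint(w)] = 1.0   # vertical sweep
    return m

def large_region_mask(h, w, occlusion):
    # One large rectangular hole roughly matching the target ratio.
    rh = int(round(h * np.sqrt(occlusion)))
    rw = int(round(w * np.sqrt(occlusion)))
    m = np.ones((h, w), np.float32)
    y0 = np.random.randint(0, h - rh + 1)
    x0 = np.random.randint(0, w - rw + 1)
    m[y0:y0 + rh, x0:x0 + rw] = 0.0
    return m
```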
However, Figure 2 shows that this model has a low capability of inpainting samples with large empty regions. This indicates that the stream function branch can better guide the network to predict results obeying physical laws, while the velocity branch can help predict results based on information from the known region. The functionality of the two branches is shown more clearly in Figure 3, where the models (a,b,c) are visually compared. To plot the velocity fields, we use an HSV colormap that encodes flow direction and relative velocity in the hue and saturation, respectively. The model in (c) only uses the velocity branch information, and it reconstructs lower-frequency content better, as this information is easier to infer from surrounding known regions. Using the stream function branch only, model (b) better predicts high-frequency information, but mistakenly reconstructs lower-frequency regions. Finally, model (a) combines the advantages of models (b) and (c) by using both branches, and its predictions are more precise in both the higher and lower frequency ranges." }, { "heading": "4.3 PREDICTION RESULTS", "text": "Figure 4 shows the results of applying different mask profiles to the simple plume dataset. The top left example shows a scan path mask (left), comparing the reconstructed (middle) and ground-truth velocity profiles (right). Even with a sparse scan mask with an occlusion of 0.74, our reconstruction is able to recreate velocity profiles that are close to the ground truth, even in regions that are far away from the mask. The top and bottom right images of Figure 4 show similar examples, while the bottom left image depicts a random noise mask to mimic the effects of noisy sampling. In Figure 5 we show corresponding results for the wind tunnel dataset. In this challenging scenario, we use large region masks in combination with obstacles immersed in the fluid. Our results show that our method is able to accurately capture flows around obstacles, even though very sparse velocity field samples were used. We additionally compare the original image inpainting model (Liu et al., 2018) (Table 1 (k)) with our best architecture (Table 1 (a)). The results are shown in Figure 6, indicating that our approach can reconstruct more flow structures, especially near immersed obstacles." }, { "heading": "5 CONCLUSIONS", "text": "We have presented a physics-aware architecture for inpainting missing velocity data. We have shown that our method is especially powerful for data completion of large areas with more than 50% missing data entries. Using the proposed stream function branch in combination with DenseBlocks has proven to be the key element in reducing Mean Absolute Error (MAE), and augmenting the architectures with our physically-derived loss functions has further improved accuracy. The proposed method outperforms existing image inpainting models when applied to flow data, demonstrating the effectiveness of including knowledge about fluid dynamics in the network design.\nWe have evaluated our method on 2-dimensional data. Extending the method to 3-dimensional flows that exhibit more turbulent structures is an essential next step. The major challenge with 3D data, however, is the large memory consumption, which is especially critical for high-resolution simulations. Approaches based on progressive patch-based inpainting (Isola et al., 2017) or view-by-view inpainting (Han et al., 2019) could be relevant for reducing the memory footprint. 
Further tests are also needed with simulation datasets that capture different real-world scenarios, as well as data from real-world measurements." }, { "heading": "A APPENDIX", "text": "A.1 NETWORK PARAMETERS\nThe U-Net architecture is similar to the one described in (Liu et al., 2018), with the sole difference that stride-2 partial convolutions are only applied in every second layer, in order to fit the dataset resolution of 128 by 96.\nFor the DenseBlocks U-Net, we replace layers 7 to 10 with a DenseBlock described in Table 3. Note that a DenseBlock as described in Huang et al. (2017) has skip connections between all layers: the outputs of all previous layers inside the block are concatenated and form the input of the current layer. The encoder part is also modified to achieve a constant compression ratio of 1.5 over each subsequent layer, see Table 2. The number of features is computed as 64 · 4^⌊l/2⌋ / 1.5^(l−1), with l being the layer number from 1 to 7. The decoder then connects from layer 17 to 22 and is built symmetrically to the encoder. Note that this architecture, although deeper compared to the U-Net (22 vs 16 layers), has only about 28% as many trainable parameters.\nThe stream function prediction network is a DenseBlock that consists of 5 convolution layers, and is described in Table 4. Activation is done with the swish function (Ramachandran et al., 2017) instead of the conventional ReLU, because it turned out to facilitate training and improve accuracy.\nTraining for all models is done using the Adam optimizer with a learning rate of 0.01 and a batch size of 16. During training, all masks are used equally, and the occlusion is set uniformly at random to a value between 25% and 99%.\nA.2 LOSS FUNCTION WEIGHTS\nThe weights for all loss functions are listed in Table 5. Except for Lvort and Ldiv, all losses are weighted differently on known and empty regions. We choose higher weights on empty regions because they are more important for the final results." } ]
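The per-layer feature formula above is easy to check. A tiny Python sketch (rounding to integer channel counts is our assumption, since Table 2 is not reproduced here):

```python
def encoder_features(l):
    # 64 * 4^floor(l/2) / 1.5^(l-1), for encoder layers l = 1..7.
    return round(64 * 4 ** (l // 2) / 1.5 ** (l - 1))

print([encoder_features(l) for l in range(1, 8)])
# -> [64, 171, 114, 303, 202, 539, 360]
```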
2019
PHYSICS-AWARE FLOW DATA COMPLETION USING NEURAL INPAINTING
SP:3312a33df68b98ddffec4706f031fc2173181d05
[ "The goal of this paper is to analyze knowledge consistency between pretrained deep neural nets. In order to do so the paper trains neural networks to predict a hidden layer of one DNN using a hidden layer of another DNN. The model is interesting in that it is multi layer but it also allows decomposing its prediction as the sum of outputs of neural nets with different numbers of hidden layers. The prediction is dubbed \"consistent features\", which are further decomposed in consistent features of different complexity levels, while the error is dubbed \"inconsistent features\".", "This paper presents a method to disentangle intermediate features between two different deep neural networks. More specifically, given two networks, the proposed approach aims to find consistent and inconsistent feature components for a certain layer in each network. If one network is more powerful to the other (e.g., ResNet and AlexNet), the method can figure out which components are weak or strong (i.e., helpful to the performance) for the given task. The authors design a simple yet effective algorithm for extracting knowledge consistency. In addition, they provide a variety of practical experiments including network diagnosis, feature refinement, and network compression. Most of the experimental results support that the proposed method can practically extract consistent feature components with promising improvements." ]
This paper aims to analyze knowledge consistency between pre-trained deep neural networks. We propose a generic definition for knowledge consistency between neural networks at different fuzziness levels. A task-agnostic method is designed to disentangle feature components, which represent the consistent knowledge, from raw intermediate-layer features of each neural network. As a generic tool, our method can be broadly used for different applications. In preliminary experiments, we have used knowledge consistency as a tool to diagnose representations of neural networks. Knowledge consistency provides new insights to explain the success of existing deep-learning techniques, such as knowledge distillation and network compression. More crucially, knowledge consistency can also be used to refine pre-trained networks and boost performance.
[ { "affiliations": [], "name": "Ruofan Liang" }, { "affiliations": [], "name": "Tianlin Li" }, { "affiliations": [], "name": "Longfei Li" }, { "affiliations": [], "name": "Jing Wang" }, { "affiliations": [], "name": "Quanshi Zhanga" } ]
[ { "authors": [ "Alessandro Achille", "Stefano Soatto" ], "title": "Information dropout: learning optimal representations through noise", "venue": "In Transactions on PAMI,", "year": 2018 }, { "authors": [ "Devansh Arpit", "Stanislaw Jastrzebski", "Nicolas Ballas", "David Krueger", "Emmanuel Bengio", "Maxinder S. Kanwal", "Tegan Maharaj", "Asja Fischer", "Aaron Courville", "Yoshua Bengio", "Simon Lacoste-Julien" ], "title": "A closer look at memorization in deep networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "David Bau", "Bolei Zhou", "Aditya Khosla", "Aude Oliva", "Antonio Torralba" ], "title": "Network dissection: Quantifying interpretability of deep visual representations", "venue": null, "year": 2017 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "In arXiv:1206.5538v3,", "year": 2014 }, { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Infogan: Interpretable representation learning by information maximizing generative adversarial nets", "venue": null, "year": 2016 }, { "authors": [ "Hao Cheng", "Dongze Lian", "Shenghua Gao", "Yanlin Geng" ], "title": "Evaluating capability of deep neural networks for image classification via information", "venue": null, "year": 2018 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In CVPR,", "year": 2009 }, { "authors": [ "Lior Deutsch" ], "title": "Generating neural networkswith neural networks", "venue": "In arXiv:1801.01952,", "year": 2018 }, { "authors": [ "Alexey Dosovitskiy", "Thomas Brox" ], "title": "Inverting visual representations with convolutional networks", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "M. Everingham", "S.M.A. Eslami", "L. Van Gool", "C.K.I. Williams", "J. Winn", "A. Zisserman" ], "title": "The pascal visual object classes challenge: A retrospective", "venue": "In International Journal of Computer Vision (IJCV),", "year": 2015 }, { "authors": [ "Ruth Fong", "Andrea Vedaldi" ], "title": "Net2vec: Quantifying and explaining how concepts are encoded by filters in deep neural networks", "venue": null, "year": 2018 }, { "authors": [ "Ruth C. Fong", "Andrea Vedaldi" ], "title": "Interpretable explanations of black boxes by meaningful perturbation", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Stanislav Fort", "Pawel Krzysztof Nowak", "Srini Narayanan" ], "title": "Stiffness: A new perspective on generalization in neural networks", "venue": "In arXiv:1901.09491,", "year": 2019 }, { "authors": [ "Tommaso Furlanello", "Zachary Lipton", "Michael Tschannen", "Laurent Itti", "Anima Anandkumar" ], "title": "Born again neural networks", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Chaoyu Guan", "Xiting Wang", "Quanshi Zhang", "Runjin Chen", "Di He", "Xing Xie" ], "title": "Towards a deep and unified understanding of deep neural models in nlp", "venue": null, "year": 2019 }, { "authors": [ "Song Han", "Huizi Mao", "William J. 
Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "β-vae: learning basic visual concepts with a constrained variational framework", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "In NIPS Workshop,", "year": 2014 }, { "authors": [ "Shikun Huang", "Binbin Zhang", "Wen Shen", "Zhihua Wei", "Quanshi Zhang" ], "title": "Utility analysis of network architectures for 3d point cloud processing", "venue": null, "year": 1911 }, { "authors": [ "Aditya Khosla", "Nityananda Jayadevaprakash", "Bangpeng Yao", "Li Fei-Fei" ], "title": "Novel dataset for finegrained image categorization", "venue": "In First CVPR Workshop on Fine-Grained Visual Categorization (FGVC),", "year": 2011 }, { "authors": [ "Pieter-Jan Kindermans", "Kristof T. Schütt", "Maximilian Alber", "Klaus-Robert Müller", "Dumitru Erhan", "Been Kim", "Sven Dähne" ], "title": "Learning how to explain neural networks: Patternnet and patternattribution", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "PangWei Koh", "Percy Liang" ], "title": "Understanding black-box predictions via influence functions", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Simon Kornblith", "Mohammad Norouzi", "Honglak Lee", "Geoffrey Hinton" ], "title": "Similarity of neural network representations revisited", "venue": "In arXiv:1905.00414,", "year": 2019 }, { "authors": [ "A. Krizhevsky", "I. Sutskever", "G.E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In NIPS,", "year": 2012 }, { "authors": [ "Himabindu Lakkaraju", "Ece Kamar", "Rich Caruana", "Eric Horvitz" ], "title": "Identifying unknown unknowns in the open world: Representations and policies for guided exploration", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Scott M. Lundberg", "Su-In Lee" ], "title": "A unified approach to interpreting model predictions", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Haotian Ma", "Yinqing Zhang", "Fan Zhou", "Quanshi Zhang" ], "title": "Quantifying layerwise information discarding of neural networks", "venue": "In arXiv:1906.04109,", "year": 2019 }, { "authors": [ "Aravindh Mahendran", "Andrea Vedaldi" ], "title": "Understanding deep image representations by inverting them", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Asit K. Mishra", "Debbie Marr" ], "title": "Apprentice: Using knowledge distillation techniques to improve low-precision network", "venue": null, "year": 2018 }, { "authors": [ "Grégoire Montavon", "Mikio L. Braun", "Klaus-Robert Müller" ], "title": "Kernel analysis of deep networks", "venue": "In Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Ari S. Morcos", "Maithra Raghu", "Samy Bengio" ], "title": "Insights on representational similarity in neural networks with canonical correlation", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "Roman Novak", "Yasaman Bahri", "Daniel A. 
Abolafia", "Jeffrey Pennington", "Jascha Sohl-Dickstein" ], "title": "Sensitivity and generalization in neural networks: An empirical study", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Maithra Raghu", "Justin Gilmer", "Jason Yosinski", "Jascha Sohl-Dickstein" ], "title": "Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability", "venue": null, "year": 2017 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "why should i trust you?” explaining the predictions of any classifier", "venue": null, "year": 2016 }, { "authors": [ "Sara Sabour", "Nicholas Frosst", "Geoffrey E. Hinton" ], "title": "Dynamic routing between capsules", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Ravid Schwartz-Ziv", "Naftali Tishby" ], "title": "Opening the black box of deep neural networks via information", "venue": "In arXiv:1703.00810,", "year": 2017 }, { "authors": [ "Ramprasaath R. Selvaraju", "Michael Cogswell", "Abhishek Das", "Ramakrishna Vedantam", "Devi Parikh", "Dhruv Batra" ], "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "venue": null, "year": 2017 }, { "authors": [ "Marcel Simon", "Erik Rodner" ], "title": "Neural activation constellations: Unsupervised part model discovery with convolutional networks", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In arXiv:1312.6199v4,", "year": 2014 }, { "authors": [ "C. Wah", "S. Branson", "P. Welinder", "P. Perona", "S. Belongie" ], "title": "The caltech-ucsd birds-200-2011 dataset", "venue": "Technical report, In California Institute of Technology,", "year": 2011 }, { "authors": [ "Liwei Wang", "Lunjia Hu", "Jiayuan Gu", "Yue Wu", "Zhiqiang Hu", "Kun He", "John Hopcroft" ], "title": "Towards understanding learning representations: To what extent do different neural networks learn the same representation", "venue": null, "year": 2018 }, { "authors": [ "Natalie Wolchover" ], "title": "New theory cracks open the black box of deep learning", "venue": null, "year": 2017 }, { "authors": [ "Aolin Xu", "Maxim Raginsky" ], "title": "Information-theoretic analysis of generalization capability of learning algorithms", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Jason Yosinski", "Jeff Clune", "Yoshua Bengio", "Hod Lipson" ], "title": "How transferable are features in deep neural networks", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Matthew D. 
Zeiler", "Rob Fergus" ], "title": "Visualizing and understanding convolutional networks", "venue": "In ECCV,", "year": 2014 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Undersantding deep learning requires rethinking generalization", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Hao Zhang", "Jiayi Chen", "Haotian Xue", "Quanshi Zhang" ], "title": "Towards a unified evaluation of explanation methods without ground truth", "venue": null, "year": 1911 }, { "authors": [ "Quanshi Zhang", "Wenguan Wang", "Song-Chun Zhu" ], "title": "Examining cnn representations with respect to dataset bias", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Quanshi Zhang", "Ying Nian Wu", "Song-Chun Zhu" ], "title": "Interpretable convolutional neural networks", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Bolei Zhou", "Aditya Khosla", "Agata Lapedriza", "Aude Oliva", "Antonio Torralba" ], "title": "Object detectors emerge in deep scene cnns", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Bolei Zhou", "Yiyou Sun", "David Bau", "Antonio Torralba" ], "title": "Interpretable basis decomposition for visual explanation", "venue": "In ECCV,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) have shown promise in many tasks of artificial intelligence. However, there is still lack of mathematical tools to diagnose representations in intermediate layers of a DNN, e.g. discovering flaws in representations or identifying reliable and unreliable features. Traditional evaluation of DNNs based on the testing accuracy cannot insightfully examine the correctness of representations of a DNN due to leaked data or shifted datasets (Ribeiro et al., 2016).\nThus, in this paper, we propose a method to diagnose representations of intermediate layers of a DNN from the perspective of knowledge consistency. I.e. given two DNNs pre-trained for the same task, no matter whether the DNNs have the same or different architectures, we aim to examine whether intermediate layers of the two DNNs encode similar visual concepts.\nHere, we define the knowledge of an intermediate layer of the DNN as the set of visual concepts that are encoded by features of an intermediate layer. This research focuses on the consistency of “knowledge” between two DNNs, instead of comparing the similarity of “features.” In comparison, the feature is referred to as the explicit output of a layer. For example, two DNNs extract totally different features, but these features may be computed using similar sets of visual concepts, i.e. encoding consistent knowledge (a toy example of knowledge consistency is shown in the footnote1).\nGiven the same training data, DNNs with different starting conditions usually converge to different knowledge representations, which sometimes leads to the over-fitting problem (Bengio et al., 2014). However, ideally, well learned DNNs with high robustness usually comprehensively encode various types of discriminative features and keep a good balance between the completeness and the discrimination power of features (Wolchover, 2017). Thus, the well learned DNNs are supposed to converge to similar knowledge representations.\nIn general, we can understand knowledge consistency as follows. Let A and B denote two DNNs learned for the same task. xA and xB denote two intermediate-layer features of A and B, respec-\n∗Ruofan Liang and Tianlin Li contribute equally to this research. Quanshi Zhang is the corresponding author with the John Hopcroft Center and MoE Key Lab of Artificial Intelligence AI Institute, Shanghai Jiao Tong University, China. zqs1022@sjtu.edu.cn\n1As a toy example, we show how to revise a pre-trained DNN to generate different features but represent consistent knowledge. The revised DNN shuffles feature elements in a layer xrev = Ax and shuffles the feature back in the next convolutional layer x̂ = Wrev · xrev, where Wrev = WA−1; A is a permutation matrix. The knowledge encoded in the shuffled feature is consistent with the knowledge in the original feature.\ntively. Features xA and xB can be decomposed as xA = x̂A + A and xB = x̂B + B , where neural activations in feature components x̂A and x̂B are triggered by same image regions, thereby representing consistent knowledge. Accordingly, feature components x̂A and x̂B are termed consistent features between A and B. Then, feature components A and B are independent with each other, and they are termed inconsistent features. We assume that consistent components x̂A, x̂B can reconstruct each other, i.e. x̂A can be reconstructed from x̂B; vice versa.\nIn terms of applications, knowledge consistency between DNNs can be used to diagnose feature reliability of DNNs. 
Usually, consistent components (x̂A, x̂B) represent common and reliable knowledge, whereas inconsistent components (εA, εB) mainly represent unreliable knowledge or noise.\nTherefore, in this paper, we propose a generic definition for knowledge consistency between two pre-trained DNNs, and we develop a method to disentangle consistent feature components from features of intermediate layers in the DNNs. Our method is task-agnostic, i.e. (1) our method does not require any annotations w.r.t. the task for evaluation; (2) our method can be broadly applied to different DNNs as a supplementary evaluation of DNNs besides the testing accuracy. Experimental results supported our assumption, i.e. the disentangled consistent feature components are usually more reliable for the task. Thus, our method of disentangling consistent features can be used to boost performance.\nFurthermore, to enable solid research on knowledge consistency, we consider the following issues.\n\n• Fuzzy consistency at different levels (orders): As shown in Fig. 1, the knowledge consistency between DNNs needs to be defined at different fuzziness levels, because there is no strict knowledge consistency between two DNNs. The level of fuzziness in knowledge consistency measures the difficulty of transforming features of a DNN to features of another DNN. A low-level fuzziness indicates that a DNN’s feature can directly reconstruct another DNN’s feature without complex transformations.\n\n• Disentanglement & quantification: We need to disentangle and quantify feature components, which correspond to the consistent knowledge at different fuzziness levels, away from the chaotic feature. Similarly, we also disentangle and quantify feature components that are inconsistent.\n\nThere does not exist a standard method to quantify the fuzziness of knowledge consistency (i.e. the difficulty of feature transformation). For simplification, we use non-linear transformations during feature reconstruction to approximate the fuzziness. To this end, we propose a model gk for feature reconstruction. The subscript k indicates that gk contains a total of k cascaded non-linear activation layers. x̂A = gk(xB) represents components that are disentangled from the feature xA of DNN A and can be reconstructed from DNN B’s feature xB. Then, we consider x̂A = gk(xB) to represent consistent knowledge w.r.t. DNN B at the k-th fuzziness level (or the k-th order). x̂A = gk(xB) is also termed the k-order consistent feature of xA w.r.t. xB.\nIn this way, the strictest consistency is the 0-order consistency, i.e. x̂A = g0(xB) can be reconstructed from xB via a linear transformation. In comparison, some neural activations in the 1-order consistent feature x̂A = g1(xB) are not directly represented by xB and need to be predicted via a non-linear transformation. A smaller k indicates less prediction involved in the reconstruction and stricter consistency. Note that the number of non-linear operations k is just a rough approximation of the difficulty of prediction, since there are no standard methods to quantify prediction difficulties.\nMore crucially, we implement the model g as a neural network, where k is set as the number of non-linear layers in g. As shown in Fig. 2, g is designed to disentangle and quantify consistent feature components of different orders between DNNs. Our method can be applied to different types of DNNs and explain the essence of various deep-learning techniques.\n1. 
Our method provides a new perspective for explaining the effectiveness of knowledge distillation. I.e. we explore the essential reason why the born-again network (Furlanello et al., 2018) exhibits superior performance.\n2. Our method gives an insightful analysis of network compression.\n3. Our method can be used to diagnose and refine knowledge representations of pre-trained DNNs and boost performance without any additional annotations for supervision.\nContributions of this study can be summarized as follows. (1) In this study, we focus on a new problem, i.e. the knowledge consistency between DNNs. (2) We define the knowledge consistency and propose a task-agnostic method to disentangle and quantify consistent features of different orders. (3) Our method can be considered as a mathematical tool to analyze feature reliability of different DNNs. (4) Our method provides a new perspective on explaining existing deep-learning techniques, such as knowledge distillation and network compression." }, { "heading": "2 RELATED WORK", "text": "In spite of the significant discrimination power of DNNs, black-box representations of DNNs have been considered an Achilles’ heel for decades. In this section, we will limit our discussion to the literature on explaining or analyzing knowledge representations of DNNs. In general, previous studies can be roughly classified into the following three types.\nExplaining DNNs visually or semantically: Visualization of DNNs is the most direct way of explaining knowledge hidden inside a DNN, which includes gradient-based visualization (Zeiler & Fergus, 2014; Mahendran & Vedaldi, 2015) and inversion-based visualization (Dosovitskiy & Brox, 2016). Zhou et al. (2015) developed a method to compute the actual image-resolution receptive field of neural activations in a feature map of a convolutional neural network (CNN), which is smaller than the theoretical receptive field based on the filter size. Based on the receptive field, six types of semantics were defined to explain intermediate-layer features of CNNs, including objects, parts, scenes, textures, materials, and colors (Bau et al., 2017; Zhou et al., 2018).\nBeyond visualization, some methods diagnose a pre-trained CNN to obtain an insightful understanding of CNN representations. Fong and Vedaldi (2018) analyzed how multiple filters jointly represented a specific semantic concept. Selvaraju et al. (2017), Fong & Vedaldi (2017), and Kindermans et al. (2018) estimated image regions that directly contributed to the network output. LIME (Ribeiro et al., 2016) and SHAP (Lundberg & Lee, 2017) assumed a linear relationship between the input and output of a DNN to extract important input units. Zhang et al. (2019) proposed a number of metrics to evaluate the objectiveness of the results of different explanation methods, even when ground-truth explanations for the DNN are unavailable.\nUnlike previous studies visualizing the visual appearance encoded in a DNN or extracting important pixels, our method disentangles and quantifies the consistent components of features between two DNNs. Consistent feature components of different orders can be explicitly visualized.\nLearning explainable deep models: Compared to post-hoc explanations of DNNs, some studies directly learn more meaningful CNN representations. Previous studies extracted scene semantics (Zhou et al., 2015) and mined objects (Simon & Rodner, 2015) from intermediate layers. 
In the capsule net (Sabour et al., 2017), each output dimension of a capsule usually encoded a specific meaning. Zhang et al. (2018b) proposed to learn CNNs with disentangled intermediate-layer representations. The infoGAN (Chen et al., 2016) and β-VAE (Higgins et al., 2017) learned interpretable input codes for generative models.\nMathematical evaluation of the representation capacity: Formulating and evaluating the representation capacity of DNNs is another emerging direction. Novak et al. (2018) proposed generic metrics for the sensitivity of network outputs with respect to parameters of neural networks. Zhang et al. (2017) discussed the relationship between the parameter number and the generalization capacity of deep neural networks. Arpit et al. (2017) discussed the representation capacity of neural networks, considering both real training data and noise. Yosinski et al. (2014) evaluated the transferability of filters in intermediate layers. Network-attack methods (Szegedy et al., 2014; Koh & Liang, 2017) can also be used to evaluate representation robustness by computing adversarial samples. Lakkaraju et al. (2017) discovered blind spots of the knowledge encoded by a DNN via human-computer interaction. Zhang et al. (2018a) discovered potentially biased representations of a CNN due to dataset bias. Deutsch (2018) learned the manifold of network parameters to diagnose DNNs. Huang et al. (2019) quantified the representation utility of different layerwise network architectures in the scenario of 3D point cloud processing. Recently, the stiffness (Fort et al., 2019) was proposed to evaluate the generalization of DNNs.\nThe information-bottleneck theory (Wolchover, 2017; Schwartz-Ziv & Tishby, 2017) provides a generic metric to quantify the information contained in DNNs, and it can be extended to evaluate the representation capacity of DNNs (Xu & Raginsky, 2017; Cheng et al., 2018). Achille & Soatto (2018) further used the information-bottleneck theory to improve feature representations of a DNN. Guan et al. (2019) and Ma et al. (2019) quantified the word-wise/pixel-wise information discarding during the layerwise feature propagation of a DNN, and used the information discarding as a tool to diagnose feature representations.\nThe analysis of representation similarity between DNNs, which has been investigated in recent years (Montavon et al., 2011), is most closely related to our research. Most previous studies (Wang et al., 2018; Kornblith et al., 2019; Raghu et al., 2017; Morcos et al., 2018) compared representations via linear transformations, which can be considered as 0-order knowledge consistency. In comparison, we focus on high-order knowledge consistency. More crucially, our method can be used to refine network features and explain the success of existing deep-learning techniques." }, { "heading": "3 ALGORITHM", "text": "In this section, we will introduce the network architecture to disentangle feature components of consistent knowledge at a certain fuzziness level, when we use the intermediate-layer feature x of a DNN to reconstruct the intermediate-layer feature x∗ of another DNN2. As shown in Fig. 2, the network g with parameters θ has a recursive architecture with K + 1 blocks. The function of the k-th block is given as follows:\nh(k) = W(k)[x + xhigher], xhigher = p(k+1) ReLU(Σ(k+1)^(−1/2) h(k+1)), k = 0, 1, . . ., K−1, (1)
The output feature is computed using both the raw input and the feature of the higher order, h(k+1). W(k) denotes a linear operation without a bias term. The last block is given as h(K) = W(K)x. This linear operation can be implemented as either a layer in an MLP or a convolutional layer in a CNN. Σ(k+1) = diag(σ1², σ2², . . ., σn²) is a diagonal matrix for the element-wise variance of h(k+1), where σm² is the variance of the m-th element of h(k+1) across various input images. Σ(k+1) is used to normalize the magnitude of neural activations. Because of the normalization, the scalar value p(k+1) roughly controls the activation magnitude of h(k+1) w.r.t. h(k). h(0) = gθ(x) corresponds to the final output of the network.\n2Re-scaling the magnitude of all neural activations does not affect the knowledge encoded in the DNN. Therefore, we normalize x and x∗ to zero mean and unit variance to remove the magnitude effect.\nIn this way, the entire network can be separated into (K + 1) branches (see Fig. 2), where the k-th branch (k ≤ K) contains a total of k non-linear layers. Note that the k1-order consistent knowledge can also be represented by the k2-th branch, if k1 < k2.\nIn order to disentangle consistent features of different orders, the k-th branch is supposed to exclusively encode the k-order consistent features without representing lower-order consistent features. Thus, we propose the following loss to guide the learning process:\nLoss(θ) = ‖gθ(x) − x∗‖² + λ ∑_{k=1}^{K} (p(k))², (2)\nwhere x and x∗ denote intermediate-layer features of two pre-trained DNNs. The second term in this loss penalizes neural activations from high-order branches, thereby forcing as much low-order consistent knowledge as possible to be represented by low-order branches.\nFurthermore, based on the (K + 1) branches of the network, we can disentangle features of x∗ into (K + 2) additive components:\nx∗ = gθ(x) + x∆, gθ(x) = x(0) + x(1) + · · · + x(K), (3)\nwhere x∆ = x∗ − gθ(x) indicates feature components that cannot be represented by x. x(k) denotes feature components that are exclusively represented by the k-order branch.\nBased on Equation (1), the signal-processing procedure for the k-th feature component can be represented as Convk → Normk → ReLUk → p(k) → Convk−1 → · · · → ReLU1 → p(1) → Conv0 (see Fig. 2). Therefore, we can disentangle from Equation (1) all linear and non-linear operations along the entire k-th branch as follows, and in this way, x(k) can be considered as the k-order consistent feature component:\nx(k) = W(0) ( ∏_{k′=1}^{k} p(k′) A(k′) Σ(k′)^(−1/2) W(k′) ) x, (4)\nwhere A(k′) = diag(a1, a2, . . ., aM) is a diagonal matrix. Each diagonal element am ∈ {0, 1} represents the binary information-passing state of xm through a ReLU layer (1 ≤ m ≤ M). Each element is given as am = 1(vm > 0), where v = Σ(k′)^(−1/2) h(k′). Note that ReLU(W(k)(x + xhigher)) ≠ ReLU(W(k)x) + ReLU(W(k)xhigher), but A(k′)(W(k)(x + xhigher)) = A(k′)W(k)x + A(k′)W(k)xhigher. Thus, we record such information-passing states to decompose feature components." }, { "heading": "4 COMPARATIVE STUDIES", "text": "As a generic tool, knowledge consistency based on g can be used for different applications. 
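As a sketch of how g and the loss of Eq. (2) could be implemented, consider the following PyTorch fragment. It is an MLP variant for illustration only: the paper also uses the convolutional form, and replacing the per-element variances Σ(k) with batch statistics, parameterizing p(k) through its logarithm, and assuming x and x∗ share the same dimensionality are all our simplifications:

```python
import torch
import torch.nn as nn

class ConsistencyNet(nn.Module):
    # g_theta with K+1 branches (Eq. 1); dim is the shared feature size.
    def __init__(self, dim, K=2):
        super().__init__()
        self.K = K
        self.W = nn.ModuleList(
            [nn.Linear(dim, dim, bias=False) for _ in range(K + 1)])
        # log_p[k] stores log p^(k+1), so that p stays positive.
        self.log_p = nn.Parameter(torch.zeros(K))

    def forward(self, x):                       # x: (batch, dim)
        h = self.W[self.K](x)                   # h^(K) = W^(K) x
        for k in range(self.K - 1, -1, -1):
            var = h.var(dim=0, unbiased=False) + 1e-5   # stand-in for Sigma
            x_higher = self.log_p[k].exp() * torch.relu(h / var.sqrt())
            h = self.W[k](x + x_higher)         # Eq. (1)
        return h                                # h^(0) = g_theta(x)

def consistency_loss(net, x, x_star, lam=0.1):
    # Eq. (2): reconstruction plus penalty on branch magnitudes p^(k).
    recon = ((net(x) - x_star) ** 2).sum(dim=1).mean()
    return recon + lam * (net.log_p.exp() ** 2).sum()

g = ConsistencyNet(dim=256, K=2)
x, x_star = torch.randn(32, 256), torch.randn(32, 256)
consistency_loss(g, x, x_star).backward()
```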
In order to demonstrate the utility of knowledge consistency, we designed various comparative studies, including (1) diagnosing and debugging pre-trained DNNs, (2) evaluating the instability of learning DNNs, (3) feature refinement of DNNs, (4) analyzing information discarding during the compression of DNNs, and (5) explaining effects of knowledge distillation on knowledge representations.\nA total of five typical DNNs for image classification were used, i.e. the AlexNet (Krizhevsky et al., 2012), the VGG-16 (Simonyan & Zisserman, 2015), and the ResNet-18, ResNet-34, and ResNet-50 (He et al., 2016). These DNNs were learned using three benchmark datasets, which included the CUB200-2011 dataset (Wah et al., 2011), the Stanford Dogs dataset (Khosla et al., 2011), and the Pascal VOC 2012 dataset (Everingham et al., 2015). Note that both training images and testing images were cropped using bounding boxes of objects. We set λ = 0.1 for all experiments, except for feature reconstruction of the AlexNet (we set λ = 8.0 for AlexNet features). This was because the shallow model of the AlexNet usually had significantly noisy features, which caused considerable inconsistent components." }, { "heading": "4.1 NETWORK DIAGNOSIS BASED ON KNOWLEDGE CONSISTENCY", "text": "The most direct application of knowledge consistency is to use a strong (well-learned) DNN to diagnose representation flaws hidden in a weak DNN. This is of special value in real applications; e.g. shallow (usually weak) DNNs are more suitable for deployment on mobile devices than deep DNNs.\nSuppose two DNNs are trained for the same task, and one DNN significantly outperforms the other. We assume that the strong DNN has encoded ideal knowledge representations of the target task. The weak DNN may have the following two types of representation flaws. • Unreliable features in the weak DNN are defined as feature components that cannot be reconstructed from features of the strong DNN (see Appendix B for detailed discussions). • Blind spots of the weak DNN are defined as feature components in the strong DNN that are inconsistent with features of the weak DNN. These feature components usually reflect blind spots of the knowledge of the weak DNN (see Appendix B for detailed discussions).\nFor implementation, we trained DNNs for fine-grained classification using the CUB200-2011 dataset (Wah et al., 2011) (without data augmentation). We considered the AlexNet (Krizhevsky et al., 2012) as the weak DNN (56.97% top-1 accuracy), and took the ResNet-34 (He et al., 2016) as the strong DNN (73.09% top-1 accuracy).\nPlease see Fig. 3. We diagnosed the output feature of the last convolutional layer in the AlexNet, which is termed xA. Accordingly, we selected the last 14 × 14 × 256 feature map of the ResNet-34 (denoted by xB) for the computation of knowledge consistency, because xA and xB had similar map sizes. We disentangled and visualized unreliable components from xA (i.e. inconsistent components in xA). We also visualized components disentangled from xB (i.e. inconsistent components in xB), which corresponded to blind spots of the weak DNN’s knowledge.\nFurthermore, we conducted two experiments to further verify the claim of blind spots and unreliable features in a DNN. The first experiment aimed to verify blind spots. The basic idea of this experiment was to examine the increase in classification accuracy when we added information of blind spots to the raw feature. 
We followed the above experimental settings, which used the intermediate-layer feature $x_A$ of the AlexNet (with 56.97% top-1 accuracy) to reconstruct the intermediate-layer feature $x_B$ of the ResNet-34 (with 73.09% top-1 accuracy). Then, inconsistent feature components were termed blind spots. In this way, we added feature components of blind spots back to the AlexNet's feature (adding these features back to $g_\theta(x_A)$), and then learned a classifier upon the new feature. To enable a fair comparison, the classifier had the same architecture as the AlexNet's modules above $x_A$, and during the learning process, we fixed the parameters in DNN A and $\theta$ to prevent revisions of those parameters from affecting the performance. We found that compared to the raw feature $x_A$ of the AlexNet, the new feature boosted the classification accuracy by 16.1%.\nThe second experiment was conducted to verify unreliable features. The basic idea of this experiment was to examine the increase in classification accuracy when we removed information of unreliable features from the raw feature. We designed two classifiers, which had the same architecture as the AlexNet's modules above $x_A$. We fed the raw feature $x_A$ to the first classifier. Then, we removed unreliable feature components from $x_A$ (i.e. obtaining $g_\theta(x_B)$), and fed the revised feature to the second classifier. We learned these two classifiers, and the classifiers with and without unreliable features exhibited classification accuracies of 60.3% and 65.6%, respectively.\nTherefore, the above two experiments successfully demonstrated that both the insertion of blind spots and the removal of unreliable features boosted the classification accuracy." }, { "heading": "4.2 STABILITY OF LEARNING", "text": "The stability of learning DNNs is of considerable value in deep learning: it examines whether all DNNs represent the same knowledge when people repeatedly learn multiple DNNs for the same task. High knowledge consistency between DNNs usually indicates high learning stability.\nMore specifically, we trained two DNNs (A, B) with the same architecture for the same task. Then, we disentangled inconsistent feature components $x_A^{\Delta} = x_A - g_\theta(x_B)$ and $x_B^{\Delta} = x_B - g_\theta(x_A)$ from their features $x_A$ and $x_B$ of a specific layer, respectively. Accordingly, $g_\theta(x_B)$ and $g_\theta(x_A)$ corresponded to consistent feature components in $x_A$ and $x_B$, respectively, whose knowledge was shared by the two networks. The inconsistent feature $x_A^{\Delta}$ was quantified by the variance of its elements across different units and across different input images, $\mathrm{Var}(x_A^{\Delta}) \stackrel{\text{def}}{=} \mathbb{E}_{I,i}\big[(x_{A,I,i}^{\Delta} - \mathbb{E}_{I',i'}[x_{A,I',i'}^{\Delta}])^2\big]$, where $x_{A,I,i}^{\Delta}$ denotes the $i$-th element of $x_A^{\Delta}$ given the image $I$. We can use $\mathrm{Var}(x_A^{\Delta})/\mathrm{Var}(x_A)$ to measure the instability of learning DNNs A and B.\nIn experiments, we evaluated the instability of learning the AlexNet (Krizhevsky et al., 2012), the VGG-16 (Simonyan & Zisserman, 2015), and the ResNet-34 (He et al., 2016). We considered the following cases.\nCase 1, learning DNNs from different initializations using the same training data: For each network architecture, we trained multiple networks using the CUB200-2011 dataset (Wah et al., 2011). The instability of learning DNNs was reported as the average instability over all pairs of neural networks.\nCase 2, learning DNNs using different sets of training data: We randomly divided all training samples in the CUB200-2011 dataset (Wah et al., 2011) into two subsets, each containing 50% of the samples.
For each network architecture, we trained two DNNs (A, B) for fine-grained classification (without pre-training). The instability of learning DNNs was reported as $[\mathrm{Var}(x_A^{\Delta})/\mathrm{Var}(x_A) + \mathrm{Var}(x_B^{\Delta})/\mathrm{Var}(x_B)]/2$.\nTable 1 compares the instability of learning different DNNs. Table 2 reports the variance of consistent components of different orders. We found that the learning of shallow layers in DNNs was usually more stable than the learning of deep layers. The reason may be as follows. A DNN with more layers usually can represent more complex visual patterns, thereby needing more training samples. Without a huge training set (e.g. the ImageNet dataset (Krizhevsky et al., 2012)), a deep network may be more likely to suffer from the over-fitting problem, which causes the high variances in Table 1, i.e. DNNs with different initial parameters may learn different knowledge representations." }, { "heading": "4.3 FEATURE REFINEMENT BASED ON KNOWLEDGE CONSISTENCY", "text": "Knowledge consistency can also be used to refine intermediate-layer features of pre-trained DNNs. Given multiple DNNs pre-trained for the same task, feature components that are consistent across various DNNs usually represent common knowledge and are reliable. In contrast, feature components that are inconsistent with other DNNs usually correspond to unreliable knowledge or noise. In this way, intermediate-layer features can be refined by removing inconsistent components and exclusively using consistent components to accomplish the task.\nMore specifically, given two pre-trained DNNs, we use the feature of a certain layer in the first DNN to reconstruct the corresponding feature of the second DNN. The reconstructed feature is given as $\hat{x}$. In this way, we can replace the feature of the second DNN with the reconstructed feature $\hat{x}$, and then use $\hat{x}$ to simultaneously learn all subsequent layers in the second DNN to boost performance. Note that for clear and rigorous comparisons, we only disentangled consistent feature components from the feature of a single layer and refined that feature. This was because simultaneously refining the features of multiple layers would make it difficult to clarify which layer's refinement made the major contribution.\nIn experiments, we trained DNNs with various architectures for image classification, including the VGG-16 (Simonyan & Zisserman, 2015), the ResNet-18, the ResNet-34, and the ResNet-50 (He et al., 2016). We conducted the following two experiments, in which we used knowledge consistency to refine DNN features.\nExp. 1, removing unreliable features (noise): For each specific network architecture, we trained two DNNs using the CUB200-2011 dataset (Wah et al., 2011) with different parameter initializations. Consistent components were disentangled from the original feature of a DNN and then used for image classification. As discussed in Section 4.1, consistent components can be considered as refined features without noise. We used the refined features as input to finetune the pre-trained upper layers in DNN B for classification. Table 3 reports the increase in classification accuracy obtained by using the refined feature. The refined features slightly boosted the performance.\nFairness of comparisons: (1) To enable fair comparisons, we first trained $g_\theta$ and then kept $g_\theta$ unchanged during the finetuning of classifiers, thereby fixing the refined features. Otherwise, allowing $g_\theta$ to change would be equivalent to adding more layers/parameters to DNN B for classification.
We needed to eliminate such effects for fair comparisons, which prevented the network from benefiting from additional layers/parameters. (2) As baselines, we also further finetuned the corresponding upper layers of DNNs A and B to evaluate their performance. Please see Appendix A for more discussions about how we ensured fairness.\nExp. 2, removing redundant features from pre-trained DNNs: A typical deep-learning methodology is to finetune a DNN for a specific task, where the DNN is pre-trained for multiple tasks (including both the target and other tasks). This is quite common in deep learning, e.g. DNNs pre-trained using the ImageNet dataset (Deng et al., 2009) are usually finetuned for various tasks. However, in this case, feature components pre-trained for other tasks are redundant for the target task and can interfere with the further finetuning process.\nTherefore, we conducted three new experiments, in which our method removed features redundant w.r.t. the target task from the pre-trained DNN. In Experiment 2.1 (namely VOC-animal), two DNNs A and B were learned to classify 20 object classes in the Pascal VOC 2012 dataset (Everingham et al., 2015). We were also given object images of six animal categories (bird, cat, cow, dog, horse, sheep). We fixed the parameters in DNNs A and B, and our goal was to use the fixed DNNs A and B to generate clean features for animal categories without redundant features.\nLet $x_A$ and $x_B$ denote intermediate-layer features of DNNs A and B. Our method used $x_A$ to reconstruct $x_B$. Then, the reconstructed result $g_\theta(x_A)$ corresponded to reliable features for animals, while the inconsistent components $x^{\Delta}$ indicated features of other categories. We used the clean features $g_\theta(x_A)$ to learn the animal classifier. $g_\theta$ was fixed during the learning of the animal classifier to enable a fair comparison. In comparison, the baseline method directly used either $x_A$ or $x_B$ to finetune the pre-trained DNN to classify the six animal categories.\nIn Experiment 2.2 (termed Mix-CUB), two original DNNs were learned using both the CUB200-2011 dataset (Wah et al., 2011) and the Stanford Dogs dataset (Khosla et al., 2011) to classify both 200 bird species and 120 dog species. Then, our method used bird images to disentangle feature components for birds and then learned a new fine-grained classifier for birds. The baseline method was implemented following the same setting as in VOC-animal. Experiment 2.3 (namely Mix-Dogs) was similar to Mix-CUB. Our method disentangled dog features away from bird features to learn a new dog classifier. In all the above experiments, the original DNNs were learned from scratch without data augmentation. Table 4 compares the classification accuracy of different methods. It shows that our method significantly alleviated the over-fitting problem and outperformed the baseline." }, { "heading": "4.4 ANALYZING INFORMATION DISCARDING OF NETWORK COMPRESSION", "text": "Network compression has been an emerging research direction in recent years. Knowledge consistency between the compressed network and the original network can evaluate the discarding of knowledge during the compression process. I.e. people may visualize or analyze feature components in the original network that are not consistent with features in the compressed network, to represent the knowledge discarded by the compressed network.\nIn experiments, we trained the VGG-16 using the CUB200-2011 dataset (Wah et al., 2011) for fine-grained classification.
Then, we compressed the VGG-16 using the method of (Han et al., 2016) with different pruning thresholds. We used features of the compressed DNN to reconstruct features of the original DNN. Then, the inconsistent components disentangled from the original DNN usually corresponded to the knowledge discarded during the compression process. Fig. 4(left) visualizes the discarded feature components. We used $\mathrm{Var}(x^{\Delta})$ (defined in Section 4.2) to quantify the information discarding. Fig. 4 compares the decrease of accuracy with the discarding of feature information." }, { "heading": "4.5 EXPLAINING KNOWLEDGE DISTILLATION VIA KNOWLEDGE CONSISTENCY", "text": "As a generic tool, our method can also explain the success of knowledge distillation. In particular, (Furlanello et al., 2018) proposed a method to gradually refine a neural network via recursive knowledge distillation. I.e. this method recursively distills the knowledge of the current net into a new net with the same architecture, and then distills the new net into an even newer net. The new(er) net is termed a born-again neural network and is learned using both the task loss and the distillation loss. Surprisingly, such a recursive distillation process can substantially boost the performance of the neural network in various experiments.\nIn general, the net in a new generation both inherits knowledge from the old net and learns new knowledge from the data. The success of the born-again neural network can be explained by the fact that knowledge representations of networks are gradually enriched during the recursive distillation process. To verify this assertion, in experiments, we trained the VGG-16 using the CUB200-2011 dataset (Wah et al., 2011) for fine-grained classification. We trained born-again neural networks of another four generations³. We disentangled feature components in the newest DNN that were not consistent with an intermediate DNN. Inconsistent components were considered as blind spots of the knowledge representations of the intermediate DNN and were quantified by $\mathrm{Var}(x^{\Delta})$. Fig. 4(right) shows $\mathrm{Var}(x^{\Delta})$ of DNNs in the 1st, 2nd, 3rd, and 4th generations. Inconsistent components were gradually reduced after several generations." }, { "heading": "4.6 CONSISTENT AND INCONSISTENT FEATURES BETWEEN DIFFERENT TASKS", "text": "In order to visualize consistent and inconsistent features between different tasks (i.e. fine-grained classification and binary classification), we trained DNN A (a VGG-16 network) to simultaneously classify 320 species, including 200 bird species in the CUB200-2011 dataset (Wah et al., 2011) and 120 dog species in the Stanford Dogs dataset (Khosla et al., 2011), in a fine-grained manner. On the other hand, we learned DNN B (another VGG-16 network) for the binary classification of bird vs. dog based on these two datasets. We visualized the feature maps of consistent and inconsistent feature components between the two DNNs in Fig. 5. Obviously, DNN A encoded more knowledge than DNN B, and hence $x_A$ reconstructed more features than $x_B$." }, { "heading": "5 CONCLUSION", "text": "In this paper, we have proposed a generic definition of knowledge consistency between intermediate layers of two DNNs. A task-agnostic method is developed to disentangle and quantify consistent features of different orders from intermediate-layer features.
Consistent feature components are usually more reliable than inconsistent components for the task, so our method can be used to further refine the pre-trained DNN without a need for additional supervision. As a mathematical tool, knowledge consistency can also help explain existing deep-learning techniques, and experiments have demonstrated the effectiveness of our method." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This work was partially supported by the National Natural Science Foundation of China (U19B2043 and 61906120) and Huawei Technologies.\n³Because (Furlanello et al., 2018) did not clarify the distillation loss, we applied the distillation loss in (Hinton et al., 2014) following the parameter setting $\tau = 1.0$ in (Mishra & Marr, 2018), i.e. $\mathrm{Loss} = \mathrm{Loss}_{\mathrm{classify}} + 0.5\,\mathrm{Loss}_{\mathrm{distill}}$." }, { "heading": "A FAIRNESS OF EXPERIMENTS IN SECTION 4.3", "text": "In Section 4.3, we used knowledge consistency to refine intermediate-layer features of pre-trained DNNs. Given multiple DNNs pre-trained for the same task, feature components that are consistent across various DNNs usually represent common knowledge and are reliable. In this way, intermediate-layer features can be refined by removing inconsistent components and exclusively using consistent components to accomplish the task.\nBoth Experiment 1 and Experiment 2 followed the same procedure. I.e. we trained two DNNs A and B for the same task. Then, we extracted consistent feature components $g(x_A)$ when we used DNN A's feature to reconstruct DNN B's feature. We compared the classification performance of DNN A, DNN B, and the classifier learned based on the consistent features $g(x_A)$.\nHowever, an issue of fairness may be raised, i.e. when we added the network $g$ upon DNN A, this operation increased the depth of the network. Thus, the comparison between the revised DNN and the original DNN A may be unfair.\nIn order to ensure a fair comparison, we applied the following experimental settings. • For the evaluation of DNNs A and B, both DNNs were further refined using target training samples in Experiments 1 and 2. • For the evaluation of our method, without using any annotations of the target task, we first trained $g(x_A)$ to use $x_A$ to reconstruct $x_B$ and disentangle consistent feature components. Then, we fixed all parameters of DNNs A, B, and $g$, and only used the output features of $g(x_A)$ to train a classifier, in order to test the discrimination power of the output features of $g(x_A)$.\n[Figure: the reconstruction pipeline: the feature $x_A$ of DNN A passes through the network $g$ (Conv/Norm/ReLU blocks with additive connections) to produce $g(x_A)$, trained with a reconstruction loss against the feature $x_B$ of DNN B; classifiers are then learned on top of the features.]\nAs shown in the above figure, for the evaluation of our method, first, we were not given any training annotations for the target task. Without any prior knowledge of the target task, we used the pre-trained DNNs (blue area in the above figure) to train the network $g$ (red area in the above figure). Then, we fixed the parameters of the DNNs in both the red area and the blue area in the above figure. Finally, we received training samples of the target task, and used them to exclusively train the classifier shown in the above figure.\nIn short, the learning of the network $g$ was independent of the target task, and the parameters of A, B, and $g$ were not fine-tuned during the learning of the new classifier.\nTherefore, the comparisons in Tables 3 and 4 fairly reflect the discrimination power of the raw features of DNNs A and B and the consistent features produced by $g(x_A)$."
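The fair-comparison protocol just described can be summarized in code. Below is a minimal sketch of ours, under assumptions: it reuses `consistency_loss` and a `BranchDisentangler`-style `g` from the earlier sketch, assumes features of DNNs A and B are pre-extracted into data loaders, and assumes `clf` outputs logits. Optimizer choices and learning rates are placeholders, not the paper's settings.

```python
# Sketch of Appendix A's protocol: (i) train g on reconstruction without
# task labels, (ii) freeze A, B, and g, (iii) train only a classifier on
# the consistent components g(x_A).
import torch

def train_g(g, loader_AB, epochs=10, lam=0.1):
    opt = torch.optim.SGD(g.parameters(), lr=0.01)
    for _ in range(epochs):
        for x_a, x_b in loader_AB:      # pre-extracted features of A and B
            loss = consistency_loss(g, x_a, x_b, lam)   # Eq. (2)
            opt.zero_grad(); loss.backward(); opt.step()

def train_classifier(g, clf, loader, epochs=10):
    for p in g.parameters():            # g stays fixed, so the classifier
        p.requires_grad_(False)         # gains no extra capacity from g
    opt = torch.optim.SGD(clf.parameters(), lr=0.01)
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x_a, y in loader:
            with torch.no_grad():
                feat = g(x_a)           # consistent components g(x_A)
            loss = ce(clf(feat), y)
            opt.zero_grad(); loss.backward(); opt.step()
```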
}, { "heading": "B BLIND SPOTS AND UNRELIABLE FEATURES", "text": "By our assumption, strong DNNs can encode true knowledge, while there are more representation flaws in weak DNNs. If we try to use the intermediate features of a weak DNN to reconstruct the intermediate features of a strong DNN, the strong DNN's features cannot be perfectly reconstructed due to the inherent blind spots of the weak DNN. On the other hand, when we use features of a strong DNN to reconstruct a weak DNN's features, there also exist feature components that cannot be reconstructed. These unreconstructed parts are not modeled by the strong DNN, and we term them unreliable features." } ]
2,020
null
SP:f44604decdd75946cc41dbdc8f25039e141276fe
[ "The paper proposes extensions of Generative Adversarial Networks to modeling multiple text corpora. Concretely, the paper looks at two problems: 1) given independently pretrained word embeddings from K corpora, finding a common word embedding, 2) extracting document representations from a discriminator of a GAN trained to generate tf-idf vectors. Preliminary experiments show that the proposed approaches outperform baseline classifiers trained with word2vec.", "The paper proposes to use Generative Adversarial Networks (GANs) in the context of natural language processing and introduces two models for generating document embeddings. The first model, weGAN, aggregates multiple sets of single-corpus word2vec embeddings into one set of cross-corpus word representations; document embeddings are a function of these updated word embeddings. The second model, deGAN, side-steps word-level embeddings and directly generates document-level representations. For both models, the real examples come from word2vec and tf-idf, while the artificial examples are the ones generated by the network. The authors show that their document embeddings are better than the word2vec/tf-idf baseline at clustering documents according to the corpus they originate from." ]
Generative adversarial nets (GANs) have been successfully applied to the artificial generation of image data. In terms of text data, much has been done on the artificial generation of natural language from a single corpus. We consider multiple text corpora as the input data, for which there can be two applications of GANs: (1) the creation of consistent cross-corpus word embeddings given different word embeddings per corpus; (2) the generation of robust bag-of-words document embeddings for each corpus. We demonstrate our GAN models on real-world text data sets from different corpora, and show that embeddings from both models lead to improvements in supervised learning problems.
[]
[ { "authors": [ "David M. Blei", "Andrew Y. Ng", "Michael I. Jordan" ], "title": "Latent dirichlet allocation", "venue": "Journal of Machine Learning Research,", "year": 2003 }, { "authors": [ "Ronan Collobert", "Jason Weston", "Léon Bottou", "Michael Karlen", "Koray Kavukcuoglu", "Pavel P. Kuksa" ], "title": "Natural language processing (almost) from scratch", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "John Glover" ], "title": "Modeling documents with generative adversarial networks", "venue": "In Workshop on Adversarial Training (NIPS", "year": 2016 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Yoon Kim" ], "title": "Convolutional neural networks for sentence classification", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "year": 2014 }, { "authors": [ "Quoc V. Le", "Tomas Mikolov" ], "title": "Distributed representations of sentences and documents", "venue": "In Proceedings of the 31st International Conference on Machine Learning", "year": 2014 }, { "authors": [ "Jiwei Li", "Will Monroe", "Tianlin Shi", "Alan Ritter", "Dan Jurafsky" ], "title": "Adversarial learning for neural dialogue generation", "venue": "Technical report,", "year": 2017 }, { "authors": [ "Ming-Yu Liu", "Oncel Tuzel" ], "title": "Coupled generative adversarial networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Efficient estimation of word representations in vector space", "venue": "In Workshop (ICLR", "year": 2013 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "Technical report,", "year": 2014 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D. Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "William M. Rand" ], "title": "Objective criteria for the evaluation of clustering methods", "venue": "Journal of the American Statistical Association,", "year": 1971 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D. Manning", "Andrew Y. 
Ng", "Christopher Potts" ], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "year": 2013 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Yizhe Zhang", "Zhe Gan", "Lawrence Carin" ], "title": "Generating text via adversarial training", "venue": "In Workshop on Adversarial Training (NIPS", "year": 2016 }, { "authors": [ "Junbo Jake Zhao", "Michaël Mathieu", "Yann LeCun" ], "title": "Energy-based generative adversarial networks", "venue": "In 5th International Conference on Learning embeddings (ICLR 2017),", "year": 2017 } ]
[ { "heading": "1 Introduction", "text": "Generative adversarial nets (GAN) (Goodfellow et al., 2014) belong to a class of generative models which are trainable and can generate artificial data examples similar to existing ones. In a GAN model, there are two sub-models simultaneously trained: a generative model $G$ from which artificial data examples can be sampled, and a discriminative model $D$ which classifies real data examples and artificial ones from $G$. By training $G$ to maximize its generation power, and training $D$ to minimize the generation power of $G$, so that ideally there will be no difference between the true and artificial examples, a minimax problem can be established. The GAN model has been shown to closely replicate a number of image data sets, such as MNIST, the Toronto Face Database (TFD), CIFAR-10, SVHN, and ImageNet (Goodfellow et al., 2014; Salimans et al., 2016).\nThe GAN model has been extended to text data in a number of ways. For instance, Zhang et al. (2016) applied a long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) generator and approximated discretization to generate text data. Moreover, Li et al. (2017) applied the GAN model to generate dialogues, i.e. pairs of questions and answers. Meanwhile, the GAN model can also be applied to generate bag-of-words embeddings of text data, which focus more on key terms in a text document rather than on the original document itself. Glover (2016) provided such a model with the energy-based GAN (Zhao et al., 2017).\nTo the best of our knowledge, there has been no literature on applying the GAN model to multiple corpora of text data. Multi-class GANs have been proposed (Liu & Tuzel, 2016; Mirza & Osindero, 2014), but a class in multi-class classification is not the same as a corpus among multiple corpora. Because knowing the underlying corpus membership of each text document can provide better information on how the text documents are organized, and documents from the same corpus are expected to share similar topics or key words, considering the membership information can benefit the training of a text model from a supervised perspective. We consider two problems associated with training on multi-corpus text data: (1) Given a separate set of word embeddings per corpus (Mikolov et al., 2013b), how can we obtain a better set of cross-corpus word embeddings from them? (2) How can we incorporate the generation of document embeddings from different corpora into a single GAN model?\nFor the first problem, we train a GAN model which discriminates documents represented by different word embeddings, and train the cross-corpus word embedding so that it is similar to each existing word embedding per corpus. For the second problem, we train a GAN model which considers both cross-corpus and per-corpus “topics” in the generator, and applies a discriminator which considers each original and artificial document corpus. We also show that with sufficient training, the distribution of the artificial document embeddings is equivalent to that of the original ones. Our work has the following contributions: (1) we extend GANs to multiple corpora of text data, (2) we provide applications of GANs to finetune word embeddings and to create robust document embeddings, and (3) we establish theoretical convergence results of the multi-class GAN model.\nSection 2 reviews existing GAN models related to this paper. Section 3 describes the GAN models on training cross-corpus word embeddings and generating document embeddings for each corpus, and explains the associated algorithms.
Section 4 presents the results of the two models on text data sets, and transfers them to supervised learning. Section 5 summarizes the results and concludes the paper." }, { "heading": "2 Literature Review", "text": "In a GAN model, we assume that the data examples $x$ are drawn from a distribution $p_x(\cdot)$, and the artificial data examples $G(z) := G(z, \theta_g)$ are transformed from the noise distribution $z \sim p_z(\cdot)$. The binary classifier $D(\cdot)$ outputs the probability of a data example (or an artificial one) being an original one. Because the probabilistic structure of a GAN can be unstable to train, the Wasserstein GAN (Arjovsky et al., 2017) was proposed, which applies a 1-Lipschitz function as the discriminator.\nWe note that in many circumstances, data sets are obtained with supervised labels or categories, which can add explanatory power to unsupervised models such as the GAN. For instance, the CoGAN (Liu & Tuzel, 2016) considers pairs of data examples from different categories, and the weights of the first few layers (i.e. close to $z$) are tied. Mirza & Osindero (2014) proposed the conditional GAN, where the generator $G$ and the discriminator $D$ depend on the class label $y$. Salimans et al. (2016) applied the class labels for semi-supervised learning with an additional artificial class. However, all these models consider only images and do not produce word or document embeddings, and are therefore different from our models.\nFor generating real text, Zhang et al. (2016) proposed textGAN, in which the generator has an LSTM form, and a uni-dimensional convolutional neural network (Collobert et al., 2011; Kim, 2014) is applied as the discriminator. Also, a weighted softmax function is applied to make the argmax function differentiable. The focus of our work is to summarize information from longer documents, so we apply document embeddings such as tf-idf to represent the documents rather than generate real text.\nFor generating bag-of-words embeddings of text, Glover (2016) proposed a GAN model with the mean squared error of a de-noising autoencoder as the discriminator, where the output $x$ is the one-hot word embedding of a document. Our models are different from this model because we consider tf-idf document embeddings for multiple text corpora in the deGAN model (Section 3.2), and weGAN (Section 3.1) can be applied to produce word embeddings. Also, we focus on robustness based on several corpora, while Glover (2016) assumed a single corpus.\nFor extracting word embeddings from text data, Mikolov et al. (2013a) proposed the word2vec model, for which there are two variations: the continuous bag-of-words (cBoW) model (Mikolov et al., 2013a), where the neighboring words are used to predict the appearance of each word; and the skip-gram model, where each neighboring word is used individually for prediction. In GloVe (Pennington et al., 2014), a bilinear regression model is trained on the log of the word co-occurrence matrix. In these models, the weights associated with each word are used as the embedding. For obtaining document embeddings, the para2vec model (Le & Mikolov, 2014) adds per-paragraph vectors to train word2vec-type models, so that the vectors can be used as embeddings for each paragraph. A simpler approach, exhibited in Socher et al. (2013), takes the average of the embeddings of the words in a document as the document embedding." }, { "heading": "3 Models and Algorithms", "text": "Suppose we have a number of different corpora
$C_1, \ldots, C_M$, which for example can be based on different categories or sentiments of text documents. We suppose that $C_m = \{d_1^m, \ldots, d_{n_m}^m\}$, $m = 1, \ldots, M$, where each $d_i^m$ represents a document. The words in all corpora are collected in a dictionary, and indexed from 1 to $V$. We name the GAN model to train cross-corpus word embeddings as “weGAN,” where “we” stands for “word embeddings,” and the GAN model to generate document embeddings for multiple corpora as “deGAN,” where “de” stands for “document embeddings.”" }, { "heading": "3.1 weGAN: Training cross-corpus word embeddings", "text": "We assume that for each corpus $C_m$, we are given word embeddings for each word, $v_1^m, \ldots, v_V^m \in \mathbb{R}^d$, where $d$ is the dimension of each word embedding. We are also given a classification task on documents that is represented by a parametric model $C$ taking document embeddings as feature vectors. We construct a GAN model which combines different sets of word embeddings $\mathcal{V}_m := \{v_i^m\}_{i=1}^V$, $m = 1, \ldots, M$, into a single set of word embeddings $\mathcal{G} := \{v_i^0\}_{i=1}^V$. Note that $\mathcal{V}_1, \ldots, \mathcal{V}_M$ are given but $\mathcal{G}$ is trained. Here we consider $\mathcal{G}$ as the generator, and the goal of the discriminator is to distinguish documents represented by the original embeddings $\mathcal{V}_1, \ldots, \mathcal{V}_M$ from the same documents represented by the new embeddings $\mathcal{G}$. Next we describe how the documents are represented by a set of embeddings $\mathcal{V}_1, \ldots, \mathcal{V}_M$ and $\mathcal{G}$. For each document $d_i^m$, we define its document embedding with $\mathcal{V}_m$ as $g_i^m := f(d_i^m, \mathcal{V}_m)$, where $f(\cdot)$ can be any mapping. Similarly, we define the document embedding of $d_i^m$ with $\mathcal{G}$ as $f_\mathcal{G}(d_i^m) := f(d_i^m, \mathcal{G})$, with $\mathcal{G} = \{v_j^0\}_{j=1}^V$ trainable. In a typical example, word embeddings would be based on word2vec or GloVe. The function $f$ can be based on tf-idf, i.e. $f(d_i^m, \mathcal{V}) = \sum_{j=1}^V t_{ij}^m v_j^m$, where $v_j^m$ is the word embedding of the $j$-th word in the $m$-th corpus $C_m$ and $t_i^m = (t_{i1}^m, \ldots, t_{iV}^m)$ is the tf-idf representation of the $i$-th document $d_i^m$ in the $m$-th corpus $C_m$. To train the GAN model, we consider the following minimax problem\n$\min_{C, \mathcal{G}} \max_{D} \left\{ \sum_{m=1}^{M} \sum_{i=1}^{n_m} \left[ \log(D(g_i^m)) + \log(1 - D(f_\mathcal{G}(d_i^m))) - \log(e_{k_i^m}^T C(f_\mathcal{G}(d_i^m))) \right] \right\} \quad (1)$\nwhere $D$ is a discriminator of whether a document is original or artificial. Here $k_i^m$ is the label of document $d_i^m$ with respect to the classifier $C$, and $e_{k_i^m}$ is a unit vector with only the $k_i^m$-th component being one and all other components being zero. Note that $-\log(e_{k_i^m}^T C(f_\mathcal{G}(d_i^m)))$ is equivalent to $\mathrm{KL}(e_{k_i^m} \| C(f_\mathcal{G}(d_i^m)))$, but we use the former notation due to its brevity.\nThe intuition of problem (1) is explained as follows. First we consider a discriminator $D$ which is a feedforward neural network (FFNN) with binary outcomes, and which classifies the document embeddings $\{f_\mathcal{G}(d_i^m)\}_{i=1..n_m,\, m=1..M}$ against the original document embeddings $\{g_i^m\}_{i=1..n_m,\, m=1..M}$. The discriminator $D$ minimizes this classification error, i.e. it maximizes the log-likelihood of $\{f_\mathcal{G}(d_i^m)\}$ having label 0 and $\{g_i^m\}$ having label 1. This corresponds to\n$\sum_{m=1}^{M} \sum_{i=1}^{n_m} \left[ \log(D(g_i^m)) + \log(1 - D(f_\mathcal{G}(d_i^m))) \right] \quad (2)$\nFor the generator $\mathcal{G}$, we wish to minimize (1) against $\mathcal{G}$ so that we can apply the minimax strategy, and the combined word embeddings $\mathcal{G}$ would resemble each set of word embeddings $\mathcal{V}_1, \ldots, \mathcal{V}_M$. Meanwhile, we also consider a classifier $C$ with $K$ outcomes, which associates $d_i^m$ with label $k_i^m$, so that the generator $\mathcal{G}$ can learn from the document labeling in a semi-supervised way.
If the classifier $C$ outputs a $K$-dimensional softmax probability vector, we minimize the following against $\mathcal{G}$, which corresponds to (1) given $D$ and $C$:\n$\sum_{m=1}^{M} \sum_{i=1}^{n_m} \left[ \log(1 - D(f_\mathcal{G}(d_i^m))) - \log(e_{k_i^m}^T C(f_\mathcal{G}(d_i^m))) \right] \quad (3)$\nFor the classifier $C$, we also minimize its negative log-likelihood\n$-\sum_{m=1}^{M} \sum_{i=1}^{n_m} \log(e_{k_i^m}^T C(f_\mathcal{G}(d_i^m))) \quad (4)$\nAssembling (2)-(4) together, we retrieve the original minimax problem (1).\nWe train the discriminator and the classifier, $\{D, C\}$, and the combined embeddings $\mathcal{G}$ according to (2)-(4) iteratively for a fixed number of epochs with the stochastic gradient descent algorithm, until the discrimination and classification errors become stable.\nFigure 1 illustrates the weGAN model. The algorithm for weGAN is summarized in Algorithm 1 in the appendix.\n[Figure 1: Model structure of weGAN: for each corpus $C_m$, documents $d_1^m, \ldots, d_{n_m}^m$ are embedded with the per-corpus word embeddings $\mathcal{V}_m$ into document embeddings $g_1^m, \ldots, g_{n_m}^m$; the discriminator $D$ judges whether a document embedding comes from $\{\mathcal{V}_1, \ldots, \mathcal{V}_M\}$ or from $\mathcal{G}$, and the classifier $C$ outputs a label distribution from $\{f_\mathcal{G}(d_i^m)\}$.]\n[Figure 2: Model structure of deGAN: a noise vector $n$ is mapped through shared weights $(W_h^0, W_o^0)$ and per-corpus weights $(W_h^m, W_o^m)$ of the generators $\mathcal{G}_1, \ldots, \mathcal{G}_M$ to generated document embeddings $\mathcal{G}_1(n), \ldots, \mathcal{G}_M(n)$; the discriminator $D$ decides whether an embedding comes from $C_1, \ldots, C_M$ or from $\{\mathcal{G}_1, \ldots, \mathcal{G}_M\}$.]" }, { "heading": "3.2 deGAN: Generating document embeddings for multi-corpus text data", "text": "In this section, our goal is to generate document embeddings which would resemble real document embeddings in each corpus $C_m$, $m = 1, \ldots, M$. We construct $M$ generators, $\mathcal{G}_1, \ldots, \mathcal{G}_M$, so that $\mathcal{G}_m$ generates artificial examples in corpus $C_m$. As in Section 3.1, there is a certain document embedding such as tf-idf, bag-of-words, or para2vec. Let $\mathcal{G} = \{\mathcal{G}_1, \ldots, \mathcal{G}_M\}$. We initialize a noise vector $n = (n_1, \ldots, n_{d_n}) \in \mathbb{R}^{d_n}$, where $n_1, \ldots, n_{d_n} \stackrel{iid}{\sim} \mathcal{N}$, and $\mathcal{N}$ is any noise distribution. For a generator $\mathcal{G}_m = \{W_h^m, W_h^0, W_o^m, W_o^0\}$ represented by its parameters, we first map the noise vector $n$ to the hidden layer, which represents different topics. We consider two hidden vectors, $h^0$ for general topics and $h^m$ for specific topics per corpus: $h^m = a_1(W_h^m n)$, $h^0 = a_1(W_h^0 n)$. Here $a_1(\cdot)$ represents a nonlinear activation function. In this model, the bias term can be ignored in order to prevent the “mode collapse” problem of the generator. Having the hidden vectors, we then map them to the generated document embedding with another activation function $a_2(\cdot)$,\n$o^m = a_2(W_o^m h^m + W_o^0 h^0) \quad (5)$\nTo summarize, we may represent the process from noise to document embedding as $\mathcal{G}_m(n) = a_2(W_o^m a_1(W_h^m n) + W_o^0 a_1(W_h^0 n))$. Given the generated document embeddings $\mathcal{G}_1(n), \ldots, \mathcal{G}_M(n)$, we consider the following minimax problem to train the generator $\mathcal{G}$ and the discriminator $D$:\n$\min_{\mathcal{G}} \sum_{m=1}^{M} \mathbb{E}_n \log \left\{ e_{M+m}^T D_\mathcal{G}^*(\mathcal{G}_m(n)) \big/ \left[ e_{M+m}^T D_\mathcal{G}^*(\mathcal{G}_m(n)) + e_m^T D_\mathcal{G}^*(\mathcal{G}_m(n)) \right] \right\} \quad (6)$\nwhere\n$D_\mathcal{G}^* \in \arg\max_{D} \sum_{m=1}^{M} \mathbb{E}_{d^m \sim p_m} \left[ \log(e_m^T D(d^m)) \right] + \sum_{m=1}^{M} \mathbb{E}_n \left[ \log(e_{M+m}^T D(\mathcal{G}_m(n))) \right] \quad (7)$\nHere we assume that any document embedding $d^m$ in corpus $C_m$ is a sample with respect to the probability density $p_m$. Note that when $M = 1$, the discriminator part of our model is equivalent to the original GAN model.\nTo explain (6), first we consider the discriminator $D$. Because there are multiple corpora of text documents, here we consider $2M$ categories as the output of $D$, of which categories $1, \ldots, M$ represent the original corpora
$C_1, \ldots, C_M$, and categories $M + 1, \ldots, 2M$ represent the generated document embeddings (e.g. bag-of-words) from $\mathcal{G}_1, \ldots, \mathcal{G}_M$. Assume the discriminator $D$, a feedforward neural network, outputs the distribution of a text document being in each category. We maximize the log-likelihood of each document being in the correct category against $D$\n$\sum_{m=1}^{M} \mathbb{E}_{p_m} \left[ \log(e_m^T D(d^m)) \right] + \sum_{m=1}^{M} \mathbb{E}_n \left[ \log(e_{M+m}^T D(\mathcal{G}_m(n))) \right] \quad (8)$\nSuch a classifier not only classifies text documents into different categories, but also considers $M$ “fake” categories from the generators. When training the generators $\mathcal{G}_1, \ldots, \mathcal{G}_M$, we minimize the following, which makes a comparison between the $m$-th and $(M+m)$-th categories\n$\sum_{m=1}^{M} \mathbb{E}_n \log \frac{e_{M+m}^T D(\mathcal{G}_m(n))}{e_{M+m}^T D(\mathcal{G}_m(n)) + e_m^T D(\mathcal{G}_m(n))} \quad (9)$\nThe intuition of (9) is that for each generated document embedding $\mathcal{G}_m(n)$, we need to decrease $e_{M+m}^T D(\mathcal{G}_m(n))$, which is the probability of the generated embedding being correctly classified, and increase $e_m^T D(\mathcal{G}_m(n))$, which is the probability of the generated embedding being classified into the target corpus $C_m$. The ratio in (9) reflects these two properties. We iteratively train (8) and (9) until the classification error of $D$ becomes stable. Figure 2 illustrates the deGAN model. The algorithm for deGAN is summarized in Algorithm 2 in the appendix.\nWe next show that from (6), the distributions of the document embeddings from the optimal $\mathcal{G}_1, \ldots, \mathcal{G}_M$ are equal to the data distributions of $C_1, \ldots, C_M$, which is a generalization of Goodfellow et al. (2014) to the multi-corpus scenario. The proof of Proposition 1 is in the appendix.\nProposition 1. Let us assume that the random variables $d^1, \ldots, d^M$ are continuous with probability densities $p_1, \ldots, p_M$ which have bounded support $\mathcal{X}$; that $n$ is a continuous random variable with bounded support and the activations $a_1$ and $a_2$ are continuous; and that $\mathcal{G}_1^*, \ldots, \mathcal{G}_M^*$ are solutions to (6). Then $q_1^*, \ldots, q_M^*$, the probability densities of the document embeddings from $\mathcal{G}_1^*, \ldots, \mathcal{G}_M^*$, are equal to $p_1, \ldots, p_M$." }, { "heading": "4 Experiments", "text": "In the experiments, we consider four data sets, two of them newly created and the remaining two already public: CNN, TIME, 20 Newsgroups (in the appendix), and Reuters-21578 (in the appendix). The code and the two new data sets are available at github.com/deeplearning2018/emgan. For the pre-processing of all the documents, we transformed all characters to lower case, stemmed the documents, and ran the word2vec model on each corpus to obtain word embeddings with a size of 300. In all subsequent models, we only consider the most frequent 5,000 words across all corpora in a data set.\nThe document embedding in weGAN is the tf-idf weighted word embedding transformed by the tanh activation, i.e. $f(d_i^m, \mathcal{V}_m) = \tanh\left(\sum_{j=1}^V t_{ij}^m v_j^m\right)$. For deGAN, we use the L1-normalized tf-idf as the document embedding because it is easier to interpret than the tanh-transformed embedding above.\nFor weGAN, the cross-corpus word embeddings are initialized with the word2vec model trained from all documents. For training our models, we apply a learning rate which increases linearly from 0.01 to 1.0 and train the models for 100 epochs with a batch size of 50 per corpus. The classifier $C$ has a single hidden layer with 50 hidden nodes, and the discriminator $D$ has a single hidden layer with 10 hidden nodes. All these parameters have been optimized.
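As a concrete reference for the sizes just mentioned, the following is a minimal sketch of ours, not the released implementation, of the classifier $C$ and discriminator $D$ in PyTorch; the embedding dimension and class count are placeholders.

```python
# Sketch (our assumption) of the small weGAN networks described above:
# classifier C with a 50-node hidden layer and discriminator D with a
# 10-node hidden layer.
import torch.nn as nn

d_embed, n_classes = 300, 3     # placeholders: 300-d embeddings, 3 corpora

classifier_C = nn.Sequential(   # outputs a K-dimensional label distribution
    nn.Linear(d_embed, 50), nn.Tanh(),
    nn.Linear(50, n_classes), nn.Softmax(dim=-1))

discriminator_D = nn.Sequential(  # outputs P(document embedding is original)
    nn.Linear(d_embed, 10), nn.Tanh(),
    nn.Linear(10, 1), nn.Sigmoid())
```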
For the labels $k_i^m$ in (1), we apply the corpus membership of each document.\nFor the noise distribution $\mathcal{N}$ for deGAN, we apply the uniform distribution $U(-1, 1)$. In (5) for deGAN, $a_1 = \tanh$ and $a_2 = \mathrm{softmax}$, so that the model outputs document embedding vectors which are comparable to the L1-normalized tf-idf vectors of each document. For the discriminator $D$ of deGAN, we apply the word2vec embeddings based on all corpora to initialize its first layer, followed by another hidden layer of 50 nodes. For the discriminator $D$, we apply a learning rate of 0.1, and for the generator $\mathcal{G}$, we apply a learning rate of 0.001, because the initial training phase of deGAN can be unstable. We also apply a batch size of 50 per corpus. For the softmax layers of deGAN, we initialize them with the log of the topic-word matrix in latent Dirichlet allocation (LDA) (Blei et al., 2003) in order to provide intuitive estimates.\nTable 1: A comparison between word2vec and weGAN in terms of the Rand index and the classification accuracy for the CNN data set.\nmetric                 w2v             weGAN\nRI (mean, sd.)         67.88%, 0.02%   68.45%, 0.01%\naccuracy (mean, sd.)   92.05%, 0.06%   92.36%, 0.03%\nTable 2: Synonyms of “Obama,” “Trump,” and “U.S.” before and after weGAN training for the CNN data set.\nObama (w2v): Bush, Trump, Kerry, Abe, Netanyahu, Rouhani, Erdogan, he, Karzai, Tillerson\nObama (weGAN): Trump, Bush, Kerry, Abe, Netanyahu, Erdogan, Tillerson, he, Carter, Rouhani\nTrump (w2v): Obama, Pence, Erdogan, Bush, Duterte, he, Sanders, Macron, Christie, Tillerson\nTrump (weGAN): Obama, Pence, Bush, Christie, Sanders, Clinton, Erdogan, Tillerson, Macron, Duterte\nU.S. (w2v): US, Pentagon, United, Iranian, NATO, Turkish, Qatar, Iran, British, UAE\nU.S. (weGAN): US, Pentagon, United, Iranian, NATO, Turkish, Iran, Qatar, American, UAE\nFor weGAN, we consider two metrics for comparing the embeddings trained from weGAN and those trained from all documents: (1) applying the document embeddings to cluster the documents into $M$ clusters with the K-means algorithm, and calculating the Rand index (RI) (Rand, 1971) against the original corpus membership; (2) finetuning the classifier $C$ and comparing the classification error against an FFNN of the same structure initialized with word2vec (w2v). For deGAN, we compare the performance of finetuning the discriminator of deGAN for document classification against the performance of the same FFNN. Each supervised model is trained for 500 epochs, and the validation data set is used to choose the best epoch." }, { "heading": "4.1 The CNN data set", "text": "In the CNN data set, we collected all news links on www.cnn.com in the GDELT 1.0 Event Database from April 1st, 2013 to July 7, 2017. We then collected the news articles from the links, and kept those belonging to the three largest categories: “politics,” “world,” and “US.” We then divided these documents into 21,674 training documents, from which 2,708 validation documents are held out, and 5,420 testing documents.\nWe hypothesize that because weGAN takes into account document labels in a semi-supervised way, the embeddings trained from weGAN can better incorporate the labeling information and therefore produce document embeddings which are better separated. The results are shown in Table 1 and averaged over 5 randomized runs. Performing Welch's t-test, both changes after weGAN training are statistically significant at a 0.05 significance level.
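The clustering evaluation just described could be reproduced along the following lines. This is a sketch of ours, not the released code; `rand_score` requires scikit-learn >= 0.24, and the per-run numbers below are placeholders merely consistent with the reported means.

```python
# Sketch of the evaluation protocol: K-means clustering of document
# embeddings scored by the Rand index, plus a Welch's t-test across runs.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.cluster import KMeans
from sklearn.metrics import rand_score

def rand_index(embeddings, corpus_labels, M):
    pred = KMeans(n_clusters=M, n_init=10).fit_predict(embeddings)
    return rand_score(corpus_labels, pred)

# Welch's t-test over 5 randomized runs of each method (placeholder values).
ri_w2v = np.array([0.6786, 0.6790, 0.6788, 0.6785, 0.6791])
ri_gan = np.array([0.6844, 0.6846, 0.6845, 0.6843, 0.6847])
print(ttest_ind(ri_gan, ri_w2v, equal_var=False))  # equal_var=False -> Welch
```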
Because the Rand index captures matching accuracy, we observe from Table 1 that weGAN tends to improve both metrics.\nMeanwhile, we also wish to observe the spatial structure of the trained embeddings, which can be explored through the synonyms of each word measured by cosine similarity. On average, the top 10 synonyms of each word differ by 0.22 words after weGAN training, and 20.7% of all words have different top 10 synonyms after training. Therefore, weGAN tends to provide small adjustments rather than structural changes. Table 2 lists the 10 most similar terms for three terms, “Obama,” “Trump,” and “U.S.,” before and after weGAN training, ordered by cosine similarity.\nWe observe from Table 2 that for “Obama,” “Trump” and “Tillerson” are more similar after weGAN training, which means that the structure of the weGAN embeddings can be more up-to-date. For “Trump,” we observe that “Clinton” is not among the synonyms before training, but is after, which shows that the synonyms after training are more relevant. For “U.S.,” we observe that after training, “American” replaces “British” in the list of synonyms, which is also more relevant.\nWe next discuss deGAN. In Table 3, we compare the performance of finetuning the discriminator of deGAN for document classification with the performance of the FFNN initialized with word2vec. The change is also statistically significant at the 0.05 level. From Table 3, we observe that deGAN improves the accuracy of supervised learning.\nTable 4: Bag-of-words representations of original and artificial text in the CNN data set.\nUS (original): climate, change, year, study, according, says, country, temperatures, average\nUS (deGAN): area, efforts, volunteers, town, weapons, shot, local, nearly, department, also\nTo compare the generated samples from deGAN with the original bag-of-words, we randomly select one record in each original and artificial corpus. The records are represented by the most frequent words, sorted by frequency in descending order, with stop words removed. The bag-of-words embeddings are shown in Table 4.\nFrom Table 4, we observe that the bag-of-words embeddings of the original documents tend to contain more named entities, while those of the artificial deGAN documents tend to be more general. There are many additional examples not shown here in which the observed artificial bag-of-words embeddings contain many named entities such as “Turkey,” “ISIS,” etc. from generated documents, e.g. “Syria eventually ISIS U.S. details jet aircraft October video extremist...”\n[Figure 3: 2-d representation of original (red) and artificial (blue) examples in the CNN data set.]\n[Figure 4: 2-d representation of original (red) and artificial (blue) examples in the TIME data set.]\nWe also perform dimensionality reduction using t-SNE (van der Maaten & Hinton, 2008), and plot 100 random samples from each original or artificial category. The original samples are shown in red and the generated ones are shown in blue in Figure 3. We do not further distinguish the categories because there is no clear distinction between the three original corpora, “politics,” “world,” and “US.”\nWe observe that the original and artificial examples are generally mixed together and not well separable, which means that the artificial examples are similar to the original ones. However, we also observe that the artificial samples tend to be more centered and have no outliers (represented by the outermost red oval).
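A minimal sketch of ours for the t-SNE visualization just described follows; function and argument names are assumptions, and `original`/`artificial` stand for arrays of document embeddings with at least 100 rows each.

```python
# Sketch of the t-SNE plot: 100 random samples per group, original in red
# and artificial in blue, as in Figures 3 and 4.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(original, artificial, n=100, out="tsne.png"):
    rng = np.random.default_rng(0)
    o = original[rng.choice(len(original), n, replace=False)]
    a = artificial[rng.choice(len(artificial), n, replace=False)]
    z = TSNE(n_components=2).fit_transform(np.vstack([o, a]))
    plt.scatter(z[:n, 0], z[:n, 1], c="red", label="original")
    plt.scatter(z[n:, 0], z[n:, 1], c="blue", label="artificial")
    plt.legend(); plt.savefig(out)
```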
}, { "heading": "4.2 The TIME data set", "text": "In the TIME data set, we collected all news links on time.com in the GDELT 1.0 Event Database from April 1st, 2013 to July 7, 2017. We then collected the news articles from the links, and kept those belonging to the five largest categories: “Entertainment,” “Ideas,” “Politics,” “US,” and “World.” We divided these documents into 12,286 training documents, from which 1,535 validation documents are held out, and 3,075 testing documents.\nTable 5 compares the clustering results of word2vec and weGAN, and the classification accuracy of an FFNN initialized with word2vec, finetuned weGAN, and finetuned deGAN. The results in Table 5 are the counterparts of Table 1 and Table 3 for the TIME data set. The differences are also significant at the 0.05 level.\nTable 6: Synonyms of “Obama,” “Trump,” and “U.S.” before and after weGAN training for the TIME data set.\nObama (w2v): Trump, Bush, Xi, Erdogan, Rouhani, Reagan, Hollande, Duterte, Abe, Jokowi\nObama (weGAN): Trump, Bush, Xi, Erdogan, Reagan, Rouhani, Hollande, Abe, Jokowi, Duterte\nTrump (w2v): Obama, Erdogan, Rubio, Duterte, Bush, Putin, Sanders, Xi, Macron, Pence\nTrump (weGAN): Obama, Erdogan, Rubio, Bush, Sanders, Putin, Duterte, Xi, Macron, Pence\nU.S. (w2v): NATO, Iran, Japan, Pentagon, Russia, Pakistan, Tehran, EU, Ukrainian, Moscow\nU.S. (weGAN): NATO, Pentagon, Iran, Japan, Russia, Tehran, Pakistan, EU, Ukrainian, Moscow\nFrom Table 5, we observe that both GAN models yield improved performance on supervised learning. For weGAN, on average, the top 10 synonyms of each word differ by 0.27 words after weGAN training, and 24.8% of all words have different top 10 synonyms after training. We also compare the synonyms of the same common words, “Obama,” “Trump,” and “U.S.,” which are listed in Table 6. In the TIME data set, for “Obama,” “Reagan” is ranked slightly higher as an American president. For “Trump,” “Bush” and “Sanders” are ranked higher as American presidents or candidates. For “U.S.,” we note that “Pentagon” is ranked higher after weGAN training, which we think is also reasonable because the term is closely related to the U.S. government.\nFor deGAN, we also compare the original and artificial samples in terms of the highest-probability words, which are shown in Table 7. We also perform dimensionality reduction using t-SNE for 100 examples per corpus and plot them in Figure 4. All these figures and tables show results similar to Section 4.1." }, { "heading": "A Algorithms for weGAN and deGAN", "text": "Algorithm 1 (for weGAN).\n1. Train $\mathcal{G}$ based on $f$ from all corpora $C_1, \ldots, C_M$.\n2. Randomly initialize the weights and biases of the classifier $C$ and discriminator $D$.\n3. Until the maximum number of iterations is reached:\n(a) Update $C$ and $D$ according to (2) and (4) given a mini-batch $S_1$ of training examples $\{d_i^m\}_{i,m}$.\n(b) Update $\mathcal{G}$ according to (3) given a mini-batch $S_2$ of training examples $\{d_i^m\}_{i,m}$.\n4. Output $\mathcal{G}$ as the cross-corpus word embeddings.\nAlgorithm 2 (for deGAN).\n1. Randomly initialize the weights of $\mathcal{G}_1, \ldots, \mathcal{G}_M$.\n2. Initialize the discriminator $D$ with the weights of the first layer (which takes document embeddings as the input) initialized by word embeddings, and other parameters randomly initialized.\n3. Until the maximum number of iterations is reached:\n(a) Update $D$ according to (8) given a mini-batch of training examples $d_i^m$ and samples from the noise $n$.\n(b) Update $\mathcal{G}_1, \ldots, \mathcal{G}_M$ according to (9) given a mini-batch of training examples $d_i^m$ and samples from the noise $n$.\n4. Output $\mathcal{G}_1, \ldots, \mathcal{G}_M$ as generators of document embeddings and $D$ as a corpus classifier.
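A simplified sketch of ours for step 3 of Algorithm 1 follows. It reuses the `classifier_C`/`discriminator_D` shapes from the earlier sketch; `embed_with_G` (building $f_\mathcal{G}(d)$ from the trainable embeddings) and the batch format are assumptions, and the D-fooling term uses the common non-saturating surrogate for the $\log(1 - D(f_\mathcal{G}(d)))$ term in (3).

```python
# Sketch of the alternating weGAN updates: (a) trains C and D on
# objectives (2) and (4); (b) trains the shared embeddings G on (3).
import torch

def wegan_epoch(G, D, C, batches, opt_DC, opt_G, bce, nll):
    for g_m, docs, labels in batches:       # per-corpus embeddings and labels
        f_G = embed_with_G(docs, G)         # f_G(d) built from trainable G
        # (a) update D (real vs. G-embedded docs) and C (label likelihood);
        #     f_G is detached so step (a) does not move G.
        loss_dc = bce(D(g_m), torch.ones(len(g_m), 1)) \
                + bce(D(f_G.detach()), torch.zeros(len(g_m), 1)) \
                + nll(torch.log(C(f_G.detach())), labels)
        opt_DC.zero_grad(); loss_dc.backward(); opt_DC.step()
        # (b) update G: fool D (non-saturating form) and keep the
        #     documents classifiable, as in objective (3).
        f_G = embed_with_G(docs, G)
        loss_g = bce(D(f_G), torch.ones(len(g_m), 1)) \
               + nll(torch.log(C(f_G)), labels)
        opt_G.zero_grad(); loss_g.backward(); opt_G.step()
```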
}, { "heading": "B Proof of Proposition 1", "text": "Since $\mathcal{X}$ is bounded, all of the integrals exhibited next are well-defined and finite. Since $n$, $a_1$, and $a_2$ are continuous, it follows that for any parameters, $\mathcal{G}_m(n)$ is a continuous random variable with probability density $q_m$ with finite support. From (7),\n$D_\mathcal{G}^* = \arg\max_{D} \left\{ \sum_{m=1}^{M} \int p_m(x) \log(e_m^T D(x))\, dx + \sum_{m=1}^{M} \int q_m(x) \log(e_{M+m}^T D(x))\, dx \right\} = \arg\max_{D} \int \left[ \sum_{m=1}^{M} p_m(x) \log(e_m^T D(x)) + \sum_{m=1}^{M} q_m(x) \log(e_{M+m}^T D(x)) \right] dx \quad (10)$\nThis problem reduces to\n$\max_{b_1, \ldots, b_M} \sum_{m=1}^{M} a_m \log b_m \quad \text{subject to} \quad \sum_{m=1}^{M} b_m = 1, \quad (11)$\nthe solution of which is $b_m^* = a_m / \sum_{m=1}^{M} a_m$, $m = 1, \ldots, M$. Therefore, the solution to (10) is\n$D_\mathcal{G}^*(x) = \frac{(p_1(x), \ldots, p_M(x), q_1(x), \ldots, q_M(x))}{\sum_{m=1}^{M} p_m(x) + \sum_{m=1}^{M} q_m(x)} \quad (12)$\nWe then obtain from (6) that\n$q_1^*, \ldots, q_M^* \in \arg\min_{q_1, \ldots, q_M} \sum_{m=1}^{M} \int q_m(x) \log\left[\frac{q_m(x)}{q_m(x) + p_m(x)}\right] dx = \arg\min_{q_1, \ldots, q_M} -M \log 2 + \sum_{m=1}^{M} \int q_m(x) \log\left[\frac{q_m(x)}{(q_m(x) + p_m(x))/2}\right] dx = \arg\min_{q_1, \ldots, q_M} -M \log 2 + \sum_{m=1}^{M} \mathrm{KL}(q_m \| (q_m + p_m)/2).$\nFrom the non-negativity of the Kullback-Leibler divergence, we conclude that\n$(q_1^*, \ldots, q_M^*) = (p_1, \ldots, p_M).$" }, { "heading": "C The 20 Newsgroups data set", "text": "The 20 Newsgroups data set is a collection of news documents with 20 categories. To reduce the number of categories so that the GAN models are more compact and have more samples per corpus, we grouped the documents into 6 super-categories: “religion,” “computer,” “cars,” “sport,” “science,” and “politics” (“misc” is ignored because of its noisiness). We considered each super-category as a different corpus. We then divided these documents into 10,708 training documents, from which 1,335 validation documents are held out, and 7,134 testing documents. We train weGAN and deGAN as described in the beginning of Section 4, except that we use a learning rate of 0.01 for the discriminator in deGAN to stabilize the cost function. Table 8 compares the clustering results of word2vec and weGAN, and the classification accuracy of the FFNN initialized with word2vec, finetuned weGAN, and finetuned deGAN. All comparisons are statistically significant at the 0.05 level. The other results are similar to the previous two data sets and are thereby omitted here." }, { "heading": "D The Reuters-21578 data set", "text": "The Reuters-21578 data set is a collection of newswire articles. Because the data set is highly skewed, we considered the eight categories with more than 100 training documents: “earn,” “acq,” “crude,” “trade,” “money-fx,” “interest,” “money-supply,” and “ship.” We then divided these documents into 5,497 training documents, from which 692 validation documents are held out, and 2,207 testing documents. We train weGAN and deGAN in the same way as in the 20 Newsgroups data set. Table 9 compares the clustering results of word2vec and weGAN, and the classification accuracy of the FFNN initialized with word2vec, finetuned weGAN, and finetuned deGAN. All comparisons are statistically significant at the 0.05 level except the Rand index. The other results are similar to the CNN and TIME data sets and are thereby omitted here." } ]
2,019
null
SP:a078647f423f16068679fd5621f3600f1d96f7bf
[ "The paper approaches the CMDP problem, in which one wishes to learn a max-return policy subject to trajectory-based constraints. The paper proposes a technique based on the introduced concept of 'backward value functions'. These functions satisfy a sort of Bellman equation. The paper proposes a safe policy improvement step based on these value functions, with theoretical guarantees on the safety of the resulting policy. The method is evaluated on gridworlds and MuJoCo tasks, showing good performance.", "This paper presents a new approach for solving Constrained MDPs. Because the cost constraint is cumulative, the best action depends on the cumulative cost so far. They address this issue by learning a backward value function of the estimated cumulative cost so far. Their theoretical results show that the same properties that hold for forward value functions also hold for backward ones. They are then able to use the forward and backward cost estimates to constrain the action selection, by adding a safety layer to the algorithm. The results show that the method does a better job of meeting safety constraints than the Lyapunov-based method." ]
Although Reinforcement Learning (RL) algorithms have found tremendous success in simulated domains, they often cannot directly be applied to physical systems, especially in cases where there are hard constraints to satisfy (e.g. on safety or resources). In standard RL, the agent is incentivized to explore any behavior as long as it maximizes rewards, but in the real world undesired behavior can damage either the system or the agent in a way that breaks the learning process itself. In this work, we model the problem of learning with constraints as a Constrained Markov Decision Process, and provide a new on-policy formulation for solving it. A key contribution of our approach is to translate cumulative cost constraints into state-based constraints. Through this, we define a safe policy improvement method which maximizes returns while ensuring that the constraints are satisfied at every step. We provide theoretical guarantees under which the agent converges while ensuring safety over the course of training. We also highlight computational advantages of this approach. The effectiveness of our approach is demonstrated on safe navigation tasks and in safety-constrained versions of MuJoCo environments, with deep neural networks.
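The core idea described above, combining a backward estimate of cost already accrued with a forward estimate of cost to come to filter actions, can be sketched as follows. This is our own illustrative sketch based on the abstract and the reviews, not the paper's exact algorithm; all names (`V_backward`, `Q_cost_forward`, the budget `d`) and the naive fallback when no action appears safe are assumptions.

```python
# Sketch of a state-based safety layer: keep only actions whose estimated
# total trajectory cost (backward so far + forward to come) fits the budget.
import numpy as np

def safe_actions(state, actions, V_backward, Q_cost_forward, d):
    """Return the subset of actions estimated to satisfy the constraint."""
    total = V_backward[state] + np.array(
        [Q_cost_forward[state, a] for a in actions])
    return [a for a, t in zip(actions, total) if t <= d]

def act(state, actions, Q_reward, V_backward, Q_cost_forward, d):
    allowed = safe_actions(state, actions, V_backward, Q_cost_forward, d) \
              or actions               # naive fallback if nothing looks safe
    return max(allowed, key=lambda a: Q_reward[state, a])
```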
[]
[ { "authors": [ "Pieter Abbeel", "Andrew Y Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In Proceedings of the twenty-first international conference on Machine learning,", "year": 2004 }, { "authors": [ "Joshua Achiam", "David Held", "Aviv Tamar", "Pieter Abbeel" ], "title": "Constrained policy optimization", "venue": "arXiv preprint arXiv:1705.10528,", "year": 2017 }, { "authors": [ "Eitan Altman" ], "title": "Constrained Markov decision processes, volume 7", "venue": "CRC Press,", "year": 1999 }, { "authors": [ "Felix Berkenkamp", "Matteo Turchetta", "Angela Schoellig", "Andreas Krause" ], "title": "Safe model-based reinforcement learning with stability guarantees", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Dimitri P Bertsekas" ], "title": "Dynamic programming and optimal control, volume 1", "venue": "Athena scientific Belmont, MA,", "year": 1995 }, { "authors": [ "Yinlam Chow", "Mohammad Ghavamzadeh", "Lucas Janson", "Marco Pavone" ], "title": "Risk-constrained reinforcement learning with percentile risk criteria", "venue": "arXiv preprint arXiv:1512.01629,", "year": 2015 }, { "authors": [ "Yinlam Chow", "Ofir Nachum", "Edgar Duenez-Guzman", "Mohammad Ghavamzadeh" ], "title": "A lyapunovbased approach to safe reinforcement learning", "venue": "arXiv preprint arXiv:1805.07708,", "year": 2018 }, { "authors": [ "Yinlam Chow", "Ofir Nachum", "Aleksandra Faust", "Mohammad Ghavamzadeh", "Edgar DuenezGuzman" ], "title": "Lyapunov-based safe policy optimization for continuous control", "venue": null, "year": 1901 }, { "authors": [ "Gal Dalal", "Krishnamurthy Dvijotham", "Matej Vecerik", "Todd Hester", "Cosmin Paduraru", "Yuval Tassa" ], "title": "Safe exploration in continuous action spaces", "venue": "arXiv preprint arXiv:1801.08757,", "year": 2018 }, { "authors": [ "Yan Duan", "Xi Chen", "Rein Houthooft", "John Schulman", "Pieter Abbeel" ], "title": "Benchmarking deep reinforcement learning for continuous control", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Benjamin Eysenbach", "Shixiang Gu", "Julian Ibarz", "Sergey Levine" ], "title": "Leave no trace: Learning to reset for safe and autonomous reinforcement learning", "venue": "arXiv preprint arXiv:1711.06782,", "year": 2017 }, { "authors": [ "Torsten Koller", "Felix Berkenkamp", "Matteo Turchetta", "Andreas Krause" ], "title": "Learning-based model predictive control for safe exploration and reinforcement learning", "venue": "arXiv preprint arXiv:1803.08287,", "year": 2018 }, { "authors": [ "Vijay R Konda", "John N Tsitsiklis" ], "title": "Actor-critic algorithms. In Advances in neural information processing", "venue": null, "year": 2000 }, { "authors": [ "Ilya Kostrikov" ], "title": "Pytorch implementations of reinforcement learning algorithms", "venue": "https://github. 
com/ikostrikov/pytorch-a2c-ppo-acktr-gail,", "year": 2018 }, { "authors": [ "Ian Michael Mitchell" ], "title": "Application of level set methods to control and reachability problems in continuous and hybrid systems", "venue": null, "year": 2003 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Teodor Mihai Moldovan", "Pieter Abbeel" ], "title": "Safe exploration in markov decision processes", "venue": "arXiv preprint arXiv:1205.4810,", "year": 2012 }, { "authors": [ "Tetsuro Morimura", "Eiji Uchibe", "Junichiro Yoshimoto", "Jan Peters", "Kenji Doya" ], "title": "Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning", "venue": "Neural Computation,", "year": 2010 }, { "authors": [ "Gergely Neu", "Anders Jonsson", "Vicenç Gómez" ], "title": "A unified view of entropy-regularized markov decision processes", "venue": "arXiv preprint arXiv:1705.07798,", "year": 2017 }, { "authors": [ "Jing Peng", "Ronald J Williams" ], "title": "Incremental multi-step q-learning", "venue": "In Machine Learning Proceedings", "year": 1994 }, { "authors": [ "Joelle Pineau" ], "title": "The machine learning reproducibility checklist", "venue": "https://www.cs.mcgill.ca/ ~jpineau/ReproducibilityChecklist.pdf,", "year": 2018 }, { "authors": [ "Martin L Puterman" ], "title": "Markov decision processes: discrete stochastic dynamic programming", "venue": null, "year": 2014 }, { "authors": [ "Gavin A Rummery", "Mahesan Niranjan" ], "title": "On-line Q-learning using connectionist systems, volume 37", "venue": null, "year": 1994 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Richard S Sutton" ], "title": "Learning to predict by the methods of temporal differences", "venue": "Machine learning,", "year": 1988 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Aviv Tamar", "Dotan Di Castro", "Shie Mannor" ], "title": "Policy evaluation with variance related risk criteria in markov decision processes", "venue": "arXiv preprint arXiv:1301.0104,", "year": 2013 }, { "authors": [ "Chen Tessler", "Daniel J Mankowitz", "Shie Mannor" ], "title": "Reward constrained policy optimization", "venue": "arXiv preprint arXiv:1805.11074,", "year": 2018 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Matteo Turchetta", "Felix Berkenkamp", "Andreas Krause" ], "title": "Safe exploration in finite markov decision processes with gaussian processes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Akifumi Wachi", "Yanan Sui", "Yisong Yue", "Masahiro Ono" ], "title": "Safe 
exploration and optimization of constrained mdps using gaussian processes", "venue": "In AAAI Conference on Artificial Intelligence (AAAI),", "year": 2018 }, { "authors": [ "Yuhuai Wu", "Elman Mansimov", "Roger B Grosse", "Shun Liao", "Jimmy Ba" ], "title": "Scalable trust-region method for deep reinforcement learning using kronecker-factored approximation", "venue": "In Advances in neural information processing systems,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement Learning (RL) provides a sound decision-theoretic framework to optimize the behavior of learning agents in an interactive setting (Sutton & Barto, 2018). Recently, the field of RL has found success in many high-dimensional domains, like video games, Go, robot locomotion and navigation. However, most of the success of RL algorithms has been limited to simulators, where the learning algorithm has the ability to reset the simulator. In the physical world, an agent will need to avoid harmful behavior (e.g. damaging the environment or the agent’s hardware) while learning to explore behaviors that maximize the reward.\nA few popular approaches for avoiding undesired behaviors for high-dimensional systems include reward-shaping (Moldovan & Abbeel, 2012), reachability-preserving algorithms (Mitchell, 2003; Eysenbach et al., 2017), state-level surrogate constraint satisfaction algorithms (Dalal et al., 2018), risk-sensitive algorithms (Tamar et al., 2013; Chow et al., 2015) and apprenticeship learning (Abbeel & Ng, 2004). There also exists model-based Bayesian approaches that are focused on imposing the constraints via the dynamics (such as classifying parts of state space as unsafe) and then using model predictive control to incorporate the constraints in the policy optimization and planning (Turchetta et al., 2016; Berkenkamp et al., 2017; Wachi et al., 2018; Koller et al., 2018). A natural way to model safety is via constraint satisfaction. A standard formulation for adding constraints to RL problems is the Constrained Markov Decision Process (CMDP) framework (Altman, 1999), wherein the environment is extended to also provide feedback on constraint costs. The agent must then attempt to maximize its expected return while also satisfying cumulative constraints.\nA few algorithms have been proposed to solve CMDPs for high-dimensional domains with continuous action spaces - however they come with their own caveats. Reward Constrained Policy Optimization (Tessler et al., 2018) and Primal Dual Policy Optimization (Chow et al., 2015) do not guarantee constraint satisfaction during the learning procedure, only on the final policy. Constrained Policy Optimization (Achiam et al., 2017) provides monotonic policy improvement but is computationally expensive due to requiring a backtracking line-search procedure and conjugate gradient algorithm for approximating the Fisher Information Matrix. Lyapunov-based Safe Policy Optimization (Chow\net al., 2019) requires solving a Linear Program (LP) at every step of policy evaluation, although they show that there exists heuristics which can be substituted for the LP at the expense of theoretical guarantees.\nIn this work, we propose an alternate formulation for solving CMDPs that transforms trajectory-level constraints into localized state-dependent constraints, through which a safe policy improvement step can be defined. In our approach, we define a notion of Backward Value Functions, which act as an estimator of the expected cost collected by the agent so far and can be learned via standard RL bootstrap techniques. We provide conditions under which this new formulation is able to solve CMDPs without violating the constraints during the learning process. Our formulation allows us to define state-level constraints without explicitly solving a LP or the Dual problem at every iteration. 
Our method is implemented as a reduction to any model-free on-policy bootstrap-based RL algorithm, both for deterministic and stochastic policies, and for discrete and continuous action spaces. We provide empirical evidence for our approach with deep RL methods on various safety benchmarks, including 2D navigation grid worlds (Leike et al., 2017; Chow et al., 2018) and MuJoCo tasks (Achiam et al., 2017; Chow et al., 2019)." }, { "heading": "2 CONSTRAINED MARKOV DECISION PROCESSES", "text": "We write \(\mathcal{P}(Y)\) for the set of probability distributions on a space \(Y\). A Markov Decision Process (MDP) (Puterman, 2014) is a tuple \((\mathcal{X}, \mathcal{A}, \mathcal{P}, r, x_0)\), where \(\mathcal{X}\) is a set of states, \(\mathcal{A}\) is a set of actions, \(r : \mathcal{X} \times \mathcal{A} \to [0, R_{MAX}]\) is a reward function, \(\mathcal{P} : \mathcal{X} \times \mathcal{A} \to \mathcal{P}(\mathcal{X})\) is a transition probability function, and \(x_0\) is a fixed starting state. For simplicity we assume a deterministic reward function and starting state, but our results generalize.
A Constrained Markov Decision Process (CMDP) (Altman, 1999) is an MDP with additional constraints that restrict the set of permissible policies for the MDP. Formally, a CMDP is a tuple \((\mathcal{X}, \mathcal{A}, \mathcal{P}, r, x_0, d, d_0)\), where \(d : \mathcal{X} \to [0, D_{MAX}]\) is the cost function¹ and \(d_0 \in \mathbb{R}_{\ge 0}\) is the maximum allowed cumulative cost. The set of feasible policies that satisfy the CMDP is the subset of stationary policies \(\Pi_D := \{\pi : \mathcal{X} \to \mathcal{P}(\mathcal{A}) \mid \mathbb{E}[\sum_{t=0}^{T} d(x_t) \mid x_0, \pi] \le d_0\}\). We consider a finite time horizon \(T\) after which the episode terminates. The expected sum of rewards following a policy \(\pi\) from an initial state \(x\) is given by the value function \(V^\pi(x) = \mathbb{E}[\sum_{t=0}^{T} r(x_t, a_t) \mid \pi, x]\). Analogously, the expected sum of costs is given by the cost value function \(V_D^\pi(x) = \mathbb{E}[\sum_{t=0}^{T} d(x_t) \mid \pi, x]\). The RL problem in the CMDP is to find the feasible policy which maximizes expected returns from the initial state \(x_0\), i.e.,
\[ \pi^* = \arg\max_{\pi \in \Pi_D} V^\pi(x_0). \]
An important point to note about CMDPs is that, in the original formulation, the cost function depends on immediate states but the constraint is cumulative and thus depends on the entire trajectory.
In the case of MDPs where a model of the environment is not known or is not easily obtained, it is still possible for the agent to find the optimal policy using Temporal Difference (TD) methods (Sutton, 1988). Broadly, these methods update the estimates of the value functions via bootstraps of previous estimates on sampled transitions (we refer the reader to Sutton & Barto (2018) for more information). In the on-policy setting, we alternate between estimating the state-action value function \(Q^\pi\) for a given \(\pi\) and updating the policy to be greedy with respect to the value function." }, { "heading": "3 SAFE POLICY ITERATION VIA BACKWARD VALUE FUNCTIONS", "text": "Our approach proposes to convert the trajectory-level constraints of the CMDP into single-step state-wise constraints in such a way that satisfying the state-wise formulation will entail satisfying the original trajectory-level problem. The advantages of this approach are twofold: i) working with single-step state-wise constraints allows us to obtain analytical solutions to the optimization problem, and ii) the state-wise constraints can be defined via value-function-like quantities and can thus be estimated with well-studied value-based methods.
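To make the CMDP objects above concrete, here is a minimal Monte-Carlo sketch of estimating \(V^\pi(x_0)\) and \(V_D^\pi(x_0)\) and checking membership in \(\Pi_D\). The environment interface (`env.reset`, `env.step`) and all names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def evaluate_policy(env, policy, cost_fn, horizon, n_episodes=100):
    """Monte-Carlo estimates of V^pi(x0) and V_D^pi(x0) for a CMDP.

    Assumed interface: `env.reset()` returns the start state x0,
    `env.step(a)` returns (next_state, reward, done), and `cost_fn(x)`
    is the state-based constraint cost d(x).
    """
    returns, costs = [], []
    for _ in range(n_episodes):
        x = env.reset()
        ep_ret, ep_cost = 0.0, cost_fn(x)   # cost includes d(x0)
        for _ in range(horizon):
            x, r, done = env.step(policy(x))
            ep_ret += r
            ep_cost += cost_fn(x)
            if done:
                break
        returns.append(ep_ret)
        costs.append(ep_cost)
    return float(np.mean(returns)), float(np.mean(costs))

def is_feasible(env, policy, cost_fn, horizon, d0):
    # pi is in Pi_D iff its expected cumulative cost from x0 is at most d0.
    _, v_d = evaluate_policy(env, policy, cost_fn, horizon)
    return v_d <= d0
```

The point of the method developed next is precisely to avoid relying on such whole-trajectory checks during learning, by localizing the constraint to individual states.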
The state-wise constraints are defined via Backward Value Functions in Section 3.2, and in Section 3.3 we provide a safe policy iteration procedure which satisfies said constraints (and thus the original problem).
¹Here the cost only depends on states and not on state-action pairs." }, { "heading": "3.1 BACKWARD MARKOV CHAIN", "text": "Unlike in traditional RL, in the CMDP setting the agent needs to take into account the constraints which it has accumulated so far in order to plan accordingly for the future. Intuitively, the accumulated cost so far can be estimated via the cost value function \(V_D\) running “backward in time”. Before giving the details of our approach and formally introducing the Backward Value Functions, we review the main ideas, which are built upon the work of Morimura et al. (2010), who also considered time-reversed Markov chains but from the standpoint of estimating the gradient of the log stationary distribution; we extend these ideas to TD methods. Assumption 3.1 (Stationarity). The MDP is ergodic for any policy \(\pi\), i.e., the Markov chain characterized by the transition probability \(P^\pi(x_{t+1}|x_t) = \sum_{a_t \in \mathcal{A}} \mathcal{P}(x_{t+1}|x_t, a_t)\,\pi(a_t|x_t)\) is irreducible and aperiodic.
Let \(\mathcal{M}(\pi)\) denote the Markov chain characterized by the transition probability \(P^\pi(x_{t+1}|x_t)\). The above assumption implies that there exists a unique stationary distribution \(\eta^\pi\) associated with \(\pi\), satisfying \(\eta^\pi(x_{t+1}) = \sum_{x_t \in \mathcal{X}} P^\pi(x_{t+1}|x_t)\,\eta^\pi(x_t)\). We abuse notation and denote \(P^\pi(x_{t+1}, a_t|x_t) = \mathcal{P}(x_{t+1}|x_t, a_t)\,\pi(a_t|x_t)\). According to Bayes' rule, the probability \(q(x_{t-1}, a_{t-1}|x_t)\) of a previous state-action pair \((x_{t-1}, a_{t-1})\) leading to the current state \(x_t\) is given by:
\[ q(x_{t-1}, a_{t-1}|x_t) = \frac{\mathcal{P}(x_t|x_{t-1}, a_{t-1})\,\Pr(x_{t-1}, a_{t-1})}{\sum_{x_{t-1} \in \mathcal{X}} \sum_{a_{t-1} \in \mathcal{A}} \mathcal{P}(x_t|x_{t-1}, a_{t-1})\,\Pr(x_{t-1}, a_{t-1})}. \]
From Assumption 3.1, we have that \(\Pr(x_{t-1}, a_{t-1}) = \eta^\pi(x_{t-1})\,\pi(a_{t-1}|x_{t-1})\), and \(\sum_{x_{t-1} \in \mathcal{X}} \sum_{a_{t-1} \in \mathcal{A}} \mathcal{P}(x_t|x_{t-1}, a_{t-1})\,\Pr(x_{t-1}, a_{t-1}) = \eta^\pi(x_t)\). We denote the posterior \(q(x_{t-1}, a_{t-1}|x_t)\) as the backward (or time-reversed) probability \(\overleftarrow{P}^\pi(x_{t-1}, a_{t-1}|x_t)\), and we have:
\[ \overleftarrow{P}^\pi(x_{t-1}, a_{t-1}|x_t) = \frac{\mathcal{P}(x_t|x_{t-1}, a_{t-1})\,\eta^\pi(x_{t-1})\,\pi(a_{t-1}|x_{t-1})}{\eta^\pi(x_t)} = \frac{P^\pi(x_t, a_{t-1}|x_{t-1})\,\eta^\pi(x_{t-1})}{\eta^\pi(x_t)}. \tag{1} \]
The forward Markov chain, characterized by the transition matrix \(P^\pi(x_{t+1}|x_t)\), runs forward in time, i.e., it gives the probability of the next state in which the agent will end up. Analogously, a backward Markov chain is denoted by the transition matrix \(\overleftarrow{P}^\pi(x_{t-1}|x_t) = \sum_{a_{t-1} \in \mathcal{A}} \overleftarrow{P}^\pi(x_{t-1}, a_{t-1}|x_t)\), and describes the state and action the agent took to reach the current state. Definition 3.1 (Backward Markov Chain). A backward Markov chain associated with \(\mathcal{M}(\pi)\) is denoted by \(\overleftarrow{\mathcal{B}}(\pi)\) and is characterized by the transition probability \(\overleftarrow{P}^\pi(x_{t-1}|x_t)\)." }, { "heading": "3.2 BACKWARD VALUE FUNCTION", "text": "We define the Backward Value Function (BVF) to be a value function running on the backward Markov chain \(\overleftarrow{\mathcal{B}}(\pi)\). A BVF is the expected sum of returns or costs collected by the agent so far. We are mainly interested in maintaining estimates of the cumulative cost incurred at a state in order to express the total constraint state-wise.
We note that, since every Markov chain \(\mathcal{M}(\pi)\) is ergodic by Assumption 3.1, the corresponding backward Markov chain \(\overleftarrow{\mathcal{B}}(\pi)\) is also ergodic (Morimura et al., 2010, Prop. B.1). In particular, every policy \(\pi\) can reach the initial state via some path in the transition graph of the backward Markov chain. Thus, the backward Markov chain is also finite-horizon for some \(T_B\), with \(x_0\) corresponding to the terminal state.
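For a tabular chain, the backward kernel of Eq. (1), marginalized over actions, can be computed directly from the forward transition matrix and its stationary distribution. The sketch below is our own illustration (variable names are not from the paper); it also checks that the reversed kernel is a valid conditional distribution.

```python
import numpy as np

def stationary_distribution(P, n_iters=10_000, tol=1e-12):
    """Power iteration for eta^pi, with P row-stochastic:
    P[x, x'] = P^pi(x' | x) and eta = eta @ P at the fixed point."""
    eta = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(n_iters):
        new = eta @ P
        if np.abs(new - eta).max() < tol:
            break
        eta = new
    return eta

def backward_kernel(P):
    """Action-marginalized time reversal of Eq. (1):
    B[x_t, x_{t-1}] = P[x_{t-1}, x_t] * eta[x_{t-1}] / eta[x_t]."""
    eta = stationary_distribution(P)
    return P.T * eta[None, :] / eta[:, None]

# Tiny sanity check on a random ergodic chain.
rng = np.random.default_rng(1)
P = rng.uniform(0.1, 1.0, size=(4, 4))
P /= P.sum(axis=1, keepdims=True)
B = backward_kernel(P)
assert np.allclose(B.sum(axis=1), 1.0)  # each row of B is a distribution
```

The row-sum check is exactly the stationarity equation: summing \(P^\pi(x_t|x_{t-1})\,\eta^\pi(x_{t-1})\) over \(x_{t-1}\) recovers \(\eta^\pi(x_t)\).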
We define a finite-horizon Backward Value Function for cost as:
\[ \overleftarrow{V}_D^\pi(x_t) = \mathbb{E}_{\overleftarrow{\mathcal{B}}(\pi)}\!\left[ \sum_{k=0}^{T_B} d(x_{t-k}) \,\Big|\, x_t \right]. \tag{2} \]
Proposition 3.1 (Sampling). Samples from the forward Markov chain \(\mathcal{M}(\pi)\) can be used directly to estimate the statistics of the backward Markov chain \(\overleftarrow{\mathcal{B}}(\pi)\) (or the Backward Value Function). We have:
\[ \mathbb{E}_{\overleftarrow{\mathcal{B}}(\pi)}\!\left[ \sum_{k=0}^{K} d(x_{t-k}) \,\Big|\, x_t \right] = \mathbb{E}_{\mathcal{M}(\pi)}\!\left[ \sum_{k=0}^{K} d(x_{t-k}) \,\Big|\, x_t, \eta^\pi(x_{t-K}) \right] = \mathbb{E}_{\mathcal{M}(\pi)}\!\left[ \sum_{k=0}^{K} d(x_{t+k}) \,\Big|\, x_{t+K}, \eta^\pi(x_t) \right], \tag{3} \]
where \(\mathbb{E}_{\mathcal{M}(\pi)}\) and \(\mathbb{E}_{\overleftarrow{\mathcal{B}}(\pi)}\) are expectations over the forward and backward chains respectively. Equation (3) holds true even in the limit \(K \to \infty\).
The proof is given in Appendix B.1. Using the above proposition, we get an interchangeability property that removes the need to sample from the backward chain. We can use the traditional RL setting and draw samples from the forward chain and still estimate the BVFs. Equation (2) can be written recursively as:
\[ \overleftarrow{V}_D^\pi(x_t) = \mathbb{E}_{\overleftarrow{\mathcal{B}}(\pi)}\!\left[ d(x_t) + \overleftarrow{V}_D^\pi(x_{t-1}) \right]. \]
In operator form, the above equation can also be written as:
\[ \big(\overleftarrow{\mathcal{T}}^\pi \overleftarrow{V}_D^\pi\big)(x_t) = \mathbb{E}_{x_{t-1} \sim \overleftarrow{P}^\pi}\!\left[ d(x_t) + \overleftarrow{V}_D^\pi(x_{t-1}) \right]. \tag{4} \]
Proposition 3.2 (Fixed point). For a policy \(\pi\), the associated Backward Value Function vector \(\overleftarrow{V}^\pi\) satisfies \(\lim_{k \to \infty} (\overleftarrow{\mathcal{T}}^\pi)^k \overleftarrow{V} = \overleftarrow{V}^\pi\) for every vector \(\overleftarrow{V}\), and \(\overleftarrow{V}^\pi\) is the unique solution of the equation \(\overleftarrow{V}^\pi = \overleftarrow{\mathcal{T}}^\pi \overleftarrow{V}^\pi\).
The proof is given in Appendix B.2. The above proposition allows us to soundly extend the RL methods based on Bellman operators to the estimation of BVFs." }, { "heading": "3.3 SAFE POLICY IMPROVEMENT VIA BVF-BASED CONSTRAINTS", "text": "With the Backward Value Function framework, the trajectory-level optimization problem associated with a CMDP can be rewritten in state-wise form. Recall that a feasible policy must satisfy the constraint:
\[ \mathbb{E}_{\mathcal{M}(\pi)}\!\left[ \sum_{k=0}^{T} d(x_k) \,\Big|\, x_0 \right] \le d_0. \]
Alternatively, for each timestep \(t \in [0, T]\) of a trajectory:
\[ \mathbb{E}\!\left[ \sum_{k=0}^{t} d(x_k) \,\Big|\, x_0, \pi \right] + \mathbb{E}\!\left[ \sum_{k=t}^{T} d(x_k) \,\Big|\, x_0, \pi \right] - \mathbb{E}\big[ d(x_t) \,\big|\, x_0 \big] \le d_0. \]
Via the identities \(\mathbb{E}[\sum_{k=t}^{T} d(x_k) \mid x_0, \pi] \le \mathbb{E}_{x_t \sim \delta_{x_0}(P^\pi)^t}[V_D^\pi(x_t)]\) and \(\mathbb{E}[\sum_{k=0}^{t} d(x_k) \mid x_0, \pi] \le \mathbb{E}_{x_t \sim \delta_{x_0}(P^\pi)^t}[\overleftarrow{V}_D^\pi(x_t)]\) (derived in Appendix C)², we remark that the quantity on the LHS is less than the expectation over \(t\)-step trajectories of \(\overleftarrow{V}_D^\pi(x_t) + V_D^\pi(x_t) - d(x_t)\). In other words, for each \(t \in [0, T]\):
\[ \mathbb{E}_{\mathcal{M}(\pi)}\!\left[ \sum_{k=0}^{T} d(x_k) \,\Big|\, x_0 \right] \le \mathbb{E}_{x_t \sim \delta_{x_0}(P^\pi)^t}\!\left[ \overleftarrow{V}_D^\pi(x_t) + V_D^\pi(x_t) - d(x_t) \right] \le d_0. \]
These are the state-wise constraints that should hold at each step in a given trajectory; we refer to them as the value-based constraints. Satisfying the value-based constraints will automatically satisfy the given CMDP constraints.
²\(\delta_{x_0}\) is a Dirac distribution at \(x_0\), and \(\delta_{x_0}(P^\pi)^t\) is the distribution of states at time \(t\).
This formulation allows us to introduce a policy improvement step, which maintains a safe feasible policy at every iteration by using the previous estimates of the forward and backward value functions³. The policy improvement step is defined by a linear program, which performs a greedy update with respect to the current state-action value function subject to the value-based constraints:
\[ \pi_{k+1}(\cdot|x) = \arg\max_{\pi \in \Pi} \big\langle \pi(\cdot|x),\, Q^{\pi_k}(x, \cdot) \big\rangle, \tag{SPI} \]
\[ \text{s.t.} \quad \big\langle \pi(\cdot|x),\, Q_D^{\pi_k}(x, \cdot) \big\rangle + \overleftarrow{V}_D^{\pi_k}(x) - d(x) \le d_0, \quad \forall x \in \mathcal{X}. \]
Our first result is that the policies obtained by the policy improvement step will satisfy the safety constraints. We write \(\mathrm{TV}(\cdot, \cdot)\) for the total variation metric between distributions. Theorem 3.1 (Consistent Feasibility). Assume that successive policies are updated sufficiently slowly, i.e.
\(\mathrm{TV}(\pi_{k+1}(\cdot|x), \pi_k(\cdot|x)) \le \frac{d_0 - V_D^{\pi_k}(x_0)}{2 D_{MAX} T^2}\).⁴ Then the policy iteration step given by (SPI) is consistently feasible, i.e., if \(\pi_k\) is feasible at \(x_0\) then so is \(\pi_{k+1}\).
It is also possible to consider larger neighbourhoods for updates of successive policies, but at the cost of everywhere-feasibility. For want of space, we present that result in Appendix D.
Next we show that the policy iteration step given by (SPI) leads to monotonic improvement.
Theorem 3.2 (Policy Improvement). Let \(\pi_n\) and \(\pi_{n+1}\) be successive policies generated by the policy iteration step of (SPI). Then \(V^{\pi_{n+1}}(x) \ge V^{\pi_n}(x)\) for all \(x \in \mathcal{X}\). In particular, the sequence of value functions \(\{V^{\pi_n}\}_{n \ge 0}\) given by (SPI) monotonically converges.
Proofs for Theorems 3.1 and 3.2 are given in Appendix D. Finding the sub-optimality gap (if any) remains an interesting question left for future work." }, { "heading": "4 PRACTICAL IMPLEMENTATION CONSIDERATIONS", "text": "" }, { "heading": "4.1 DISCRETE ACTION SPACE", "text": "In discrete action spaces, the problem in (SPI) can be solved exactly as a Linear Programming problem. It is possible to approximate its analytical solution by casting it into the corresponding entropy-regularized counterpart (Neu et al., 2017; Chow et al., 2018). The details of the closed-form solution can be found in Appendix E.
Furthermore, if we restrict the set of policies to be deterministic, then it is possible to have an in-graph solution as well. The procedure then closely resembles the Action Elimination Procedure (Puterman, 2014, Chapter 6), where non-optimal actions are identified as being those which violate the constraints." }, { "heading": "4.2 EXTENSION TO CONTINUOUS CONTROL", "text": "For MDPs with only state-dependent costs, Dalal et al. (2018) proposed the use of safety layers, a constraint projection approach that enables action correction at each step. At any given state, an unconstrained action is selected and is passed to the safety layer, which projects the action to the nearest action (in Euclidean norm) satisfying the necessary constraints. We extend this approach to handle the corrections for actions generated by stochastic policies. When the policy is parameterized with a Gaussian distribution, the safety layer can still be used by projecting both the mean and the standard-deviation vector to the constraint-satisfying hyperplane⁵. In most cases, the standard-deviation vector is kept fixed or independent of the state (Kostrikov, 2018; Dhariwal et al., 2017), which allows us to formulate the problem as the following L2-projection of the mean of the Gaussian in Euclidean space. For \(\mu_\pi(\cdot\,; \theta)\), at any given state \(x \in \mathcal{X}\), the safety layer solves the following projection problem:
\[ \arg\min_{\mu} \left[ \frac{1}{2} \|\mu - \mu_\pi(x)\|^2 \right], \quad \text{s.t.} \quad Q_D^\pi(x, \mu) + \overleftarrow{V}_D^\pi(x) - d(x) \le d_0. \]
³In general, it is not possible to obtain the expectation \(\mathbb{E}_{x_t \sim \delta_{x_0}(P^\pi)^t}[\cdot]\) directly, as it may be intractable to compute or we may not have access to the true transition distributions of the model. Thus, we sample a batch of transitions from the current policy and use them for the updates.
⁴This can be enforced, for example, by constraining iterates to a neighborhood \(D(\pi, \pi_k) \le \delta\). ⁵More information about this claim can be found in Appendix F.
As shown in Dalal et al. (2018); Chow et al. (2019), if the constraints are linear then an analytical solution exists.
In order to get a linearized version of the constraints (and simplify the projection), we can approximate the constraint with its first-order Taylor series at \(\mu = \mu_\pi(x)\):
\[ \arg\min_{\mu} \left[ \frac{1}{2} \|\mu - \mu_\pi(x)\|^2 \right], \tag{5} \]
\[ \text{s.t.} \quad \overleftarrow{V}_D^\pi(x) - d(x) + \underbrace{Q_D^\pi(x, \mu_\pi(x)) + (\mu - \mu_\pi(x))^T \big(\nabla_\mu Q_D^\pi(x, \mu)\big|_{\mu = \mu_\pi(x)}\big)}_{\text{first-order Taylor expansion}} \le d_0. \]
The above objective function is positive-definite and quadratic, and the constraints are linear. Though this problem can be solved by an in-graph QP solver, there exists an analytical solution (see Appendix G): Proposition 4.1. At a given state \(x \in \mathcal{X}\), the solution \(\mu^*\) to Eq. (5) is:
\[ \mu^* = \mu_\pi(x) - \lambda^*(x) \cdot g_{\mu,D}(x), \quad \text{where} \quad g_{\mu,D}(x) = \nabla_\mu Q_D^\pi(x, \mu)\big|_{\mu = \mu_\pi(x)}, \]
\[ \lambda^*(x) = \left( \frac{-\big(d_0 + d(x) - \overleftarrow{V}_D^\pi(x) - Q_D^\pi(x, \mu_\pi(x))\big)}{g_{\mu,D}(x)^T\, g_{\mu,D}(x)} \right)^{\!+}. \]" }, { "heading": "5 RELATED WORK", "text": "Lagrangian-based methods: Initially introduced in Altman (1999), more scalable versions of the Lagrangian-based methods have been proposed over the years (Moldovan & Abbeel, 2012; Tessler et al., 2018; Chow et al., 2015). The general form of the Lagrangian methods is to convert the problem to an unconstrained one via Lagrange multipliers. If the policy parameters are denoted by \(\theta\), then the Lagrangian formulation becomes:
\[ \min_{\lambda \ge 0} \max_{\theta}\, L(\theta, \lambda) = \min_{\lambda \ge 0} \max_{\theta}\, \big[ V^{\pi_\theta}(x_0) - \lambda \big( V_D^{\pi_\theta}(x_0) - d_0 \big) \big], \]
where \(L\) is the Lagrangian and \(\lambda\) is the Lagrange multiplier (penalty coefficient). The main problems with the Lagrangian methods are that the Lagrange multiplier is either a hyper-parameter (without much intuition), or is solved on a lower time-scale. That makes the unconstrained RL problem a three-time-scale⁶ problem, which makes it very difficult to optimize in practice. Another problem is that during the optimization, this procedure can violate the constraints. Ideally, we want a method that can respect the constraint throughout training and not just at the final optimal policy.
Lyapunov-based methods: In control theory, the stability of the system under a fixed policy is computed using Lyapunov functions (Khalil, 1996). A Lyapunov function is a type of scalar potential function that keeps track of the energy that a system continually dissipates. Recently, Chow et al. (2018; 2019) provide a method of constructing Lyapunov functions to guarantee global safety of a behavior policy using a set of local linear constraints. Their method requires knowledge of \(\mathrm{TV}(\pi, \pi^*)\) to guarantee the theoretical claims. They substitute the ideally required Lyapunov function with an approximate solution that requires solving an LP problem at every iteration. For the practical scalable versions, they use a heuristic: a constant Lyapunov function for all states that only depends on the initial state and the horizon. While our method also constructs state-wise constraints, there are two notable differences: a) our assumptions only rely on the current policy candidate and the baseline policy, instead of the baseline and the optimal policy; b) our method does not require solving an LP at every update step to construct the constraint, and as such the only approximation error that is introduced comes from the function approximation.
⁶Classic Actor-Critic is two time-scale (Konda & Tsitsiklis, 2000), and adding a learning schedule over the Lagrangian makes it three time-scale.
Conservative Policy Improvement: Constrained Policy Optimization (CPO) (Achiam et al., 2017) extends the trust-region policy optimization (Schulman et al., 2015) algorithm to satisfy constraints during training as well as after convergence.
CPO is computationally expensive as it uses an approximation to the Fisher Information Matrix, which requires many steps of conjugate gradient descent (\(n_{cg}\) steps) followed by a backtracking line-search procedure (\(n_{ls}\) steps) for each iteration, so it is more expensive by \(O(n_{cg} + n_{ls})\) per update. Furthermore, accurately estimating the curvature requires a large number of samples in each batch (Wu et al., 2017)." }, { "heading": "6 EXPERIMENTS", "text": "We empirically validate our approach on RL benchmarks to measure the performance of the agent with respect to the accumulated return and cost during training in the presence of neural-network-based function approximators. We compare our approach with the respective unconstrained versions, and with the Lyapunov-based approach (Chow et al., 2018; 2019) in each setting. Even though our formulation is based on the undiscounted case, we use discounting with \(\gamma = 0.99\) for estimating the value functions in order to be consistent with the baselines.⁷" }, { "heading": "6.1 STOCHASTIC GRID WORLD", "text": "Motivated by safety in navigation tasks, we first consider a stochastic 2D grid world (Leike et al., 2017; Chow et al., 2018). The agent (green cell in Fig. 1a) starts in the bottom-right corner, the safe region, and the objective is to move to the goal on the other side of the grid (blue cell). The agent can only move to the adjoining cells in the cardinal directions. It gets a reward of +1000 on reaching the goal, and a penalty of −1 at every timestep. There are a number of pits in the terrain (red cells) that represent the safety constraint, and the agent gets a cost of 10 on passing through any pit cell. Occasionally, with probability p = 0.05, a random action will be executed instead of the one selected by the agent. Thus, the task is to reach the goal in the shortest amount of time, while passing through the red cells at most \(d_0/10\) times. The size of the grid is 12 × 12 cells, and the pits are randomly generated for each grid with probability ρ = 0.3. The agent starts at (12, 12) and the goal is selected uniformly at (α, 0), where α ∼ U(0, 12). The threshold \(d_0 = 20\) implies the agent can pass through at most two pits. The maximum horizon is 200 steps, after which the episode terminates.
We use the action elimination procedure described in Sec. 4.1 in combination with n-step SARSA (Rummery & Niranjan, 1994; Peng & Williams, 1994) using neural networks and multiple synchronous agents as in Mnih et al. (2016). We use ε-greedy exploration. The results are shown in Fig. 1 (more experimental details can be found in Appendix H). We observe that the agent is able to respect the safety constraints more adequately than the Lyapunov-based method, albeit at the expense of some decrease in return, which is the expected trade-off for satisfying the constraints." }, { "heading": "6.2 MUJOCO BENCHMARKS", "text": "Based on the safety experiments in Achiam et al. (2017); Chow et al.
(2019), we design three simulated robot locomotion continuous control tasks using the MuJoCo simulator (Todorov et al., 2012) and OpenAI Gym (Brockman et al., 2016): (1) Point-Gather: a point-mass agent (\(\mathcal{S} \subseteq \mathbb{R}^9\), \(\mathcal{A} \subseteq \mathbb{R}^2\)) is rewarded for collecting the green apples and constrained to avoid the red bombs; (2) Safe-Cheetah: a bi-pedal agent (\(\mathcal{S} \subseteq \mathbb{R}^{18}\), \(\mathcal{A} \subseteq \mathbb{R}^6\)) is rewarded for running at high speed, but at the same time constrained by a speed limit; (3) Point-Circle: the point-mass agent (\(\mathcal{S} \subseteq \mathbb{R}^9\), \(\mathcal{A} \subseteq \mathbb{R}^2\)) is rewarded for running along the circumference of a circle in the counter-clockwise direction, but is constrained to stay within a safe region smaller than the radius of the circle.
We integrate our method on top of the A2C algorithm (Mnih et al., 2016) and PPO (Schulman et al., 2017), using the procedure described in Section 4.2. More details about the tasks and network architecture can be found in Appendix I. Algorithmic details can be found in Appendix J. The results with A2C are shown in Fig. 2 and the results with PPO are shown in Fig. 3. We observe that our Safe method is able to respect the safety constraint throughout most of the learning, and with a much greater degree of compliance than the Lyapunov-based method, especially when combined with A2C. The one case where the Safe method fails to respect the constraint is in Point-Circle with PPO (Fig. 3(c)). Upon further examination, we note that the training in this scenario has one of two outcomes: some runs end with the learner in an infeasible set of states from which it cannot recover; other runs end in a good policy that respects the constraint. We discuss solutions to overcome this in the final section.
⁷In practice, the starting states in the episode are unlikely to be distributed according to the stationary distribution \(\eta^\pi\). We still use the initial trajectories to update the estimates nonetheless in our experiments, but we use n-step updates." }, { "heading": "7 DISCUSSION", "text": "We present a method for solving constrained MDPs that respects trajectory-level constraints by converting them into state-dependent value-based constraints, and show how the method can be used to handle safety limitations in both discrete and continuous spaces. The main advantage of our approach is that the optimization problem is more easily solved with value-based constraints, while providing similar guarantees and requiring fewer approximations. The empirical results presented show that our approach is able to solve the tasks with good performance while maintaining safety throughout training. It is important to note that there is a fundamental trade-off between exploration and safety. It is impossible to be 100% safe without some knowledge; in cases where that knowledge is not provided a priori, it must be acquired through exploration. We see this in some of our results (Gridworld, Point-Circle) where our safe policy goes above the constraint in the very early phases of training (all our experiments started from a random policy). We note that the other methods also suffer from this shortcoming. An open question is how to provide initial conditions or a priori knowledge to avoid this burn-in phase. Another complementary strategy to explore is for cases where an agent is stuck in an unsafe or infeasible policy space, where a recovery method (trained by purely minimizing the constraints) could be useful to help the agent recover (Achiam et al., 2017; Chow et al., 2019)."
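For reference, the closed-form safety-layer update of Proposition 4.1 amounts to only a few lines once \(Q_D\), its action gradient, and the BVF estimate are available. The following is a hedged sketch in plain NumPy (argument names are ours, not the paper's code; in practice the gradient g would come from autodiff through the cost critic):

```python
import numpy as np

def safety_layer_projection(mu, q_d, g, bvf, d_cost, d0):
    """Projected mean per Proposition 4.1.

    mu     -- unconstrained policy mean, shape (action_dim,)
    q_d    -- Q_D(x, mu), scalar estimate of future cost from x
    g      -- gradient of Q_D w.r.t. the action, evaluated at mu
    bvf    -- backward value function estimate at x (cost so far)
    d_cost -- immediate cost d(x)
    d0     -- constraint threshold
    """
    eps = d0 + d_cost - bvf - q_d           # slack of the linearized constraint
    lam = max(0.0, -eps / (g @ g + 1e-8))   # ()^+; small guard added by us
    return mu - lam * g
```

The `max(0.0, ...)` implements the \((\cdot)^+\) operator, so the mean is left untouched whenever the linearized constraint is already satisfied, and is shifted along \(-g\) just enough to restore feasibility otherwise.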
}, { "heading": "A REPRODUCIBILITY CHECKLIST", "text": "We follow the reproducibility checklist (Pineau, 2018) and point to relevant sections explaining them here. For all algorithms presented, check if you include:\n• A clear description of the algorithm. The algorithms are explained in Sec. J. Any additional details for Discrete methods are provided in Sec. 4.1, and for continuous Sec. 4.2.\n• An analysis of the complexity (time, space, sample size) of the algorithm. The analytical solution in Eq. (5) consists of a few vector arithmetic and relu operator and as such has the same complexity as the baselines. For the discrete case, with deterministic policies the solution again can be implemented as part of the computation graph, consisting of basic vector arithmetic operations, and has very little additional overhead. For discrete actions with stochastic policies, one needs to sovle the LP problem in (SPI). In that case the complexity is same as the baseline safe-methods (Lyapunov), and is higher than the unconstrained versions. In terms of computation time (for Deep-RL experiments) the newly proposed algorithms are almost identical to the baselines due to its parallelizable nature. We do not make any claims about the sample complexity.\n• A link to a downloadable source code, including all dependencies. The code will be made available after the acceptance of the paper.\nFor any theoretical claim, check if you include:\n• A statement of the result. See the main paper for all the claims we make. Additional details are provided in the Appendix.\n• A clear explanation of any assumptions. See the main paper for all the assumptions. • A complete proof of the claim. See the main paper. The cross references to the proofs in\nthe Appendix have been included in the main paper.\nFor all figures and tables that present empirical results, check if you include:\n• A complete description of the data collection process, including sample size. For the base agent we standard benchmarks provided in OpenAI Gym (Brockman et al., 2016), and rllab (Duan et al., 2016). We use the code from Achiam et al. (2017) for building the Point-Circle and Point-Gather environments.\n• A link to downloadable version of the dataset or simulation environment. See: github.com/openai/gym for OpenAI Gym benchmarks, github.com/jachiam/cpo for rllab based Circle and Gather environments.\n• An explanation of how samples were allocated for training / validation / testing. We do not use a split as we run multiple runs over random seeds to examine the optimization performance.\n• An explanation of any data that were excluded. NA • The range of hyper-parameters considered, method to select the best hyper-parameter\nconfiguration, and specification of all hyper-parameters used to generate results. The default hyper-parameters for the MuJoCo baselines are taken from Kostrikov (2018). The ranges and parameters for Grid experiments are described in Sec. H, and for MuJoCo are described in Sec. I.\n• The exact number of evaluation runs. The number of evaluation runs is mentioned in the caption corresponding to each result.\n• A description of how experiments were run. See Experiments Sec. 6 in the main paper and in the Appendix Sec. H and Sec. I.\n• A clear definition of the specific measure or statistics used to report results. Undiscounted return and cost using the current policy over the horizon are plotted after every 1000 episodes are plotted. We use a linear-filter with 0.7 weight for smoothing. 
We use the smooting algorithm provided by TensorBoard (https://github.com/tensorflow/ tensorboard).\n• Clearly defined error bars. Standard error used in all cases.\n• A description of results with central tendency (e.g. mean) and variation (e.g. stddev). The bold lines in the figure represent the mean, and the shaded region denotes the 80% confidence interval.\n• A description of the computing infrastructure used. We distribute all runs across 10 CPU nodes (Intel(R) Xeon(R) CPU E5-2650 v4) and 1 GPU (GP 100) per run for experiments." }, { "heading": "B BACKWARD VALUE FUNCTIONS", "text": "We have the following result from Proposition 1 from Morimura et al. (2010). We give the proof too for the sake of completeness.\nProposition B.1. Let the forward Markov chain M(π) be irreducible and ergodic, i.e., has a stationary distribution. Then the associated backward Markov chain\n#»B(π) is also ergodic and has the same unique stationary distribution asM(π):\nηπ(x) = #»η π(x), (∀x ∈ X )\nwhere ηπ(x) and #»η π(x) are the stationary distributions ofM(π) and #»B(π).\nProof. Multiply both sides of Eq. (1) by ηπ(xt) and sum over all actions at−1 ∈ A we obtain detailed balance like equations (with respect to time):\n#»P π (xt−1|xt)ηπ(xt) = Pπ(xt, at−1|xt−1)ηπ(xt−1). (∀xt−1 ∈ X , xt ∈ X )\nSum over all possible xt we have:∑ xt∈X #»P π (xt−1|xt)ηπ(xt) = ηπ(xt−1).\nThe above equation indicates that #»B(π) has same stationary distribution asM(π). In the matrix form the above equation can be written as η #» P π = η, that implies that η is stationary distribution with #» P π transition matrix.\nB.1 RELATION BETWEEN FORWARD AND BACKWARD MARKOV CHAINS AND BACKWARD VALUE FUNCTIONS\nProof. We use the technique of Proposition 2 of Morimura et al. (2010) to prove this. Using the Markov property and then substituting Eq. (1) for each term we have:\n#»P π (xt−1, at−1, . . . , xt−K , at−K |xt) = #»P π (xt−1, at−1|xt) . . . #»P π (xt−K , at−K |xt−K+1),\n= Pπ(xt, at−1|xt−1) . . .Pπ(xt−K+1, at−K |xt−K)ηπ(xt−K)\nηπ(xt) ,\n∝ Pπ(xt, at−1|xt−1) . . .Pπ(xt−K+1, at−K |xt−K)ηπ(xt−K).\nThis proves the proposition for finite K. Using the Prop. B.1, K →∞ case is proven too:\nlim K→∞ E #»B (π) [ K∑ k=0 d(xt−k)|xt ] = lim K→∞ EM(π) [ K∑ k=0 d(xt−k)|xt, ηπ(xt−K) ] ,\n= ∑ x∈X ∑ a∈A π(a|x)ηπ(x)d(x).\nB.2 TD FOR BVF\nProof. We use the same technique from Stochastic Shortest Path dynamic programming (Bertsekas et al., 1995, Vol 2, Proposition 1.1) to prove the above proposition. The general outline of the proof is given below, for more details we refer the reader to the textbook.\nWe have, #»T π #» V = d+ #» P π #» V . (Eq. (4) in matrix notation)\nUsing induction argument, we have for all #»\nV ∈ Rn and k ≥ 1, we have:( #»T π )k #» V = ( #» P π )k #» V +\nk−1∑ m=0 ( #» P π )m d,\nTaking the limit, and using the result, limk→∞ ( #» P π )k #» V = 0, regarding proper policies from\nBertsekas et al. (1995, Vol 2, Equation 1.2), we have:\nlim k→∞\n( #»T π )k #» V = lim\nk→∞ k−1∑ m=0 ( #» P π )m d = #» V π ,\nAlso we have by definition:( #» T π )k+1 #» V = d+ #» P π ( #» T π )k #» V ,\nand by taking the limit k →∞, we have: #»\nV π = d+ #» P π #» V π ,\nwhich is equivalent to, #»\nV π = #»T π #» V π .\nTo show uniqueness, note that if #» V = #»T π #» V , then #» V = ( #»T π )k #» V for all k and letting k →∞ we get\n#» V = #» V π .\nC VALUE-BASED CONSTRAINT LEMMA\nLemma C.1. E [∑T k=t d(xk) | x0, π ] ≤ Ext∼δx0 (Pπ)t [ V πD (xt) ] and E [∑t k=0 d(xk) | x0, π ] ≤\nExk∼δx0 (Pπ)t [ #» V π D(xk) ]\nProof. 
Follows since adding more steps to the trajectory (from T − t steps to T ) can only increase the expected total cost. E [∑T k=t d(xk) | x0, π ] = δx0(P π)t (∑T k=t(P π)k ) d ≤\nδx0(P π)t (∑T+t k=t (P π)k ) d = Ext∼δx0 (Pπ)t [ V πD (xt) ] . The backward case is analogous." }, { "heading": "D PROPERTIES OF THE POLICY ITERATION (SPI)", "text": "Theorem D.1. Let σ(x) := TV(πk+1(·|x), πk(·|x)) = (1/2) ∑ a |πk+1(a|x)−πk(a|x)| denote the total variation between policies πk(·|x) and πk+1(·|x). If the policies are updated sufficiently slowly and πk is feasible, then so is πk+1. More specifically:\n(I) If πk is feasible at x0 and σ(x) ≤ d0−V\nπk D (x0)\n2T 2DMAX ∀x then πk+1 is feasible at x0.\n(II) If πk is feasible everywhere (i.e. V πkD (x) ≤ d0 ∀x) and σ(x) ≤ d0−V πk D (x)\n2T maxx′{d0− #» V πk D (x ′)−d(x′)}\n∀x then πk+1 is feasible everywhere.\nWe note that the second case allows the policies to be updated in a larger neighborhood but requires πk to be feasible everywhere. By contrast the first item updates policies in a smaller neighbourhood but only requires feasibility at the starting state.\nProof. Similar to the analysis in Chow et al. (2018). We aim to show that V πk+1D (x0) ≤ d0. For simplicity we consider k = 0, and by induction the other cases will follow. We write P0 = Pπ0 , P1 = P π1 , ∆(a|x) = π1(a|x) − π0(a|x), and P∆ = [∑ a∈A ∆(a|x)P (x′|x, a) ] {x′,x}. Note that (I −P0) = (I −P1 +P∆), and therefore (I −P1 +P∆)(I −P0)−1 = I|X |×|X|. Thus, we find\n(I − P0)−1 = (I − P1)−1(I|X |×|X| + P∆(I − P0)−1).\nMultiplying both sides by the cost vector d one has\nV π0D (x) = E [ T∑ t=0 d(xt) + ε(xt) | π1, x ] ,\nfor each x, where ε(x) = ∑ a∈A ∆(a|x) ∑ x′∈X P (x\n′|x, a)V π0D (x′). Splitting the expectation, we have\nV π1D (x) = V π0 D (x)− E [ T∑ t=0 ε(xt) | π1, x ] For case (I) we note that V π0D (x\n′) ≤ DMAXT and so −2σ(xt)DMAXT ≤ ε(xt) ∀xt. Using σ(xt) ≤ (d0−V πkD )/2DMAXT 2 gives V π1 D (x0) ≤ V π0 D (x0)−2DMAXT 2(d0−V π0 D (x0))/(2DMAXT\n2) = d0, i.e. π0 is feasible at x0. For case (II) we note that V π0D (x) ≤ maxx′{d0 − #» V π0 D (x\n′) − d(x′)} =: Θ since π0 is feasible at every x. As before, we have −2σ(xt)Θ ≤ ε(xt) ∀xt and so V π1D (x) ≤ V π0 D (x) − 2ΘT (d0 − V π0D (x))/(2ΘT ) = d0 ∀x, i.e. π1 is feasible everywhere.\nTheorem D.2. Let πn and πn+1 be successive policies generated be the policy iteration algorithm of (SPI). Then V πn+1 ≥ V πn .\nProof. Note that πn+1 and πn are both feasible solutions of the LP (SPI). Since πn+1 maximizes V π over all feasible solutions, the result follows." }, { "heading": "E ANALYTICAL SOLUTION OF THE UPDATE - DISCRETE CASE", "text": "We follow the same procedure as (Chow et al., 2018, Section E.1) to convert the problem to its Shannon entropy regularized version:\nmax π∈∆\nπ(.|x)T (Q(x, .) + τ log π(.|x)),\ns.t. π(.|x)TQD(x, .) + #» V π D(x)− d(x) ≤ d0, (6)\nwhere τ > 0 is a regularization constant. Consider the Lagrangian problem for optimization:\nmax λ≥0 max π∈∆\nΓx(π, λ) = π(.|x)T (Q(x, .) + λQD(x, .) + τ log π(.|x)) + λ(d0 + d(x)− #» V (x))\nFrom entropy-regularized literature (Neu et al., 2017), the inner λ-solution policy has the form: π∗Γ,λ(.|x) ∝ exp ( −Q(x, .) + λQD(x, .)\nτ\n)\nWe now need to solve for the optimal lagrange multiplier λ∗ at x.\nmax λ≥0 −τ log-sum-exp\n( −Q(x, .) + λQD(x, .)\nτ\n) + λ(d0 + d(x)− #» V D(x)),\nwhere log-sum-exp(y) = log ∑ a exp(ya) is a convex function in y, and objective is a concave function of λ. 
Using KKT conditions, the∇λ gives the solution:\n(d0 + d(x)− #» V D(x))−\n∑ aQD(x, a) exp( ( −Q(x,a)+λQD(x,a)τ ) )∑\na exp( ( −Q(x,a)+λQD(x,a)τ ) )\n= 0\nUsing parameterization of z = exp(−λ), the above condition can be written as polynomial equation in z: ∑\na\n( d0 + d(x)− #» V D(x)−QD(x, a) ) . ( exp(−Q(x, a)\nτ )\n) z QD(x,a) τ = 0\nThe roots to this polynomial will give 0 ≤ z∗(x) ≤ 1, using which one can find λ∗(x) = − log(z∗(x)). The roots can be found using the Newton’s method. The final optimal policy of the entropy-regularized process is then:\nπ∗Γ ∝ exp ( −Q(x, ·) + λ ∗QD(x, ·) τ )" }, { "heading": "F EXTENSION OF SAFETY LAYER TO STOCHASTIC POLICIES WITH GAUSSIAN PARAMTERIZATION", "text": "Consider stochastic gaussian policies parameterized by mean µ(x; θ) and standard-deviation σ(x;φ), and the actions sampled have the form µ(x; θ) + σ(x;φ) , where ∼ N (0, I) is the noise. Here, < µ(x; θ), σ(x;φ) > are both deterministic w.r.t. the parameters θ, φ and x, and as such both of them together can be treated in the same way as deterministic policy (π(x) =< µ(x), σ(x) >). The actual action sampled and executed in the environment is still stochastic, but we have moved the stochasticity fron the policy to the environment. This allows us to define and work with action-value functions QD(x, µπ(x), σπ(x)). In this case, the corresponding projected actions have the form µ′ + σ′ . The main objective of the safety layer (without the constraints) can be further simplified as:\narg min µ′,σ′\nE ∼N (0,I) [ 1\n2 ‖(µ′ + σ′ )− (µπ(x) + σπ(x) )‖\n2 ]\narg min µ′,σ′\nE ∼N (0,I) [ 1\n2 ‖(µ′ − µπ(x)) + ((σ′ − σπ(x)) )‖\n2 ]\narg min µ′,σ′\n1 2 E ∼N (0,I) ‖µ′ − µπ(x)‖2 + ‖(σ′ − σπ(x)) ‖2 + 2 < µ′ − µπ(x), (σ′ − σπ(x)) >︸ ︷︷ ︸ =0,due to linearity of expectation, ∼N (0,I) arg min µ′,σ′ 1 2 ( ‖µ′ − µπ(x)‖ 2 + E ∼N (0,I) [ ‖(σ′ − σπ(x)) ‖ 2 ])\narg min µ′,σ′\n1\n2 ‖µ′ − µπ(x)‖2 + ‖(σ′ − σπ(x))‖2 E ∼N (0,I) [‖ ‖2]︸ ︷︷ ︸ =1,second moment of arg min µ′,σ′ 1 2 ( ‖µ′ − µπ(x)‖ 2 + ‖(σ′ − σπ(x))‖ 2 )\nAs both µπ(.; θ) and σπ(.;φ) are modelled by independent set of parameters (different neural networks, usually) we can solve each of the safety layer problem independently, w.r.t. only those parameters." }, { "heading": "G ANALYTICAL SOLUTION IN SAFETY LAYER", "text": "The proof is similar to the proof of the Proposition 1 of Dalal et al. (2018). We have the following optimization problem:\narg min µ\n[ 1\n2 ‖(µ− µπ(x)‖2\n] ,\ns.t. #» V π\nD(x)− d(x) +QπD(x, µπ(x)) + (µ− µπ(x))T (∇QπD(x, µ)|µ=µπ(x)) ≤ d0\nAs the objective function and constraints are convex, and the feasible solution, µ∗, λ∗, should satisfy the KKT conditions. We define (x) = (d0 + d(x) − #» V π\nD(x) − QπD(x, µπ(x))), and gµ,D(x) = ∇QπD(x, u)|u=µπ(x). Thus, we can write the Lagrangian as:\nL(µ, λ) = 1\n2 ‖(µ− µπ(x)‖2 + λ((µ− µπ(x))T gµ,D(x)− (x))\nFrom the KKT conditions, we get:\n∇µL = µ− µπ(x) + λgµ,D(x) = 0 (7) (µ− µπ(x))T gµ,D(x)− (x) = 0 (8)\nFrom Eq. (7), we have:\nµ∗ = µπ(x)− λ∗(x) · gµ,D(x) (9)\nSubstituting Eq. (9) in Eq. (8), we get:\n−λ∗(x) · gµ,D(x)T gµ,D(x)− (x) = 0\nλ∗ = − (x)\ngµ,D(x)T gµ,D(x)\nWhen the constraints are satisfied ( (x) > 0), the λ should be inactive, and hence we have ()+ operator, that is 0 for negative values." }, { "heading": "H DETAILS OF GRID-WORLD EXPERIMENTS", "text": "H.1 ARCHITECTURE AND TRAINING DETAILS\nWe use one-hot encoding of the agent’s location in the grid as the observation, i.e. x is a binary vector of dimension R12×12. 
The agent is trained for 200k episodes, and the current policy’s performance is evaluated after every 1k episodes.
The same three-layer neural network architecture is used for state encoding in all the different estimators. The feed-forward neural network has hidden layers of size 64, 64, 64, with relu activations. For the state-action value based estimators, the last layer is a linear layer with 4 outputs, one for each action. For value function based estimators, the last layer is a linear layer with a single output.
We use the Adam optimizer for training all the estimators. A learning rate of 1e-3 was selected for all the reward-based estimators and a learning rate of 5e-4 was selected for all the cost-based estimators. The same range of learning rates was considered for all estimators, i.e. {1e-5, 5e-5, 1e-4, 5e-4, 1e-3, 5e-3, 1e-2, 5e-2, 1e-1}.
We use an n-step trajectory length in A2C with n = 4, i.e., trajectories of length n were collected and the estimators were updated via the TD errors based on them. We use 20 parallel agents in all the experiments. The range of parameters considered was n ∈ {1, 4, 20}. The same value of n was used for all the baselines." }, { "heading": "I DETAILS OF THE MUJOCO EXPERIMENTS", "text": "I.1 ENVIRONMENTS DESCRIPTION
• Point-Gather: The environment (Fig. 4c) is taken from Achiam et al. (2017), where the point-mass agent gets a reward of +10.0 for collecting a green apple, and a cost of 1 for collecting a red bomb. Two apples and eight bombs are spawned randomly at the start of each episode. The constraints are defined over the number of bombs collected over the episode. The episode horizon is 15 and the threshold is \(d_0 = 4\).
• Safe-Cheetah: This environment (Fig. 4b) is taken from Chow et al. (2019). A bi-pedal agent (HalfCheetah-v0) is augmented with speed safety constraints. The agent gets a reward based on the speed with which it runs, and the constraint is defined on the speed being less than 1, i.e., it incurs a constraint cost based on \(\mathbb{1}[|v| > 1]\), where \(v\) is the velocity at the state. The maximum length of the episode is 200 and the constraint threshold is \(d_0 = 50\).
• Point-Circle: This environment (Fig. 4a) is taken from Achiam et al. (2017). The point-mass agent is rewarded for running along the circumference of a circle of radius 15 in the counter-clockwise direction, with the reward and cost functions:
\[ R(s) = \frac{v^T [-y, x]}{1 + \big|\, \|[x, y]\|_2 - 15 \,\big|}, \qquad C(s) = \mathbb{1}\big[ |x| > 2.5 \big], \]
where \(x, y\) are coordinates in the plane and \(v\) is the velocity. The length of the episode is 65 and the constraint threshold is \(d_0 = 10.0\).
I.2 NETWORK ARCHITECTURE AND TRAINING DETAILS
The architecture and the training procedure are based on the open-source implementations (Kostrikov, 2018). All the value based estimators use a network architecture of 2 hidden layers of size 200 and 50 hidden units with tanh non-linearity, followed by a linear layer with a single output. For the actor, we model the mean using a network architecture of 2 hidden layers of size 100 and 50 hidden units with tanh non-linearity, followed by a linear layer with the dimensions of the action space and tanh non-linearity. For \(Q(x, \mu)\) we also use a 2-layer neural network with 200 and (50 + action-dimension) hidden units and tanh non-linearity. We concatenate the mean in the second layer, and add a linear layer with a single output in the end.
Entropy regularization with β = 0.001 was used for all the experiments and the baselines. The trajectory length varied across environments.
For PPO GAE with λ = 0.95 was used for every algorithm. 20 parallel actors were used for every algorithm for each experiment. We searched the trajectory length hyper-parameter in the range 5,20,100 for every environment. We used the trajectory length of 1000 over which the samples are collected for PPO, for all environments. For the A2C experiments, for SafeCheetah trajectory length of 5 is used and for the rest 20 is used.\nWe use Adam Optimizer for training all the estimators. The learning rate of the critic is always 0.5 the learning rate of the actor. For the cost estimators, the same learning rate was used for forward and backward estimators. The same range of learning rate parameters for considered for all estimators i.e. {1e-5, 5e-5, 1e-4, 5e-4, 1e-3, 5e-3, 1e-2, 5e-2, 1e-1}.\nI.3 OTHER DETAILS\nAs we mentioned in Sec. 7, due to exploration the agent can potentially end up being in an infeasible policy space. To prevent that from happening a recovery policy (or safe-guard policy) (Achiam et al., 2017; Chow et al., 2019) is used to recover back to the feasible policy space. We run the experiments with and without the use of recovery policies (in the same procedure as the baselines), and chose the run that performs the best. We noticed that, empirically, for our approach recovery policies are only required for Point-Circle environments, as the agent has much more probability of being stuck in the constraint space.\nIn order to take error due to function approximation into account, Achiam et al. (2017) use costshaping to smooth out the sparse constraint, and Chow et al. (2019) use a relaxed threshold, i.e. d0 · (1 − δ), instead of d0, where δ ∈ (0, 1). We run experiments with δ = {0.0, 0.2} for each algorithms, and use the best among them. We found that empirically, only for Safe-Cheetah δ = 0.2 works better compared to δ = 0.0." }, { "heading": "J ALGORITHM DETAILS", "text": "J.1 N-STEP SYNCHRONOUS SARSA\nThe algorithm for n-step Synchronous SARSA is similar to the n-step Asynchronous Q-learning of Mnih et al. (2016), except that it uses SARSA instead of Q-learning, is synchronous, and instead of greedy maximization step of -greedy we use (SPI). When working with discrete actions and deterministic policies, this can be solved as part of the computation-graph itself. The algorithm is presented in Alg. 1.\nJ.2 A2C\nIn Actor Critic (Konda & Tsitsiklis, 2000) algorithms, the parameterized policy (actor) is denoted by π(a|x; θ), and is updated to minimizing the following loss:\nL(θ) = E[− log π(at|xt; θ)(rt + γV π(xt+1 − Vxt))]\nThe algorithm for A2C with Safety Layer given by Eq. (5) is similar to the Synchronous version of Actor-Critic (Mnih et al., 2016), except that it has estimates for the costs and safety-layer. Note that due to the projection property of the safety layer, it is possible to sample directly from the projected mean. Also, as the projection is a result of vector products and max, it is differentiable and and computed in-graph (via relu). The algorithm is presented in Alg. 2.\nJ.3 PPO\nThe PPO algorithm build on top of the Actor-Critic algorithm and is very similar to Algorithm 2. 
The main difference is how the PPO loss for the actor is defined as:\nLCLIP (θ) = E[min(ρt(θ)At, clip(ρt(θ), 1− , 1 + )At)],\nwhere the likelihood ration is ρt(θ) = πθ(at|xt) πθold (at|xt)\n, with πold being the policy parameters before the update, < 1 is a hyper-parameters that controls the clipping and At is the generalized advantage estimator:\nA GAE(λ,γ) t = T−1∑ k=0 (λγ)kδV π t+k,\nAlgorithm 1 Synchronous n-step SARSA\nInput: θ parameters forQ(x, .; θ), θD parameters forQD(x, .; θD),φD parameters for #»\nV D(x;φD); π0 initial feasible policy. for episode e ∈ 1, ...,M do\nAdd the initial state to the trajectory buffer τ ← {x0} t← 1 while t < T do:\ntstart ← t while t < t+ n or t == T do\nSelect at using (SPI), execute at, observe xt+1 and reward rt and cost dt. Add experiences to a buffer, i.e., τ ← (at, rt, dt, xt+1). t← t+ 1\nend while Calculate the next action for xt+1 using the current policy estimates, at+1 Bootstrap the targets:\nR← {\n0 if t == T Q(xt+1, at+1; θ) otherwise\nRD ← {\n0 if t == T QD(xt+1, at+1; θD) otherwise\n#»\nRD ←\n{ 0 if t == 0\n#»\nV (xtstart−1;φD) otherwise\n. Calculate the targets for the transitions in buffer for i ∈ {t− 1, . . . , tstart} do\nR← ri + γR RD ← di + γRD Accumulate the gradients wrt θ, θD:\ndθ ← dθ + ∂(R−Q(xi, ai; θ)) 2\n∂θ\ndθD ← dθD + ∂(RD −QD(xi, ai; θD))2\n∂θD\nend for for i ∈ {tstart, . . . , t} do\n#» RD ← di + γ #»\nRD Accumulate the gradients wrt φD:\ndφD ← dφD + ∂(\n#» RD − #» V D(xi;φD)) 2\n∂φD\nend for Do synchronous batch update with the accumulated gradients to update θ, θD, φD using\ndθ, dθD, dφD. end while Empty the trajectory buffer, τ end for\nAlgorithm 2 Synchronous A2C with Safety Layer Input: θ parameters for π(x; θ), φ the parameters for V (x;φ), θD parameters for QD(x, µ; θD), φD parameters for #»\nV D(x;φD); for episode e ∈ 1, ...,M do\nAdd the initial state to the trajectory buffer τ ← {x0} t← 1 while t < T do:\ntstart ← t while t < t+ n or t == T do\nSelect at using sampling from the projected mean µt via the safety layer Eq.(5), execute at, observe xt+1 and reward rt and cost dt.\nAdd experiences to a buffer, i.e., τ ← (at, µt, rt, dt, xt+1). t← t+ 1\nend while Calculate the next mean for xt+1 using the current policy estimates, µt+1 Bootstrap the targets:\nR← {\n0 if t == T V (xt+1, at+1;φ) otherwise\nRD ← {\n0 if t == T QD(xt+1, µt+1; θD) otherwise\n#»\nRD ←\n{ 0 if t == 0\n#»\nV (xtstart−1;φD) otherwise\n. Calculate the targets for the transitions in buffer for i ∈ {t− 1, . . . , tstart} do\nR← ri + γR RD ← di + γRD Accumulate the gradients w.r.t. θ, φ, θD:\ndθ ← dθ +∇θ log π(ai | xi; θ)(R− V (xi;φ))\ndφ← dφ+ ∂(R− V (xiφ)) 2\n∂φ\ndθD ← dθD + ∂(RD −QD(xi, µi; θD))2\n∂θD\nend for for i ∈ {tstart, . . . , t} do\n#» RD ← di + γ #»\nRD Accumulate the gradients wrt φD:\ndφD ← dφD + ∂(\n#» RD − #» V D(xi;φD)) 2\n∂φD\nend for Do synchronous batch update with the accumulated gradients to update θ, φ, θD, φD using\ndθ, dφ, dθD, dφD. end while Empty the trajectory buffer, τ end for\nwhere T is the maxmimum number of timestamps in an episode trajectory, and δj denotes the TD error at j. The value function is updated using the γλ-returns from the GAE:\nL(φ) = E[(V π(x;φ)− (V π(x;φold) +At))2].\nSimilar to the the forward value estimates the backward value estimates are defined in the similar sense. 
One way to think of it is to assume the trajectories are reversed and that we are doing regular GAE estimation for the value functions.\nThe GAE updates for the regular value function can be seen in λ-operator form as:\nT_λ^π v^π = (I − γλP^π)^{−1}(r^π + γP^π v^π − v^π) + v^π.\nIn a similar spirit, it can be shown that the λ-operator for SARSA has the form:\nT_λ^π q^π = (I − λγP^π)^{−1}(T^π q^π − q^π) + q^π,\nwhere (T^π q^π − q^π) denotes the TD error. Thus, the GAE estimates can be applied to the Q-functions in a similar form, i.e.\nB_t^{GAE(λ,γ)} = Σ_{k=0}^{T−1} (λγ)^k δ_{t+k}^{Q_D^π},\nL(θ_D) = E[(Q_D^π(x, a; θ_D) − (Q_D^π(x, a; θ_{D,old}) + B_t))²]." } ]
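To make the preceding estimators concrete, below is a minimal NumPy sketch of the generalized advantage computation used by the PPO loss above, applied to the reward critic (forward) and, by reversing the trajectory, to a backward cost estimator. Function and variable names (compute_gae, rewards, values, etc.) are illustrative assumptions, not the paper's released implementation.

```python
import numpy as np

def compute_gae(deltas, gamma, lam):
    """Accumulate discounted TD errors: A_t = sum_k (gamma*lam)^k * delta_{t+k}."""
    advantages = np.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(deltas))):
        running = deltas[t] + gamma * lam * running
        advantages[t] = running
    return advantages

# Hypothetical rollout data: rewards and value predictions V(x_0..x_T).
gamma, lam = 0.99, 0.95
rewards = np.array([0.1, 0.0, 0.5, 1.0])
values = np.array([0.4, 0.3, 0.6, 0.9, 0.0])  # values[-1]=0 bootstraps a terminal state

# TD errors delta_t = r_t + gamma*V(x_{t+1}) - V(x_t), then forward GAE.
deltas = rewards + gamma * values[1:] - values[:-1]
adv = compute_gae(deltas, gamma, lam)

# Backward cost estimate: the same recursion applied to the time-reversed cost TD errors.
costs = np.array([0.0, 1.0, 0.0, 0.0])
cost_values = np.array([0.2, 0.5, 0.3, 0.1, 0.0])
back_deltas = (costs + gamma * cost_values[:-1] - cost_values[1:])[::-1]
back_adv = compute_gae(back_deltas, gamma, lam)[::-1]

# PPO clipped surrogate for one batch of likelihood ratios rho_t.
def ppo_loss(rho, adv, eps=0.2):
    return -np.mean(np.minimum(rho * adv, np.clip(rho, 1 - eps, 1 + eps) * adv))
```

The same compute_gae helper serves both directions, which mirrors the λ-operator argument above: reversing the trajectory turns the backward estimator into an ordinary GAE problem.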
2019
null
SP:99e7452e7b7c5a1071af9370aa61acad39f99833
[ "The paper studies the effect of various data augmentation methods on image classification tasks. The Authors propose the Structural Similarity (SSIM) as a measure of the magnitude of the various types of data augmentation noise they consider. The Authors argue that SSIM is superior to PSNR as a measure of the intensity of the noise, across various noise types.", "This paper aims at analyzing the effect of injecting noise to images as data augmentation in training CNN for the image classification task. Based on the SSIM metric (which is shown to be a better metric than PSNR), different noise level on a set of different kinds of noise are explored. Experimental results on two sub-datasets of ImageNet suggest that Speckle noise would lead to better CNN models." ]
Noise injection is a fundamental tool for data augmentation, and yet there is no widely accepted procedure to incorporate it with learning frameworks. This study analyzes the effects of adding or applying different noise models of varying magnitudes to Convolutional Neural Network (CNN) architectures. Noise models that are distributed with different density functions are given common magnitude levels via the Structural Similarity (SSIM) metric in order to create an appropriate ground for comparison. The basic results conform with most of the common notions in machine learning, and the study also introduces some novel heuristics and recommendations on noise injection. The new approaches will provide a better understanding of optimal learning procedures for image classification.
[]
[ { "authors": [ "Ismail Avcibas", "Bulent Sankur", "K. Sayood" ], "title": "Statistical evaluation of quality measures. j electron imaging", "venue": "J. Electronic Imaging, 11:206–223,", "year": 2002 }, { "authors": [ "Yoshua Bengio", "Frédéric Bastien", "Arnaud Bergeron", "Nicolas Boulanger-Lewandowski", "Thomas M. Breuel", "Youssouf Chherawala", "Moustapha Cissé", "Myriam Côté", "Dumitru Erhan", "Jeremy Eustache", "Xavier Glorot", "Xavier Muller", "Sylvain Pannetier Lebeuf", "Razvan Pascanu", "Salah Rifai", "Franois Savard", "Guillaume Sicard" ], "title": "Deep learners benefit more from out-of-distribution examples", "venue": "In AISTATS,", "year": 2011 }, { "authors": [ "Alan C. Bovik" ], "title": "Handbook of Image and Video Processing (Communications, Networking and Multimedia)", "venue": null, "year": 2005 }, { "authors": [ "Stefan Braun", "Dan Neil", "Shih-Chii Liu" ], "title": "A curriculum learning method for improved noise robustness in automatic speech recognition", "venue": null, "year": 2016 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Terrance Devries", "Graham W. Taylor" ], "title": "Improved regularization of convolutional neural networks with cutout", "venue": "ArXiv,", "year": 2017 }, { "authors": [ "J. Ding", "B. Chen", "H. Liu", "M. Huang" ], "title": "Convolutional neural network with data augmentation for sar target recognition", "venue": "IEEE Geoscience and Remote Sensing Letters,", "year": 2016 }, { "authors": [ "Tobias Domhan", "Jost Tobias Springenberg", "Frank Hutter" ], "title": "Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves", "venue": "In Proceedings of the 24th International Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Shixiang Gu", "Luca Rigazio" ], "title": "Towards deep neural network architectures robust to adversarial examples", "venue": null, "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "Computer Vision – ECCV", "year": 2016 }, { "authors": [ "L. Holmstrom", "P. Koistinen" ], "title": "Using additive noise in back-propagation training", "venue": "IEEE Transactions on Neural Networks,", "year": 1992 }, { "authors": [ "Alain Horé", "Djemel Ziou" ], "title": "Image quality metrics", "venue": "Psnr vs. ssim. pp. 2366–2369,", "year": 2010 }, { "authors": [ "Sarfaraz Hussein", "Robert Gillies", "Kunlin Cao", "Qi Song", "Ulas Bagci" ], "title": "Tumornet: Lung nodule characterization using multi-view convolutional neural network with gaussian process", "venue": "CoRR, abs/1703.00645,", "year": 2017 }, { "authors": [ "Q. Huynh-Thu", "M. Ghanbari" ], "title": "Scope of validity of psnr in image/video quality assessment", "venue": "Electronics Letters,", "year": 2008 }, { "authors": [ "J. Kim", "A. Nguyen", "S. Lee" ], "title": "Deep cnn-based blind image quality predictor", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2019 }, { "authors": [ "Micha Koziarski", "Boguslaw Cyganek" ], "title": "Image recognition with deep neural networks in presence of noise dealing with and taking advantage of distortions", "venue": "Integrated Computer-Aided Engineering, 24:1–13,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E. 
Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1,", "year": 2012 }, { "authors": [ "Maria Saiz", "Bjrn-Helge Mevik", "V.H. Segtnan", "T. Ns" ], "title": "Ensemble methods and data augmentation by noise addition applied to the analysis of spectroscopic data", "venue": "Analytica Chimica Acta,", "year": 2005 }, { "authors": [ "Peter Westfall" ], "title": "Kurtosis as peakedness, 1905-2014", "venue": "rip. The American Statistician,", "year": 2014 }, { "authors": [ "Shi Yin", "Chao Liu", "Zhiyong Zhang", "Yiye Lin", "Dong Wang", "Javier Tejedor", "Fang Zheng", "Yinguo Li" ], "title": "Noisy training for deep neural networks in speech recognition", "venue": "EURASIP Journal on Audio, Speech, and Music Processing, 2015,", "year": 2015 }, { "authors": [ "Zhou Wang", "A.C. Bovik", "H.R. Sheikh", "E.P. Simoncelli" ], "title": "Image quality assessment: from error visibility to structural similarity", "venue": "IEEE Transactions on Image Processing,", "year": 2004 } ]
[ { "heading": "1 INTRODUCTION", "text": "Convolutional Neural Networks (CNNs) find an ever-growing field of application throughout image and sound processing tasks, since the success of AlexNet (Krizhevsky et al., 2012) in the 2012 ImageNet competition. Yet, training these networks still keeps the need of an ”artistic” touch: even the most cited state-of-the-art studies employ wildly varying set of solvers, augmentation and regularization techniques (Domhan et al., 2015). In this study, one of the crucial data augmentation techniques, noise injection, will be thoroughly analysed to determine the correct way of application on image processing tasks.\nAdding noise to the training data is not a procedure that is unique to the training of neural architectures: additive and multiplicative noise has long been used in signal processing for regression-based methods, in order to create more robust models (Saiz et al., 2005). The technique is also one of the oldest data augmentation methods employed in the training of feed forward networks, as analysed by Holmstrom & Koistinen (1992), yet it is also pointed out in the same study that while using additive Gaussian noise is helpful, the magnitude of the noise cannot be selected blindly, as a badly-chosen variance may actually harm the performance of the resulting network (see Gu & Rigazio (2014) and Hussein et al. (2017) for more examples).\nThe main reasons for noise injection to the training data can be listed as such in a non-excluding manner: first of all, injection of any noise type makes the model more robust against the occurrence of that particular noise over the input data (see Braun et al. (2016) and Saiz et al. (2005) for further reference), such as the cases of Gaussian additive noise in photographs, and Gaussian-Poisson noise on low-light charge coupled devices (Bovik, 2005). Furthermore, it is shown that the neural networks optimize on the noise magnitude they are trained on (Yin et al., 2015). Therefore, it is important to choose the correct type and level of the noise to augment the data during training.\nAnother reason for noise addition is to encourage the model to learn the various aspects of each class by occluding random features. Generally, stochastic regularization techniques embedded inside the neural network architectures are used for this purpose, such as Dropout layers, yet it is also possible to augment the input data for such purposes as in the example of ”cutout” regularization proposed by Devries & Taylor (2017). The improvement of the generalization capacity of a network is highly correlated with its performance, which can be scored by the accuracy over a predetermined test set.\nThere has been similar studies conducted on the topic, with the example of Koziarski & Cyganek (2017) which focuses on the effects of noise injection on the training of deep networks and the possible denoising methods, yet they fail to provide a proper methodology to determine the level of\nnoise to be injected into the training data, and use PSNR as the comparison metric between different noise types which is highly impractical (see Section 3). 
To resolve these issues, this study focuses on the ways to determine which noise types to combine the training data with, and at which levels, in addition to the validity of active noise injection techniques, while experimenting on a larger set of noise models.\nIn the structure of this work, the effect of injecting different types of noise into images for varying CNN architectures is assessed based on their performance and noise robustness. Their interaction and relationship with each other are analyzed over (also noise-injected) validation sets. Finally, as a follow-up study, proper ways of adding or applying noise to a CNN for image classification tasks are discussed." }, { "heading": "2 DIFFERENT TYPES OF NOISE", "text": "Noise can be - somewhat broadly - defined as an unwanted component of the image (Bovik, 2005). It can be sourced from the environment in which the image is taken, the device utilized to take the image, or the medium of communication that is used to convey the image information from the source to the receiver. According to its properties and nature, noise and image can be analytically decomposed as additive or multiplicative, but some noise types cannot be described by either of these classes." }, { "heading": "2.1 ADDITIVE NOISE", "text": "Let f(x) denote an image signal. This signal can be decomposed into two components in an additive manner as f(x) = g(x) + n(x), with g(x) denoting the desired component of the image, and n(x) standing for the unwanted noise component. The most commonly encountered variant of this noise class is Gaussian noise, whose multivariate probability density function can be written as:\nf_n(x) = (1 / √((2π)^n |Σ|)) exp(−(1/2)(x − m)^T Σ^{−1} (x − m))    (1)\nwhere m and Σ denote the n-dimensional mean vector and the symmetric covariance matrix with rank n, respectively. In images, the mean vector m is generally zero, therefore the distribution is centered and the magnitude is controlled by the variance. This study also follows these assumptions." }, { "heading": "2.2 MULTIPLICATIVE NOISE", "text": "Again, let f(x) denote an image signal. This signal can also be decomposed into respective desired and noise components as f(x) = g(x)(1 + n(x)). The noise component in this model is called multiplicative noise. The most common variant in this case is called speckle noise, which may have different density functions, and in this study Gaussian is assumed. Similarly to the additive noise, the mean is assumed to be 0 and the magnitude refers to the variance.\nSpeckle noise can be encountered in coherent light imaging, such as in the cases of SAR images (Ding et al., 2016) and images with laser-based illumination, but it may also be observed in other digital images (Bovik, 2005)." }, { "heading": "2.3 OTHER TYPES OF NOISE", "text": "There exist many other noise types that cannot be modeled by additive or multiplicative decompositions. The most common of these types, whose effects on the performance of CNNs are also analysed in this study, are listed below.\nSalt and pepper (S&P) noise. This noise manifests itself as a basic image degradation, for which only a few pixels in an image are noisy, but they are extremely noisy in the sense that pixels become either completely black or white. The magnitude of this noise is defined by the probability of a pixel becoming completely black (i.e. pepper), completely white (i.e. salt), or staying unchanged. The probabilities for the pepper and
The probabilities for pepper and\nsalt cases are assumed to be equal, and total probability of the degradation of a pixel is referred as the the magnitude of the noise.\nPoisson noise. Also referred as photon counting noise or shot noise, this noise type has a particular probability density function:\nfn(k) = e−λλk\nk! (2)\nwhere λ stands for both the variance and the mean of the distribution. As Poisson noise is signaldependant, it does not have a direct magnitude parameter similar to other noise types, therefore a magnitude factor c is used that divides the intensity values of all pixels from which the distribution is sampled, and returned to the original range by multiplying to the same factor.\nOcclusion noise. Although it is not generally referred as a noise type, occlusion of important features can happen for a number of reasons in an image: image may be cropped, damaged or a particular object in it may be hidden by an obstacle such as a tree. This noise is realized with zerointensity squares appearing on the image, and the magnitude is determined according to the size of the square as the shape of the occluding object does not have a large impact on the final performance of the model (Devries & Taylor, 2017).\nAs listed in this section, five different types of noise and their various combinations are added or applied to the images, with varying magnitudes. Robustness against each of these noise types is also assessed." }, { "heading": "3 CHOOSING THE RIGHT METRIC: PSNR VS SSIM", "text": "Different types of noise are easy to compare when they are sampled from the same distribution, such as in the case of additive Gaussian noise and speckle noise. However, it is sometimes impossible to assess two different metrics in the same context because of varying magnitude parameters and sensitivity of the model to each of these types. In this case, it becomes imperative to use an intermediary metric that will ease the comparison process for noise types.\nIn general, Peak Signal-to-Noise Ratio (PSNR), and similarly Mean Squared Error (MSE) are the most commonly used quality metrics in the image processing field. For an 8-bit two-dimensional MxN image f̂(n1, n2) and its noise-free counterpart f(n1, n2), the MSE is defined as\nMSE = 1\nMN M−1∑ n1=0 N−1∑ n2=0 [f(n1, n2)− f̂(n1, n2)]2. (3)\nFrom above definition, PSNR (in dB) can be derived.\nPSNR = 10 log10\n{ 2552\nMSE\n} (4)\nThere are several limitations of using PSNR as the image quality metric of a data set: it is shown that PSNR loses its validity as a quality metric when the content and/or codec of the images are different as in that case the correlation between subjective quality and PSNR is highly reduced (Huynh-Thu & Ghanbari, 2008). Also, even though the sensitivity of PSNR to Gaussian noise is very high, the metric is unable to present similar performance for different types of perturbation (Avcibas et al., 2002) (Horé & Ziou, 2010).\nThere exists another widely accepted image quality metric called Structural Similarity (SSIM), which resolves or alleviates some of the above-listed problems. The Structural Similarity between two non-negative image signals x and y, whose means, standard deviations and covariance are denoted by µx, µy , σx, σy and σxy respectively, can be expressed as:\nSSIM = (2µxµy + C1)(2σxy + C2)\n(µ2x + µ 2 y + C1)(σ 2 x + σ 2 y + C2)\n(5)\nwhere C1 and C2 are constants to avoid instability (Zhou Wang et al., 2004). 
This metric combines luminance, contrast and structure information in order to provide an assessment of similarity in the range from 0 to 1, where 1 stands for the highest similarity.\nIn image classification tasks, the objective is mostly to classify the depicted objects or beings according to human perception. Therefore, it is crucial for the used metric to be consistent with human opinions: it has been shown in several studies that SSIM provides a quality metric closer to the average public opinion than PSNR (Kim et al., 2019; Zhou Wang et al., 2004).\nFurthermore, when sampled from the same noise distributions over identical images of a data set, the distribution of PSNR values of the noisy images has significantly higher kurtosis than their SSIM counterparts for every noise type except S&P. This behavior is demonstrated over a subset of the ImageNet dataset (Deng et al., 2009) called Imagewoof (can be accessed via github.com/fastai/imagenette), with 10 classes and a total of 12,454 images in the training set. Each type of noise listed in Section 2 is applied to each image, sampling from the same distribution, and quality metrics are recorded. The utilized noise magnitudes are 0.2 variance for Gaussian noise, 0.2 variance for speckle noise, 0.05 total probability for S&P noise, 0.2 scaling factor for Poisson noise, and 0.3 relative side length for occlusion noise, which are all chosen to provide sensible values from the metrics. The distribution of the metrics can be seen in Figure 1 over the axes of each subfigure, and the kurtosis values are noted in Table 1. This is interpreted as PSNR having a propensity to produce more outliers than SSIM for the same levels of noise (Westfall, 2014).\nFor the reasons listed above, SSIM will be used as the primary metric throughout this study." }, { "heading": "4 EXPERIMENTS AND RESULTS", "text": "The effects of injecting different noise types into the training data are evaluated for different magnitudes and types of noise, as listed in Section 2. The chosen datasets are two different subsets of the ImageNet dataset, namely Imagenette and Imagewoof, each consisting of 10 different classes with 12894 and 12454 training samples respectively and 500 test samples (both can be accessed via github.com/fastai/imagenette). The former dataset contains ten easily classified classes of the original set, while the latter task is the classification of ten different dog breeds and requires the network to successfully learn the particularly small features of the classes. The image data range is henceforth from 0 to 1.\nIn order to select the magnitudes of each noise component, a sweep of mean SSIM (MSSIM) over an array of noise magnitudes for each noise type is conducted over 200 images of the Imagewoof dataset. The resulting graph can be seen in Figure 2. Very similar results are also observed when the same procedure is conducted on the Imagenette dataset. According to the shapes of the curves, a quadratic polynomial is fitted to the relative side length of occlusion noise, and logarithmic polynomials are fitted to the rest. The look-up table for the fittings can be seen in Table 2; these are also the degradations applied in the experiments. An exemplary application of the noise can be seen in Figure 3.\nAs the training model, an 18-layer deep residual network (ResNet18V2) as proposed by He et al. (2016) is chosen, because it is a well-known architecture and also sufficiently deep: Bengio et al.
(2011) demonstrate that the performance of deep networks is more sensitive to data augmentation than that of their more shallow counterparts. The residual connections and layering structure of ResNet18V2 exhibit architectural properties similar to those of the most often utilized CNNs in the field.\nThe Adam solver with a learning rate of 1e-4 is preferred for training. Models are trained for 20 epochs. No dropout or weight decay is used for regularization purposes, and Batch Normalization layers are used in accordance with He et al. (2016).\nFigure 2: MSSIM vs Noise\nMSSIM   Gaussian var.   Speckle var.   S&P prob.   Poisson magn.   Occlus. length
0.25    0.0341          0.3355         0.1288      0.1046          1.1908
0.5     0.0085          0.0515         0.0461      0.0222          0.9078
0.7     0.0028          0.0115         0.0203      0.0064          0.6308
0.8     0.0016          0.0054         0.0134      0.0035          0.4753
0.9     0.0009          0.0026         0.0089      0.0019          0.3086
Table 2: Look-up table for the noise levels and MSSIM values" }, { "heading": "4.1 COMPARISON OF NOISY AND VANILLA TRAINING", "text": "The chosen CNN architecture is trained with noise injected into the training data for all noise models described in Section 2, for magnitudes corresponding to the respective MSSIM values from Table 2. A total of 52 networks are trained (25 on noisy data and 1 on original data for each dataset). The categorical accuracy of each trained network on the validation set can be seen in Figures 4 and 5, and the accuracies of the CNNs, depending on the noise they are trained with, on the noisy test sets can be observed in Figure 8. The latter results can be considered a robustness test of the trained networks. One of the most important features of these heatmaps, the robustness of the models against the noise injected into their input, is also plotted for each dataset individually in Figures 6 and 7." }, { "heading": "5 DISCUSSION", "text": "This set of results confirms several notions from the literature: first of all, there exists a trade-off between noise robustness and clean set accuracy. Yet contrary to the common notion, we believe that the data presents a highly valid optimum for this exchange in our study. As can be seen from Figures 6 and 7, in order to create a model that is robust against a particular kind of noise while maintaining its performance, one must apply a level of degradation that results in 0.8 MSSIM over the training data. We believe that as long as the noise or perturbation is somewhat homogeneously distributed, this rule of thumb will hold for all image classification tasks. However, the same thing cannot be said for non-homogeneously distributed noise models, as SSIM (and also PSNR, as demonstrated in Section 3) fails to capture the level of degradation appropriately for such a verdict (see the occlusion results in Figures 6 and 7).\nA second confirmation of the current literature is the fact that neural networks optimize for the noise level they are trained with, as seen again in Figures 6 and 7, and also in the diagonals of Figure 8. Yet, the level of this optimization is quite small after 0.5 MSSIM, featuring similar robustness for each trained model. Therefore, it is not particularly necessary to determine the noise level of
Yet, the performance data and the lack of robustness the other models exhibit towards this particular noise type shows that ”cutout” regularization as presented by Devries & Taylor (2017) is a crucial part of data augmentation in addition to any other perturbation or noise injection technique. A way to further extend the contribution of this method would be to alternate the intensity level of the patches from 0 to 255 for 8-bit images, which can be a topic of another research.\nFor the rest of the noise types; Gaussian, speckle and Poisson noises are observed to increase the performance of the model while boosting the robustness, and their effects exhibit the possibility of interchangeable usage. For image classification tasks involving RGB images of daily objects, injection of only one of these noise types with above-mentioned level is believed to be sufficient as repetition of the clusters can be observed in Figure 8. Among these three, Gaussian noise is recom-\nmended considering the results of model performance. S&P noise contamination, on the other hand, may not be resolved by injection of the former noise types as the other models are not sufficiently robust against it. Therefore, at this point one of the two methodologies are suggested: either S&P noise can be removed by simple filtering techniques, or S&P noise can be applied in an alternating manner with Gaussian noise during data augmentation. Former approach is recommended for the simplicity of the training procedure.\nThe constant behaviour of the models towards occlusion noise in Figures 6, 7 and 8 unfortunately does not have a satisfactory explanation, despite several diagnostics of the training procedure. A longer training procedure, which was not feasible in our experiment because of the model count, may resolve these undesirable results." }, { "heading": "6 CONCLUSION", "text": "In this study, an extensive analysis of noise injection to training data has conducted. The results confirmed some of the notions in the literature, while also providing new rule of thumbs for CNN training. As further targets of research, extension of ”cutout” regularization as described in the above paragraphs, and the distribution behavior of the SSIM and PSNR metrics in Figure 2 with regards to the work of Horé & Ziou (2010) may be pursued." } ]
2019
null
SP:88d2b8efd477ec41d0bb7720d3a1ce366e1c3060
[ "The paper proposes to use ELMO embeddings to improve the precision on the first step of the DeepBugs tasks defined by Pradel and Sen (2018). This first step is an artificial problem created by taking real programs (with and without bugs, but assuming almost all of them are correct) and introducing bugs of certain type into the programs. Then, a classifier needs to distinguish between the real and the artificial set. This classifier is then to be used as a checker for anomalies in code and the anomalies are reported as bugs, however the paper skips this second step and only reports results on the first classification problem.", "This paper leverage recent advances of ELMo in context embedding and apply it in the source code embedding. With the help of ELMo, source embedding can take the three benefits: (1)  Surrounding names provide indirect information about possible values the variable could take; (2) an variable’s value evolves through the program execution can be captured; (3) open a gate for the reuse of the ptr-trained model. To evaluate the effectiveness of the proposed approach, authors conduct experiments on the downstream task of the bug detection. " ]
Continuous embeddings of tokens in computer programs have been used to support a variety of software development tools, including readability, code search, and program repair. Contextual embeddings are common in natural language processing but have not been previously applied in software engineering. We introduce a new set of deep contextualized word representations for computer programs based on language models. We train a set of embeddings using the ELMo (embeddings from language models) framework of Peters et al. (2018). We investigate whether these embeddings are effective when fine-tuned for the downstream task of bug detection. We show that even a low-dimensional embedding trained on a relatively small corpus of programs can improve a state-of-the-art machine learning system for bug detection.
[]
[ { "authors": [ "Miltiadis Allamanis", "Earl T. Barr", "Premkumar Devanbu", "Charles Sutton" ], "title": "A survey of machine learning for big code and naturalness", "venue": "ACM Comput. Surv.,", "year": 2018 }, { "authors": [ "Miltiadis Allamanis", "Marc Brockschmidt", "Mahmoud Khademi" ], "title": "Learning to represent programs with graphs", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Uri Alon", "Meital Zilberstein", "Omer Levy", "Eran Yahav" ], "title": "Code2vec: Learning distributed representations of code", "venue": "Proc. ACM Program. Lang.,", "year": 2019 }, { "authors": [ "Rohan Bavishi", "Michael Pradel", "Koushik Sen" ], "title": "Context2name: A deep learning-based approach to infer natural variable names from usage", "venue": "contexts. CoRR,", "year": 2018 }, { "authors": [ "Piotr Bojanowski", "Edouard Grave", "Armand Joulin", "Tomas Mikolov" ], "title": "Enriching word vectors with subword information", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Zimin Chen", "Martin Monperrus" ], "title": "A literature study of embeddings on source code", "venue": "CoRR, abs/1904.03061,", "year": 2019 }, { "authors": [ "Zimin Chen", "Steve Kommrusch", "Michele Tufano", "Louis-Noël Pouchet", "Denys Poshyvanyk", "Martin Monperrus" ], "title": "Sequencer: Sequence-to-sequence learning for end-to-end program repair", "venue": "CoRR, abs/1901.01808,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "venue": "CoRR, abs/1810.04805,", "year": 2018 }, { "authors": [ "L. Gazzola", "D. Micucci", "L. Mariani" ], "title": "Automatic software repair: A survey", "venue": "IEEE Transactions on Software Engineering,", "year": 2019 }, { "authors": [ "Andrew Habib", "Michael Pradel" ], "title": "How many of all bugs do we find? a study of static bug detectors", "venue": "In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering,", "year": 2018 }, { "authors": [ "Jacob A. Harer", "Louis Y. Kim", "Rebecca L. Russell", "Onur Ozdemir", "Leonard R. Kosta", "Akshay Rangamani", "Lei H. Hamilton", "Gabriel I. Centeno", "Jonathan R. Key", "Paul M. Ellingwood", "Marc W. McConley", "Jeffrey M. Opper", "Sang Peter Chin", "Tomo Lazovich" ], "title": "Automated software vulnerability detection with machine learning", "venue": "CoRR, abs/1803.04497,", "year": 2018 }, { "authors": [ "René Just", "Darioush Jalali", "Michael D. Ernst" ], "title": "Defects4j: A database of existing faults to enable controlled testing studies for java programs", "venue": "In Proceedings of the 2014 International Symposium on Software Testing and Analysis,", "year": 2014 }, { "authors": [ "Rafael-Michael Karampatsis", "Charles Sutton" ], "title": "How Often Do Single-Statement Bugs Occur? 
The ManySStuBs4J Dataset", "venue": "arXiv preprint arXiv:1905.13334,", "year": 2019 }, { "authors": [ "Yujia Li", "Richard Zemel", "Marc Brockschmidt", "Daniel Tarlow" ], "title": "Gated graph sequence neural networks", "venue": "In Proceedings of ICLR’16,", "year": 2016 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized BERT pretraining approach", "venue": "URL http://arxiv.org/abs/1907.11692", "year": 1907 }, { "authors": [ "Fan Long", "Martin Rinard" ], "title": "Automatic patch generation by learning correct code", "venue": "In Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages,", "year": 2016 }, { "authors": [ "Oren Melamud", "Jacob Goldberger", "Ido Dagan" ], "title": "context2vec: Learning generic context embedding with bidirectional LSTM", "venue": "In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning,", "year": 2016 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "Martin Monperrus" ], "title": "Automatic software repair: A bibliography", "venue": "ACM Comput. Surv.,", "year": 2018 }, { "authors": [ "Arvind Neelakantan", "Jeevan Shankar", "Alexandre Passos", "Andrew McCallum" ], "title": "Efficient non-parametric estimation of multiple embeddings per word in vector space", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Matthew Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Michael Pradel", "Koushik Sen" ], "title": "Deepbugs: A learning approach to name-based bug detection", "venue": "Proc. ACM Program. 
Lang.,", "year": 2018 }, { "authors": [ "Veselin Raychev", "Pavol Bielik", "Martin Vechev", "Andreas Krause" ], "title": "Learning programs from noisy data", "venue": "In Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages,", "year": 2016 }, { "authors": [ "Caitlin Sadowski", "Jeffrey van Gogh", "Ciera Jaspan", "Emma Söderberg", "Collin Winter" ], "title": "Tricorder: Building a program analysis ecosystem", "venue": "In Proceedings of the 37th International Conference on Software Engineering - Volume 1,", "year": 2015 }, { "authors": [ "Joseph Turian", "Lev Ratinov", "Yoshua Bengio" ], "title": "Word representations: A simple and general method for semisupervised learning", "venue": "In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics,", "year": 2010 }, { "authors": [ "Marko Vasic", "Aditya Kanade", "Petros Maniatis", "David Bieber", "Rishabh singh" ], "title": "Neural program repair by jointly learning to localize and repair", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Martin White", "Michele Tufano", "Matias Martinez", "Martin Monperrus", "Denys Poshyvanyk" ], "title": "Sorting and transforming program repair ingredients via deep learning code similarities", "venue": "pp. 479–490,", "year": 2019 }, { "authors": [ "John Wieting", "Mohit Bansal", "Kevin Gimpel", "Karen Livescu" ], "title": "Charagram: Embedding words and sentences via character n-grams", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime G. Carbonell", "Ruslan Salakhutdinov", "Quoc V. Le" ], "title": "XLNet: Generalized autoregressive pretraining for language understanding", "venue": "URL http: //arxiv.org/abs/1906.08237", "year": 1906 }, { "authors": [ "Pengcheng Yin", "Graham Neubig", "Miltiadis Allamanis", "Marc Brockschmidt", "Alexander L. Gaunt" ], "title": "Learning to represent edits", "venue": "CoRR, abs/1810.13337,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Learning rich representations for source code is an open problem that has the potential to enable software engineering and development tools. Some work on machine learning for source code has used hand engineered features (Long & Rinard, 2016, e.g.), but designing and implementing such features can be tedious and error-prone. For this reason, other work considers the task of learning a representation of source code from data (Allamanis et al., 2018a). Many models of source code are based on learned representations called embeddings, which transform words into a continuous vector space (Mikolov et al., 2013). Currently in software engineering (SE) researchers have used static embeddings (Harer et al., 2018; White et al., 2019; Pradel & Sen, 2018), which map a word to the same vector regardless of its context. However, recent work in natural language processing (NLP) has found that contextual embeddings can lead to better performance (Peters et al., 2018; Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019). Contextualized embeddings assign a different vector to a word based on the context it is used. For NLP this has the advantage that it can model phenomena like polysemy. A natural question to ask is if these methods would also be beneficial for learning better SE representations.\nIn this paper, we introduce a new set of contextual embeddings for source code. Contextual embeddings have several potential modelling advantages that are specifically suited to modelling source code:\n• Surrounding names contain important information about an identifier. For example, for a variable name, surrounding tokens might include functions that take that variable as an argument or assignments to the variable. These tokens provide indirect information about possible values the variable could take, and so should affect its representation. Even keywords can have very different meanings based on their context. For instance, a private function is not the same as a private variable or a private class (in the case of Java / C++).\n• Contextual embeddings assign a different representation to a variable each time it is used in the program. By doing this, they can potentially capture how a variable’s value evolves through the program execution.\n• Contextual embeddings enable the use of transfer learning. Pre-training a large neural language model and querying it for contextualized representations while simultaneously fine-tuning for the specific task is a very effective technique for supervised tasks for which there is a small amount of supervised data available. As a result only a small model needs to be fine-tuned atop the pre-trained model, without the need for task-specific architectures nor the need of training a large model for each task separately.\nIn this paper, we highlight the potential of contextual code embeddings for program repair. Automatically finding bugs in code is an important open problem in SE. Even simple bugs can be hard to spot and repair. A promising approach to this end is name-based bug detection, introduced by DeepBugs (Pradel & Sen, 2018). The current state-of-the-art in name-based bug detection relies on static representations from Word2Vec (Mikolov et al., 2013) to learn a classifier that distinguishes correct from incorrect code for a specific bug pattern. We introduce a new set of contextualized\nembeddings for code and explore its usefulness on the task of name-based bug detection. 
Our method significantly outperforms DeepBugs as well as other static representation methods on both the DeepBugs dataset and a new, previously unused test set of JavaScript projects. We release our implementation and representations, as they could lead to improvements in a great variety of SE tasks." }, { "heading": "2 RELATED WORK", "text": "Unsupervised static word embeddings have been extensively used to improve the accuracy of supervised tasks in NLP (Turian et al., 2010). Notable examples of such methods are Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). However, the above models learn only a single context-independent word representation. To overcome this problem some models (Wieting et al., 2016; Bojanowski et al., 2017) enhance the representations with subword information, which can also somewhat deal with out-of-vocabulary words. Another approach is to learn a different representation for every word sense (Neelakantan et al., 2014), but this requires knowing the set of word senses in advance. More recent methods overcome the above issues by learning contextualized embeddings. Melamud et al. (2016) encode the context surrounding a pivot word using a bidirectional LSTM. Peters et al. (2018) use a deep bidirectional LSTM, learning word embeddings as functions of its internal states, calling the method Embeddings using Language Models (ELMo). We discuss ELMo in detail in Section 3. Devlin et al. (2018) introduced bidirectional encoder representations from transformers (BERT). This method learns pre-trained contextual embeddings by jointly conditioning on left and right context via an attention mechanism.\nProgram repair is an important task in software engineering and programming languages. For a detailed review see Monperrus (2018); Gazzola et al. (2019). Many recent program repair methods are based on machine learning. Yin et al. (2018) learn to represent code edits using a gated graph neural network (GGNN) (Li et al., 2016). Allamanis et al. (2018b) learn to identify a particular class of bugs called variable misuse bugs, using a GGNN. Chen et al. (2019) introduce SequenceR, which learns to transform buggy lines into fixed ones via machine translation. Our work is orthogonal to these approaches and can be used as input to other models.\nFinally, our work is also related to code representation methods, many of which have also been used in program repair. Harer et al. (2018) learn Word2Vec embeddings for C/C++ tokens to predict software vulnerabilities. White et al. (2019) learn Word2Vec embeddings for Java tokens and utilize them in program repair. Alon et al. (2019) learn code embeddings using abstract syntax tree paths. A more detailed overview can be found in (Allamanis et al., 2018a; Chen & Monperrus, 2019)." }, { "heading": "3 EMBEDDINGS FROM LANGUAGE MODELS (ELMO)", "text": "ELMo (Peters et al., 2018) computes word embeddings from the hidden states of a language model. Consequently, the embeddings of each token depend on their context in the input sequence, and even out-of-vocabulary (OOV) tokens have effective input representations. In this section, we briefly describe the ELMo embeddings.\nIn the first step, a neural language model is trained to maximize the likelihood of a training corpus. The architecture used by ELMo is a bidirectional LSTM with L layers and character convolutions in the input layer. Let the input be a sequence of tokens (t_1, ..., t_N). For each token t_k, denote by x_k^{LM} the input representation from the character convolution.
Consequently, this representation passes through L layers of forward and backward LSTMs. Each layer j ∈ {1, ..., L} of the forward LSTM computes a hidden state →h_{k,j}^{LM}, and likewise the hidden states of the backward LSTM are denoted by ←h_{k,j}^{LM}. The parameters for the token representation and for the output softmax layer are tied for both directions, while different parameters are learned for each direction of the LSTMs.\nAfter the language model has been trained, we can use it within another downstream task by combining the hidden states of the language model from each LSTM layer. This process is called ELMo. For each token t_k of a sentence in the test set, the language model computes 2L + 1 hidden states: one in each direction for each layer, and then the input layer. To make the following more compact, we can write these as h_{k,0}^{LM} = x_k^{LM} for the input layer, and then h_{k,j}^{LM} = [→h_{k,j}^{LM}, ←h_{k,j}^{LM}] for all of the other layers. The set of these vectors is\nR_k = {h_{k,j}^{LM} | j = 0, ..., L}.    (1)\nTo create the final representation that is fed to downstream tasks, ELMo collapses the set of representations into a single vector E_k for token t_k. A simplistic approach is to only select the top layer, so that E_k = h_{k,L}^{LM}. A more general one, which we use in this work, is to combine the layers via fine-tuned task-specific weights s = (s_0, ..., s_L) for every layer. Then we can compute the embedding for token k as\nE_k = γ Σ_{j=0}^{L} s_j h_{k,j}^{LM},    (2)\nwhere γ is an additional scalar parameter that scales the entire vector. In our experiments we did not perform fine-tuning and thus used equal weights s_j = 1/(L + 1) for each layer and γ = 1. However, our implementation also supports all the aforementioned ways of collapsing the set of representations.\nA potential drawback of the method is that it still utilizes a softmax output layer with a fixed vocabulary, which does not scale effectively, and it still predicts UNK for OOV tokens, which may have a negative effect on the representations." }, { "heading": "4 SOURCE CODE ELMO", "text": "We describe Source Code ELMo (SCELMo), which trains ELMo on corpora of source code. However, we note that normally ELMo models in other domains are able to effectively utilize much larger representations. The code was tokenized using the esprima JavaScript tokenizer1. For training the ELMo model we used a corpus of 150,000 JavaScript files (Raychev et al., 2016) consisting of various open-source projects. This corpus has previously been used on several tasks (Raychev et al., 2016; Pradel & Sen, 2018; Bavishi et al., 2018). We applied the patch released by Allamanis et al. (2018a) to filter out code duplication, as this phenomenon was shown on this and other corpora to result in inflation of performance metrics. This resulted in 64750 training files and 33229 validation files. Since the validation set contains files from the same projects as the training set, the contained instances might be too similar, leading to unrealistic overestimates of performance. To address this we also created a test set of 500 random JavaScript projects sampled from the top 20,000 open-source JavaScript projects as of May 2019. The test corpus has not been utilized in previous work and is a better reflection of the performance of the learned bug detectors. Lastly, it is important to know what the performance of the method will be if we do not have access to training data from the projects on which we would like to find bugs. This is a common situation in many real-world scenarios.
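Equation (2) above amounts to a learned (or, in our untuned setting, uniform) weighted combination of the per-layer hidden states. A small PyTorch-style sketch of this collapsing step is shown below; tensor shapes and variable names are assumptions for illustration, not the released implementation.

```python
import torch

def collapse_elmo_layers(layer_states, s=None, gamma=1.0):
    """Collapse ELMo layer representations into one embedding per token.

    layer_states: tensor of shape (L+1, seq_len, dim) holding h_{k,j}^{LM}
    s: optional per-layer weights of shape (L+1,); uniform if None (our setting)
    """
    num_layers = layer_states.shape[0]
    if s is None:
        s = torch.full((num_layers,), 1.0 / num_layers)  # s_j = 1/(L+1), gamma = 1
    else:
        s = torch.softmax(s, dim=0)  # normalized task-specific weights
    # E_k = gamma * sum_j s_j * h_{k,j}
    return gamma * torch.einsum('l,lsd->sd', s, layer_states)

# Example: L = 2 LSTM layers plus the input layer, 7 tokens, 200 features
states = torch.randn(3, 7, 200)
embeddings = collapse_elmo_layers(states)  # shape (7, 200)
```

With fine-tuning, s and gamma would simply be registered as trainable parameters of the downstream classifier; with the uniform setting used here, the function reduces to averaging the layers.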
For training the ELMo model, we use an embedding size of 100 features for each of the forward and backward LSTMs, so that each layer sums up to 200 features." }, { "heading": "5 CONTEXTUAL EMBEDDINGS FOR PROGRAM REPAIR", "text": "In this section, we describe how contextual embeddings can be incorporated within a recent machine learning-based bug detection system, the DeepBugs system of Pradel & Sen (2018). In the first part of this section, we give background about the DeepBugs system, and then we describe how we incorporate SCELMo within DeepBugs. DeepBugs treats the problem of finding a bug as a classification problem. The system considers a set of specific bug types, which are small mistakes that might be made in a program, such as swapping two arguments. For each bug type, DeepBugs trains a binary classifier that takes a program statement as input and predicts whether the statement contains that type of bug. At test time, this classifier can be run for every statement in the program to attempt to detect bugs.\nIn order to train the model, both examples of correct and incorrect (buggy) code are necessary. DeepBugs treats the existing code as correct and randomly mutates it to obtain buggy code. To obtain training examples, we extract all expressions from the source code which are either function calls with exactly two arguments or binary expressions. To create instances of buggy code we mutate each of the correct instances. As such, arguments in function calls are swapped, the binary operator in binary expressions is replaced with another random one, and finally either the left or the right operand is randomly replaced by another random binary operand that appears in the same file. The classification task is then a binary task to predict whether the instance is correct, i.e., it comes from the original code, or whether it is buggy, i.e., it was one of the randomly mutated examples. The validation and test sets are mutated in the same way as the training set. The split between correct and buggy instances has a 50/50 class distribution, as for each original code instance exactly one mutated buggy counterpart is created.\nThe architecture for the classifier is a feedforward network with a single hidden layer of 200 dimensions with ReLU activations and a sigmoid output layer. For both the input and hidden layers, a dropout of 0.2 is applied. The network was trained in all experiments for 10 epochs with a batch size of 50 and the RMSProp optimizer. We note that, for maintaining a consistent comparison with DeepBugs, we kept all the above parameters as well as the optimizer's parameters fixed to the values reported in Pradel & Sen (2018). Tuning these parameters would probably result in at least a small performance increase for our method.\n1https://esprima.org/\nIn our experiments, we consider three bug types that address a set of common programming mistakes: swapped arguments of function calls, using the wrong binary operator, and using an incorrect binary operand in a binary expression. The methodology can easily be applied to other bug types. Figure 1 illustrates an example of each of the three bug types." }, { "heading": "5.1 INPUT TO THE CLASSIFIER", "text": "A key question is how a statement from the source code is converted into a feature vector that can be used within the classifier. DeepBugs uses a set of heuristics that, given a statement and a bug type, return a sequence of identifiers from the statement that are most likely to be relevant.
For instance, for the call to setTimeout in Listing 1 the following sequence of identifiers would be extracted: [setTimeout, delay, function]. A detailed description of the heuristics is available in Appendix A.\nThese heuristics result in a sequence of program identifiers. These are converted to continuous vectors using word embeddings and concatenated, and this is the input to the classifier. DeepBugs uses Word2Vec embeddings trained on a corpus of code. In our experiments, we train classifiers using three different types of word embeddings. First, we kept the 10,000 most frequent identifiers/literals and assigned to each of them a random embedding of 200 features. Second, to reproduce the results of Pradel & Sen (2018), we use the CBOW variant of Word2Vec to learn representations consisting of 200 features for the 10,000 most frequent identifiers/literals. Finally, we train FastText embeddings (Bojanowski et al., 2017) on the training set to learn identifier embeddings that contain subword information. The subwords used by FastText are all the character trigrams that appear in the training corpus. Identifiers are therefore composed of multiple subwords. To represent an identifier, we sum the embeddings of its subwords. This allows the identifier embeddings to contain information about the structure and morphology of identifiers. This also allows the FastText embeddings, unlike the Word2Vec ones, to represent OOV words as a combination of character trigrams.\nNote that DeepBugs can detect bugs only in statements that do not contain OOV (out-of-vocabulary) identifiers, because its Word2Vec embeddings cannot extract features for OOV names. In contrast, our implementation does not skip such instances. Since the original work discarded any instances that contain OOV identifiers, we neither know how the method performs on such instances nor how often those appear in the utilized dataset of DeepBugs. Moreover, DeepBugs supported only a specific subset of AST nodes and skipped the rest. For example, if a call's argument is a complex expression consisting of other expressions, then the call would be skipped. However, we expanded the implementation to support all kinds of AST nodes and to not skip instances with nested expressions, as discussed in Appendix A. We note that we still skip an instance if one of its main parts (e.g., a function call's argument) is a complex expression longer than 1,000 characters, as such expressions might be overly long to reason about." }, { "heading": "5.2 CONNECTING SCELMO TO THE BUG DETECTOR", "text": "We investigated two variants of the bug detection model, which query SCELMo in different ways to get features for the classifier. The first utilizes the heuristic of Appendix A to extract a small set of identifiers or literals that represent the code piece. For example, for an incorrect binary operand instance we extract one identifier or literal for the left and right operands respectively, and we also extract its binary operator. Then, those are concatenated to form a query to the network. In the case of function calls, we extract the identifier corresponding to the name of the called function, one identifier or literal for the first and second argument respectively, and an identifier for the expression on which the function is called. We also add the appropriate syntax tokens (a '.' if necessary, a ',' between the two arguments, and left and right parentheses) to create a query that resembles a function call.
This baseline approach creates simplistic fixed-size queries for the network but does not utilize its full potential, since the queries do not necessarily resemble actual code, nor correct code similar to the sequences in the training set for the embeddings. We will refer to this baseline as No-Context ELMo.\nIn our proposed method, we feed the language model all the tokens of the instances for which we need representations and compute SCELMo embeddings for them. Valid instances are function calls that contain exactly two arguments and binary expressions. To create a fixed-size representation, we extract only the features corresponding to a fixed set of tokens. Specifically, for function calls we use the representations corresponding to the first token of the expression on which the function is called, the function name, the first token of the first argument, and the first token of the second argument. For binary expressions, we use those of the first token of the left operand, the binary operator, and the first token of the right operand. Since the representations contain contextual information, the returned vectors can capture information about the rest of the tokens in the code sequence." }, { "heading": "6 RESULTS", "text": "We next discuss the experiments we performed and their corresponding results. We measured the performance of the three baselines as well as those of No-Context ELMo and SCELMo. Measuring the performance of No-Context ELMo allows us to evaluate how much of the improvement is due to specifics of the language model architecture, such as the character convolutional layer which can handle OOVs, and how much is due to the contextual information itself." }, { "heading": "6.1 PERFORMANCE ON VALIDATION SET", "text": "In our first experiment we evaluate the performance of the methods in tasks where training data from the same projects are available. The evaluation performed in this experiment gives a good estimation of how our method performs compared to the previous state-of-the-art technique of DeepBugs. One main difference, however, is that the evaluation now also includes instances which contain OOV identifiers. As a consequence, the bug detection tasks are harder than those presented by Pradel & Sen (2018), as their evaluation does not include, in either the training or validation set, any instance for which an extracted identifier is OOV. Table 1 illustrates the performance of the baselines and our models. As one would expect, the FastText baseline improves over Word2Vec for all bug types due to the subword information. Moreover, our model SCELMo massively outperforms all other methods. Lastly, even No-Context ELMo, the heuristic version of SCELMo that does not utilize contextual information at test time, outperforms the baseline methods, showcasing how powerful the pre-trained representations are." }, { "heading": "6.2 INCLUDING COMPLEX EXPRESSIONS", "text": "In our next experiment we also included instances that contain elements that are complex or nested expressions. For instance, in the original work, if one of the arguments of a function call or one of the operands of a binary expression is an expression consisting of other expressions, then the instance would not be included in the dataset. Several AST node types, such as a NewExpression node or an ObjectExpression, were not supported. Figure 2 shows a few examples of instances that would previously be skipped.2 Such instances were skipped by Pradel & Sen (2018) and not included in their results.
We do note, though, that we still skip very long expressions that contain more than 1000 tokens.\nSimilarly to the previous experiment, SCELMo significantly outperforms all other models. This is evident in Table 2. Lastly, we clarify that the results of this section should not be directly compared to those of the previous one, as the training set for this experiment is also larger." }, { "heading": "6.3 EXTERNAL TEST EVALUATION", "text": "The last experiment’s objective is to showcase how the various models would perform on unseen projects, as this better illustrates the generalizability of the techniques. The configuration utilized is identical to that of the previous section. Looking at Table 3, one can notice that the baselines have a major drop in performance. This is a common finding in machine learning models of code, namely, that applying a trained model to a new software project is much more difficult than to a new file in the same project. In contrast, SCELMo offers up to 15% improvement in accuracy compared to the Word2Vec baseline. In fact, impressively enough, SCELMo’s accuracy on the external test set is better than that of the baselines on the validation set." }, { "heading": "6.4 OOV STATISTICS", "text": "In order to better understand the above results, we measured the OOV rate of the basic elements of the code instances appearing in the dataset. Here the OOV rate is calculated based on the vocabulary of 10,000 entries utilized by the Word2Vec and random baseline models. These are illustrated in Tables 4 and 5. We measured the OOV rates for both the version of the dataset used in Section 6.1, which we call Train and Validation, and that used in Section 6.2, which we call Extended Train and Extended Validation.\nTables 4 and 5 describe the OOV rates for different parts of the expression types that are considered by the DeepBugs bug detector. A detailed description of the identifier extraction heuristic can be found in Appendix A. We first focus on the swapped arguments bug pattern and consider all of the method calls that have exactly two arguments. Each method call contains the function name, a name of the first argument, a name of the second argument, and a base object. The base object is the identifier that would be extracted from the expression (if such an expression exists) on which the function is called. For instance, from the following expression: window.navigator.userAgent.indexOf(”Chrome”), userAgent would be extracted as the base object. Table 4 shows for each of the components how often they are OOV. In the expanded version of the dataset, if one of the arguments is a complex expression, then it is converted into a name based on the heuristic described in Appendix A. The resulting statistics contain valuable information since, for instance, it is almost impossible for the Word2Vec baseline to reason about a swapped arguments bug if the identifiers extracted for both arguments are OOV.\n2The AST is extracted using the acorn parser https://github.com/acornjs/acorn\nIn a similar manner, for the incorrect operand and operator bug patterns, we consider all the binary operations. Each binary expression consists of a left and a right operand, and a name is extracted for each of them. For each operand, we also measured the frequency with which the operand corresponds to certain common types such as an identifier, a literal, or a ThisExpression.
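For concreteness, such OOV rates against a fixed vocabulary can be computed as in the following minimal sketch, where the helper names are ours:

from collections import Counter

def build_vocab(train_names, size=10000):
    # Keep the `size` most frequent identifiers/literals, as in the baselines.
    return {name for name, _ in Counter(train_names).most_common(size)}

def oov_rate(names, vocab):
    # Fraction of extracted names that fall outside the vocabulary.
    if not names:
        return 0.0
    return sum(1 for n in names if n not in vocab) / len(names)

Computing these rates separately for each extracted component of a call or binary expression yields per-component statistics like those in Tables 4 and 5."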
}, { "heading": "7 IS NEURAL BUG-FINDING USEFUL IN PRACTICE?", "text": "Although related work (Pradel & Sen, 2018; Allamanis et al., 2018b; Vasic et al., 2019) has shown that there is great potential for embedding based neural bug finders, the evaluation has mostly focused on synthetic bugs introduced by mutating the original code. However, there is no strong indication that the synthetic bugs correlate to real ones, apart from a small study of the top 50 warnings for each bug type produced by DeepBugs. A good example is the mutation operation utilized for the incorrect binary operator bug. A lot of the introduced bug instances could result in syntactic errors. This can potentially create a classifier with a high bias towards correlating buggy code to syntactically incorrect code, thus hindering the model’s ability to generalize on real bugs. Ideally, in an industrial environment we would like the resulting models to achieve a false positive rate of less than 10 % (Sadowski et al., 2015). Sadly, high true positive rates are not to be expected as well since static bug detectors were shown to be able to detect less than 5% of bugs\n(Habib & Pradel, 2018) contained in the Defects4J corpus (Just et al., 2014) and less than 12% in a single-statement bugs corpus (Karampatsis & Sutton, 2019). We note that in the second case the static analysis tool is given credit by reported any warning for the buggy line, so the actual percentage might lower than the reported one.\nWe next make a first step on investigating the practical usefulness of our methods by applying the classifiers of the previous section on a small corpus of real JavaScript bugs. However, we think that this is a very hard yet interesting problem that should be carefully examined in future work. In order to mine a corpus of real bug changes we used the methodology described in (Karampatsis & Sutton, 2019). We note that we adapted their implementation to utilize the Rhino JavaScript parser3. Their methodology extracts bug fixing commits and filters them to only keep those that contain small single-statement changes. Finally, it classifies each pair of modified statements by whether the fit a set of mutation patterns. The resulting dataset is shown in Table 6. Upon acceptance of the paper we will release this dataset along with our implementation, the rest of data used, and the learned representations.\nFinally, we queried the DeepBugs and SCELMo with each buggy instance as well as its fixed variant and measured the percentage of correctly classified instances for each of the two categories. We also ignored any instances for which the JavaScript parser utilized for both failed to extract an AST. We classified as bugs any instances that were assigned a probability to be a bug > 75%. In an actual system this threshold should ideally be tuned on a validation set.\nTable 7 suggests that there might indeed be some potential for future practical applications of neural bug finding techniques. Both are able to uncover some of the bugs. However, the results also suggest that careful tuning of the predictions threshold might be necessary, especially if we take into account the industrial need to comply with a low false positive rate (FPR). For instance, raising SCELMo’s prediction threshold to 80% for the swap arguments bug results in finding only 3.34% of the bugs but correctly classifying 100% of the repaired function calls, thus achieving 0.0% false positive rate. 
Moreover, since SCELMo could not uncover any of the real binary operator bugs, future work could investigate the effect of utilizing different mutation strategies for the purpose of artificial bug induction. Future work could also investigate if fine-tuning on a small set of real bugs could result in more robust classifiers." }, { "heading": "8 CONCLUSION", "text": "We have presented SCELMo, which is to our knowledge the first language-model-based contextual embeddings for source code. Contextual embeddings have many potential advantages for source code, because surrounding tokens can indirectly provide information about tokens, e.g., about likely values of variables. We highlight the utility of SCELMo embeddings by using them within a recent state-of-the-art machine-learning-based bug detector. The SCELMo embeddings yield a dramatic improvement on the synthetic bug detection benchmark, especially on lines of code that contain out-of-vocabulary tokens and complex expressions that can cause difficulty for the method. We also showed and discussed the performance of the resulting bug detectors on a dataset of real bugs, raising useful insights for future work.\n3https://github.com/mozilla/rhino" }, { "heading": "A NAME EXTRACTION HEURISTIC", "text": "In order for DeepBugs to operate, it is necessary to extract identifiers or literals for each expression part of the statement. The bug detector for swapped arguments utilizes the following elements of the function call:\nBase Object: The expression on which the function is called. Callee: The called function. Argument 1: The expression constituting the first argument of the called function. Argument 2: The expression constituting the second argument of the called function.\nSimilarly, the bug detectors for incorrect binary operators and operands utilize the following elements of the binary expression:\nBinary Operator: The binary operator utilized in the expression. Left Operand: The left operand of the binary expression. Right Operand: The right operand of the binary expression.\nWe next describe the extraction heuristic, which is shared by all the bug detectors. The heuristic takes as input a node n representing an expression and returns name(n) based on the following rules:\n• Identifier: return its name. • Literal: return its value. • this expression: return this. • Update expression with argument x: return name(x). • Member expression accessing a property p: return name(p). • Member expression accessing a property base[p]: return name(base). • Call expression base.callee(...): return name(callee). • Property node n: If n.key does not exist, return name(n.value). If name(n.key) does not exist, return name(n.value). Otherwise, randomly return either name(n.value) or name(n.key).\n• Binary expression with left operand l and right operand r: Run the heuristic on both l and r to retrieve name(l) and name(r). If name(l) does not exist, return name(r). If name(r) does not exist, return name(l). Otherwise, randomly return either name(l) or name(r).\n• Logical expression with left operand l and right operand r: Run the heuristic on both l and r to retrieve name(l) and name(r). If name(l) does not exist, return name(r). If name(r) does not exist, return name(l). Otherwise, randomly return either name(l) or name(r).\n• Assignment expression with left operand l and right operand r: Run the heuristic on both l and r to retrieve name(l) and name(r). If name(l) does not exist, return name(r). If name(r) does not exist, return name(l).
Otherwise, randomly return either name(l) or name(r).\n• Unary expression with argument u: return name(u). • Array expression with elements li: for all li for which name(li) exists, randomly choose one of them and return name(li).\n• Conditional expression with operands c, l, and r: randomly choose one of c, l, and r for which a name exists and return its name.\n• Function expression: return function. • Object expression: return {. • New expression with a constructor function call c: return name(c).\nAll random decisions follow a uniform distribution." } ]
2019
SCELMO: SOURCE CODE EMBEDDINGS FROM LANGUAGE MODELS
SP:f16157d7cd025cddd1b7f8024983737cfed9e8d4
[ "This paper introduces a strategy to prune a convolutional neural network during training. To speed up training, the proposed method prunes the weights with the smallest magnitude during only a small number of epochs at the beginning of training, later on continuing training with a fixed sparsity pattern. Several granularity levels for convolutional and fully-connected layers are studied. Furthermore, the robustness of the resulting pruned networks to adversarial attacks is investigated.", "This paper explores a series of incremental variations of existing pruning techniques for compressing Resnet-50 for ImageNet. Specifically, it proposes concentrating all pruning during an early \"era\" of training (the first 20-50 epochs out of 100 total). It also explores hybrids between sparse pruning and structured pruning. Finally, it considers the adversarial robustness of the resulting networks to the FGSM attack. " ]
This paper studies structured sparse training of CNNs with a gradual pruning technique that leads to fixed, sparse weight matrices after a set number of epochs. We simplify the structure of the enforced sparsity so that it reduces overhead caused by regularization. The proposed training methodology explores several options for structured sparsity. We study various tradeoffs with respect to pruning duration, learning-rate configuration, and the total length of training. We show that our method creates a sparse version of ResNet50 and ResNet50v1.5 on full ImageNet while remaining within a negligible <1% margin of accuracy loss. To ensure that this type of sparse training does not harm the robustness of the network, we also demonstrate how the network behaves in the presence of adversarial attacks. Our results show that with 70% target sparsity, over 75% top-1 accuracy is achievable.
[]
[ { "authors": [ "Guillaume Bellec", "David Kappel", "Wolfgang Maass", "Robert A. Legenstein" ], "title": "Deep rewiring: Training very sparse deep networks", "venue": "CoRR, abs/1711.05136,", "year": 2017 }, { "authors": [ "Xiaoliang Dai", "Hongxu Yin", "Niraj K Jha" ], "title": "Nest: A neural network synthesis tool based on a grow-and-prune paradigm", "venue": null, "year": 2017 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "arXiv preprint arXiv:1803.03635,", "year": 2018 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "arXiv preprint arXiv:1510.00149,", "year": 2015 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Song Han", "Xingyu Liu", "Huizi Mao", "Jing Pu", "Ardavan Pedram", "Mark A. Horowitz", "William J. Dally" ], "title": "EIE: efficient inference engine on compressed deep neural network", "venue": "CoRR, abs/1602.01528,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "CoRR, abs/1512.03385,", "year": 2015 }, { "authors": [ "Yihui He", "Xiangyu Zhang", "Jian Sun" ], "title": "Channel pruning for accelerating very deep neural networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Zehao Huang", "Naiyan Wang" ], "title": "Data-driven sparse structure selection for deep neural networks", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Alex Krizhevsky" ], "title": "One weird trick for parallelizing convolutional neural networks", "venue": null, "year": 2014 }, { "authors": [ "Vadim Lebedev", "Victor Lempitsky" ], "title": "Fast convnets using group-wise brain damage", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Hao Li", "Asim Kadav", "Igor Durdanovic", "Hanan Samet", "Hans Peter Graf" ], "title": "Pruning filters for efficient convnets", "venue": "arXiv preprint arXiv:1608.08710,", "year": 2016 }, { "authors": [ "Zhuang Liu", "Jianguo Li", "Zhiqiang Shen", "Gao Huang", "Shoumeng Yan", "Changshui Zhang" ], "title": "Learning efficient convolutional networks through network slimming", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Jian-Hao Luo", "Jianxin Wu", "Weiyao Lin" ], "title": "Thinet: A filter level pruning method for deep neural network compression", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Sangkug Lym", "Esha Choukse", "Siavash Zangeneh", "Wei Wen", "Mattan Erez", "Sujay Shanghavi" ], "title": "Prunetrain: Gradual structured pruning from scratch for faster neural network training", "venue": "URL http://arxiv.org/abs/1901.09290", "year": 1901 }, { "authors": [ "Huizi Mao", 
"Song Han", "Jeff Pool", "Wenshuo Li", "Xingyu Liu", "Yu Wang", "William J Dally" ], "title": "Exploring the granularity of sparsity in convolutional neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "Decebal Constantin Mocanu", "Elena Mocanu", "Peter Stone", "Phuong H Nguyen", "Madeleine Gibescu", "Antonio Liotta" ], "title": "Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science", "venue": "Nature communications,", "year": 2018 }, { "authors": [ "Hesham Mostafa", "Xin Wang" ], "title": "Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization", "venue": "CoRR, abs/1902.05967,", "year": 2019 }, { "authors": [ "Sharan Narang", "Erich Elsen", "Gregory Diamos", "Shubho Sengupta" ], "title": "Exploring sparsity in recurrent neural networks", "venue": "arXiv preprint arXiv:1704.05119,", "year": 2017 }, { "authors": [ "Xavier Suau", "Luca Zappella", "Vinay Palakkode", "Nicholas Apostoloff" ], "title": "Principal filter analysis for guided network compression", "venue": "arXiv preprint arXiv:1807.10585,", "year": 2018 }, { "authors": [ "Lei Sun" ], "title": "Resnet on tiny imagenet", "venue": "Submitted on,", "year": 2016 }, { "authors": [ "Luyu Wang", "Gavin Weiguang Ding", "Ruitong Huang", "Yanshuai Cao", "Yik Chau Lui" ], "title": "Adversarial robustness of pruned neural networks", "venue": null, "year": 2018 }, { "authors": [ "Wei Wen", "Chunpeng Wu", "Yandan Wang", "Yiran Chen", "Hai Li" ], "title": "Learning structured sparsity in deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Michael Zhu", "Suyog Gupta" ], "title": "To prune, or not to prune: exploring the efficacy of pruning for model compression", "venue": "arXiv e-prints,", "year": 2017 } ]
[ { "heading": null, "text": "This paper studies structured sparse training of CNNs with a gradual pruning technique that leads to fixed, sparse weight matrices after a set number of epochs. We simplify the structure of the enforced sparsity so that it reduces overhead caused by regularization. The proposed training methodology explores several options for structured sparsity.\nWe study various tradeoffs with respect to pruning duration, learning-rate configuration, and the total length of training. We show that our method creates a sparse version of ResNet50 and ResNet50v1.5 on full ImageNet while remaining within a negligible <1% margin of accuracy loss. To ensure that this type of sparse training does not harm the robustness of the network, we also demonstrate how the network behaves in the presence of adversarial attacks. Our results show that with 70% target sparsity, over 75% top-1 accuracy is achievable." }, { "heading": "1 INTRODUCTION", "text": "Pruning weights can compress a network into a smaller model so that the model can fit into faster/smaller memory and therefore result in execution speedups (Han et al., 2016; 2015a). To increase the accuracy Han et al. (2015b) and Mao et al. (2017) explore training the network dense after pruning. The resulting network can maintain accuracy based on the specified level of sparsity (Mostafa & Wang, 2019; Zhu & Gupta, 2017; Han et al., 2015a).\nStructured sparsity has been explored for RNNs and also CNNs where a certain number of nonzeros is allowed across various cross-sections of the weight tensors. These methods aim to speed up computation and reach some final level of sparsity for deployment. Narang et al. (2017) have shown promising results for structured training of RNNs while sparse CNNs could not achieve the same performance (Mao et al., 2017).\nRecent work has demonstrated that structurally sparse training can speed up execution on GPUs (He et al., 2017; Lym et al., 2019; Zhu & Gupta, 2017). However, these training mechanisms add regularization and computational overhead to eliminate unnecessary weights. The regularization term modifies the original training and can be expensive in hardware. While enforcing coarse-grain sparsity Lym et al. (2019) provides significant speedups, the final network contains an insufficient degree of sparsity for deployment on edge devices.\nMostafa & Wang (2019) show that with adaptive sparse training and dynamic reallocation of nonzeros sparsity levels up to 80% can be achieved. However, even though an additional 10 epochs of training are required, an accuracy loss of around 1-2% is still observed. The main drawback is the overhead incurred while implementing such technique on the target platform. Continuous reconfiguration of the sparsity pattern is expensive as it does not allow for compression of weights during training.\nTo achieve speedups and a desired final degree of sparsity, we aim to apply the techniques in Han et al. (2015b) and Mao et al. (2017) at earlier stages in training at higher frequency within a period which we call the pruning era, usually a period of 20-30 epochs. During the pruning era, with fine granularity of at most a kernel size, we exploit one of the three proposed sparsity regimes. Subsequently, we fix the mask for the rest of the training to speed it up. 
Our motivation came from the insight that having a fixed sparse multiply-accumulate pattern allows weight compression during training and can save compute and energy in hardware (Han et al., 2016).\nWe explore the impact of various pruning granularities, sparsity levels, and learning-rate schedules on the network’s convergence as well as adversarial robustness for CNNs like ResNet50 (He et al., 2015) on ImageNet and Tiny-ImageNet (CS231N, 2015).\nRecent literature has shown that adversarial attacks are more successful on pruned neural networks than they are on regular neural networks (Wang et al., 2018). Given the danger of adversarial attacks in real-world situations, we find it important to evaluate the adversarial robustness of our sparsity techniques. We leverage the FGSM mechanism (Goodfellow et al., 2014) to evaluate the adversarial robustness of our sparse models. This paper makes the following contributions:\n1. We propose a mechanism to train and prune a convolutional network during the earlier stages of training such that this sparsity can be harvested for computational speedups. To do this, we fix the sparse weight masks for the remainder of the training.\n2. For fully connected sparsification, we eliminate blocks of fully connected weights based on their connection to the zeros in the previous convolutional layer.\n3. We enforce structural, regularization-free, magnitude-based pruning across two distinct dimensions and a combined version. These dimensions are inside the convolution window (R×S) and across the input/output feature matrix (C×K).\n4. Our sparse models are as robust to adversarial FGSM attacks as fully dense models.\n5. We demonstrate that early-stage dense training is crucial for maintaining high accuracy.\n6. The proposed technique is tolerant to sparsity levels of up to 60-70% with under 1% accuracy degradation. We can compensate by scheduling an extra learning rate drop and training for an extra 10 epochs.\nThe rest of the paper is organized as follows. Section 2 explains our pruning methodology. Section 3 describes the experimental setup. Section 4 presents results and discusses their interpretation. Section 5 presents the related work. Section 6 concludes the paper." }, { "heading": "2 PRUNING METHODOLOGY", "text": "Our proposed pruning mechanism works by always pruning the weights of smallest magnitude after each weight update. After a forward and backward pass (one batch update), the model is pruned. If a weight is already zero, the gradient is also set to zero. This means that once a weight becomes zero, it will remain zero for the rest of the training period.\nThis mechanism is similar to Han et al. (2015b), except that we only prune in the earlier stages of the training as opposed to post-training. Additionally, this work is similar to Narang et al. (2017), although we set the sparsity threshold instead of using a heuristic to calculate it. We chose this pruning mechanism because of its negligible computational overhead.\nIn our pruning algorithm, the sparsity threshold refers to the percentage of weights in the network that are currently pruned. Before or during the first epoch of pruning, we will have a sparsity threshold of zero. As we continue training, we gradually increase the sparsity threshold so that by the final epoch of pruning the network sparsity will have reached our final, desired threshold.
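As a rough illustration, the following PyTorch-style sketch shows one way this mechanism could look for a single weight tensor; the linear ramp of the threshold over the pruning era is an assumption of the sketch, and the per-layer window/kernel structure described below is omitted:

import torch

def sparsity_at(step, start_step, end_step, final_sparsity):
    # Sparsity threshold ramps from 0 to final_sparsity over the pruning era.
    if step <= start_step:
        return 0.0
    frac = min(1.0, (step - start_step) / float(end_step - start_step))
    return final_sparsity * frac

def magnitude_mask(weight, sparsity):
    # Keep the largest-magnitude weights; zero out the smallest ones.
    k = int(weight.numel() * sparsity)
    if k == 0:
        return torch.ones_like(weight)
    cutoff = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > cutoff).float()

# After each weight update during the pruning era:
#   mask = magnitude_mask(w, sparsity_at(step, ...)); w.data.mul_(mask)
# After the final epoch of pruning, the mask is frozen and the gradients of
# pruned weights are zeroed (w.grad.mul_(mask)) so they stay at zero.
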
Finally, we also define the pruning era to be the epochs between the first and final epochs of pruning, as depicted in Figure 1b.\nWe evaluate the pruning mask after every training step until we reach the final epoch of pruning. After the final epoch, the pruned values in the network will remain zero for the rest of training; no new pruning will occur, and only the non-zero weights will be updated." }, { "heading": "2.1 PRUNING METHODOLOGY BY LAYER", "text": "Pruning the smallest magnitude weights in the entire network is inefficient because it involves sorting the weights over the network. Instead, we prune the smallest-magnitude weights, or sums of weights, within a certain locale of the network. When pruning, we examine each layer individually and apply a separate technique to evaluate which weights to prune, depending on the type of layer we are currently pruning." }, { "heading": "2.1.1 CONVOLUTIONAL LAYER PRUNING", "text": "Window pruning for 3×3 Convolutional Layers Figure 2a shows the result of a pruned 3×3 convolutional weight tensor under the window pruning method. In this scheme, window pruning refers to the pruning of weights within the 3×3 convolution kernels. We allow a maximum number of non-zero values for each kernel in the 3×3 convolutional layers and eliminate the weights of smallest magnitude.\nAlgorithm 1 CK Pruning Algorithm\ngenerate_ck_sparsity_mask(θlayer, sparsity_threshold): for θ in θlayer do\nfor all c in C do for all k in K do\nkernel_maxc,k = max(θc,k) end for cutoff_index = size(θc) ∗ sparsity_threshold n = max(cutoff_index, size(θc) − max_non_zero − 1) cutoff_value = nth largest value in kernel_maxc for all k in K do\nmaskc,k = 1 if kernel_maxc,k > cutoff_value, else 0 end for\nend for end for\nCK Pruning Methodology Figure 2b shows the result of a pruned 3×3 convolutional weight tensor under the CK pruning method. In this scheme, the weights of a certain layer can be viewed as a CK matrix of R×S kernels. The CK pruning method involves pruning the 3×3 convolutions along the channel and kernel dimensions of each convolutional filter, i.e., we prune whole kernels (a CK matrix of R×S windows) at once and can ultimately prune all the input channels in an output channel. As defined by Algorithm 1, we determine which filter to prune by examining the max of the magnitudes of all the weights in a kernel, which is the max of nine weights. This max is used to evaluate whether the whole kernel should be pruned or not.\nCombined Pruning Methodology To combine window and CK pruning, we introduce an intra-epoch combined pruning method, which we refer to hereafter as “intra-epoch pruning” or “intra”, for short. As shown by Algorithm 4 in the Appendix, in a given epoch we first apply window pruning to each 3×3 convolutional layer at a fraction of the sparsity threshold for that epoch. Then, we prune the remaining fraction of the sparsity threshold with CK pruning." }, { "heading": "2.1.2 FULLY CONNECTED PRUNING", "text": "Like pruning for convolutional layers, we apply a two-tier pruning scheme from Mao et al. (2017) for fully connected layers: micro-level pruning within a block and macro-level pruning that eliminates entire blocks.\nBlock FC Pruning Figure 2d refers to pruning of individual blocks. Here, we prune an entire n×n (n<5) window within the dense layer and create coarse-grained sparsity.
To do this, we sum the magnitude of the weights in each window and prune the windows with the smallest magnitude.\nFine FC Pruning Figure 2c refers to the pruning of individual weights. Here, we prune the individual weights in the entire FC layer, where we compare the magnitude of all the weights to each other.\nThe produced zero patterns in the last convolution layer allow for eliminating more weights in the fully connected layer, as depicted in Figure 3. If all the C windows for a specific Ki are zeros, the output activation for the corresponding Ki is also zero. The corresponding neurons in the following fully connected layer are therefore receiving zero input activations and can be eliminated along with their associated weights. This enables us to get sparsity without having to evaluate the weights in the fully connected layer.\nWhen pruning just the small weights in the FC layer, one can inadvertently cut off relevant connections between the input and output layers. Accordingly, we structure the pruning mechanism such that each output neuron should be influenced by the input. This means every column in the weight matrix of the fully connected layer in Figure 3 has at least one non-zero element." }, { "heading": "3 EXPERIMENTAL SETUP", "text": "To validate each type of pruning (window, CK, or intra-epoch), we selected ResNet50 (He et al., 2015) v1 and v1.5 with the ImageNet and/or Tiny-ImageNet (CS231N, 2015) datasets. We evaluated each pruning method by varying sparsity levels and the pruning era.\nWe experimented with ResNet50 v1.5, in addition to v1, to explore how changing the network structure would affect the top-1 accuracy. For window pruning, we tested with ResNet50v1 on Tiny-ImageNet as well as ResNet50v1 and v1.5 on ImageNet to compare the impact of strided convolutions on our sparse training. Also, we experimented with the learning rate schedule of the training regime. Our typical schedule for ResNet50v1.5 included learning rate drops at epochs 30, 60, and 90, but we experimented with placing the last drop at epoch 80 instead. Unlike typical ResNet50 training, which uses a batch size of 256 and starts the learning rate at 0.1, we used batch size 64 as this is what could fit in our GPUs. As suggested by Krizhevsky (2014), we scaled the starting learning rate by 1/√4 = 1/2 to 0.05 in order to compensate for the smaller batch size." }, { "heading": "3.1 SPARSE TRAINING EXPERIMENTS", "text": "For ResNet50v1 and Tiny-ImageNet, we did gradual pruning until epoch 10. We subsequently enforced the final sparsity requirement, set a maximum number of non-zero values in each window/kernel of each layer, and fixed this sparsity pattern for the rest of training. We chose the 10th epoch as the final epoch of pruning because we wanted to see if we could fix the sparsity mask early in the training process.\nFor ResNet50v1 and ImageNet, our goal was to start pruning as early as possible while maintaining high accuracy. We set our pruning era to epochs 0-30. Our hypothesis was that the 30th epoch would be a suitable epoch to stop pruning because this is where the learning rate is first decreased; in addition, there would be a large drop in accuracy if we stopped pruning at epoch 20. However, this schedule did not perform well for ResNet50v1.5 and ImageNet, and therefore we set our pruning era to 30-50.\nTo test training using CK and intra-epoch pruning, we mainly used ResNet50v1 and ResNet50v1.5 with ImageNet, but also performed CK pruning on ResNet50v1 and Tiny-ImageNet.
We adopted a similar approach to Han et al. (2015b) to train with CK or intra-epoch pruning by setting the first epoch of pruning to 20, 30, or 40 with a pruning era of 20 or 30 epochs. Then, we continued to train the sparsified network until the final epoch." }, { "heading": "3.2 ADVERSARIAL ROBUSTNESS", "text": "Since there was evidence that increasing sparsity lowers adversarial robustness (Wang et al., 2018), we evaluated this robustness in our models. To do so, we applied Fast Gradient Sign Method (FGSM) attacks, as defined in Goodfellow et al. (2014), on one of our sparse models to generate its own adversarial examples, and measured the validation accuracy again. We used the same validation set as ImageNet and applied the attack’s image transformation to each input image. Moreover, we experimented with a variety of different ε values in order to see how our accuracy decayed. Lastly, in our experiments we leveraged the examples provided in the PyTorch tutorials.1\n1https://pytorch.org/tutorials/beginner/fgsm_tutorial.html" }, { "heading": "4 RESULTS", "text": "" }, { "heading": "4.1 RESNET50 ON TINY-IMAGENET", "text": "From our experiments with Tiny-ImageNet (shown in the Appendix in Table 5), we see that even with up to 80% sparsity, both window and CK pruning are able to achieve levels of accuracy comparable to the dense baseline. CK pruning performs even better than the baseline." }, { "heading": "4.2 RESNET50 ON IMAGENET", "text": "Our ResNet50 v1.5 experiments (Table 1 and Appendix Figure 11) with the first epoch of pruning at epoch 30 show that all of our pruning methods are able to achieve over 73% accuracy, and we can achieve above 74% accuracy up to 70% sparsity. Table 2 shows that on ResNet50 v1, our methods achieve accuracy between 0.1% and 0.3% below the baseline.\nBy comparing the sparsity curves of the window, CK, and intra-epoch pruning runs in Figure 4 (top right), we observe that the sparsity of window pruning is not as smooth as the other methods. This is likely indicative of the more rigid structure of CK and intra-epoch pruning, which causes the degree of sparsity to be much more uniform from epoch to epoch.\nFigure 4 (top left, bottom right) also shows that on ResNet50v1.5, window pruning is slightly better than CK and intra-epoch pruning, which have similar performance, but window pruning is worse than the other two on ResNet50v1. Furthermore, starting the pruning era later improves performance (Figure 4, bottom left).\nTable 3 demonstrates that our sparsity mechanism can have a minimal drop in adversarial robustness (approximately 1-1.5%) compared to the dense baseline model, whereas other methods see more accuracy degradation (Wang et al., 2018).\nThe sparsity of each layer, depicted in Figure 5, emphasizes that early layers tolerate sparsity better, as they consistently show higher sparsity in the last 1×1 convolutional layer of each residual block. This may be due to their vicinity to the residual connection, which provides additional information to the layer." }, { "heading": "4.3 DISCUSSION", "text": "Overall, we notice that there is a tolerance for sparsity (up to 70%), which yields around 1% accuracy loss compared to the dense baseline. However, this loss can be compensated for by dropping the learning rate and performing another 10 epochs of training, which provides a 0.7-0.9% accuracy increase.
With high levels of sparsity, this extension is computationally cheap.\nWe observed that the early stages of dense training are important for high accuracy, as longer periods of dense training consistently outperformed shorter ones. Moreover, widening the pruning era slightly (10 epochs) improves the final convergence accuracy (by around 0.2%).\nWe also observed that pushing the learning rate drop schedule to earlier epochs or aligning it with the pruning era does not improve the final accuracy. However, pushing the last learning rate drop from epoch 90 to 80 can improve the accuracy by around 0.1% (see Appendix Table 8 and Table 1).\nWe postulate that window pruning performs worse for ResNet50v1.5 compared to ResNet50v1 due to the strided nature of convolutions in ResNet50v1.5." }, { "heading": "5 RELATED WORK", "text": "To set a broad stage for comparison, we extended Mostafa & Wang (2019)’s table on alternative sparsity mechanisms in Table 4 with respect to the characteristics of their mechanisms: training/compression focus, regularization, the period in which pruning is applied, strictness of parameter budget, and pruning granularity. We explain each of the columns below:\n1. Training Focus: Trying to train while maintaining/increasing sparsity of the network. The opposite is Compression Focus, i.e., methods that only seek to provide a smaller network for inference.\n2. Regularization: Applying a regularization value to the loss in order to find and prune irrelevant weights, while other methods use magnitude-based pruning.\n3. Pruning Era: The period during training in which the pruning is applied.\n4. Strictness of Parameter Budget wrt the Pruning Era: A strict parameter budget is fixed to the size of the final sparse model. Mostafa & Wang (2019) have a strict budget throughout training. Our method is only strict after the pruning era. Some networks do not have a strict parameter budget and only prune weights that appear to be irrelevant, without a sparsity target.\n5. Pruning Granularity: The level of granularity within the network at which values are pruned. For example, at the kernel level, we determine which values to prune by examining only the weights in the kernel (Mao et al., 2017). See Figure 2 for more information.\nWe chose these concepts because their characteristics can enable faster and lower-energy training. A strict parameter budget allows the hardware mapping to plan for a fixed number of multiply-accumulate operations. The granularity of the sparsity mechanism indicates how easy it is to adapt the mechanism to existing hardware. The coarser the granularity, the more adaptable it is to existing hardware (Mao et al., 2017). Regularization, although useful in forcing the network to learn prunable weights, adds more irregularity to the computation flow. The pruning era starting at the beginning of the training enables us to train with a compressed network.\nMao et al. (2017) explore pruning at a range of granularities, including window, kernel, and filter, and their effect on accuracy. They also qualitatively and quantitatively show that coarse-grain pruning, like kernel- or filter-level sparsity, is more energy-efficient due to fewer memory references. Similarly, our work surveys sparsity at the window, kernel, and filter levels. We improve on Mao et al.’s work in two ways. First, we show higher top-5 accuracy at higher sparsity levels on a complex benchmark, ImageNet on ResNet50 (92.338% at 40% CK sparsity), and we also show high top-1 accuracy, whereas Mao et al.
only report top-5.\nPruneTrain (Lym et al., 2019) explores a way to create sparse channels and even layers to speed up training with around a 1% drop in accuracy. However, this requires a shift in the training mechanism, including a regularization term that could affect how the mechanism scales to large and distributed settings and that must be computed throughout training. The resulting network is only around 50% sparse, and the accuracy loss due to sparse training is high enough that a baseline network of the same accuracy could achieve the same computational savings by simply terminating training at a much earlier epoch.\nIn contrast to other pruning mechanisms, our proposed window, CK, and combined sparsity mechanisms have strict parameter budgets after the pruning era. The CK and combined schemes have channel-level and kernel-level pruning granularities." }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "In this work, we introduced techniques to train CNNs with structured sparsity and studied the tradeoffs associated with various implementation options. We demonstrated on ResNet50 with the full ImageNet dataset that the proposed sparse training method outperforms all related work and is comparable to a dense model in terms of convergence accuracy. We also observed that delaying the start of enforced, gradual pruning to at least epoch 20 was necessary to reach high convergence accuracy, highlighting the importance of the early epochs of dense training. Moreover, performing an additional 10 epochs of training provides substantial (around 1%) accuracy gains for the final model. In the future, we would like to study the tradeoffs of sparse training on low-precision networks." }, { "heading": "7 APPENDIX", "text": "" }, { "heading": "7.1 DETAILS OF PRUNING ALGORITHMS", "text": "Here we provide full descriptions of our other pruning methods and our general sparse training methodology.\nSparse Training Methodology Algorithm 2 shows how we modify normal training in order to train sparsely.\nAlgorithm 2 Pruning Algorithm\ncurrent_iter = 0 while training do\nif current_iter > first epoch of pruning and current_iter < last epoch of pruning then mask = generate_sparsity_mask( θ, current_iter, sparsity_threshold ) end if θpruned = mask ⋂ θ ŷ = forward_pass( θpruned, x ) θ = weight_update( y, ŷ, θpruned ) current_iter = current_iter + 1\nend while\nWindow Pruning Methodology Algorithm 3 shows how we prune with window sparsity.\nAlgorithm 3 Window Pruning Algorithm\ngenerate_window_sparsity_mask(θlayer, sparsity_threshold): for θ in θlayer do\nfor all c in C do for all k in K do\ncutoff_index = size(θc,k) ∗ sparsity_threshold n = max(cutoff_index, size(θc,k) − max_non_zero − 1) cutoff_value = nth largest value in θc,k for all i,j in R,S do\nmaski,j,c,k = 1 if θi,j,c,k > cutoff_value, else 0 end for\nend for end for\nend for\nCombined Pruning Methodology To combine window and CK pruning, we introduce intra-epoch pruning. As shown by Algorithm 4, in a given epoch we first apply window pruning to each 3×3 convolutional layer at a fraction of the sparsity threshold for that epoch. Then, we prune the remaining fraction of the sparsity threshold with CK pruning. The idea is that kernels that lose many of their parameters during window pruning can be fully pruned during the CK pruning phase. Our intuition is that first pruning parameters within a kernel guides the subsequent CK pruning towards the less important kernels. Thus, we pick out better kernels to prune.
We also gain more structured sparsity but sacrifice the precision of window pruning.\nAlgorithm 4 Intra-Epoch Pruning Algorithm\ngenerate_intra_epoch_sparsity_mask(θlayer, sparsity_threshold): for θ in θlayer do\nwindow_mask = generate_window_sparsity_mask(θ, sparsity_threshold) ck_mask = generate_ck_sparsity_mask(θ, sparsity_threshold) mask = window_mask and ck_mask\nend for\nFor completeness, we also tried another method of combining, called inter-epoch pruning, which involved splitting the pruning era into CK pruning and window pruning phases. However, from our initial experiments we determined that intra-epoch pruning performed better (though it was more computationally expensive) than inter-epoch pruning. With inter-epoch pruning we were only able to achieve 74.1% top-1 accuracy with the first epoch of pruning set to 40 and a final sparsity of 40% on ResNet50v1.5 and ImageNet. The same setup trained with intra-epoch pruning achieved 74.9% accuracy. Thus, we pursued intra-epoch pruning as our method to combine the two sparsification methods." }, { "heading": "7.2 ADDITIONAL DETAILS ON EXPERIMENTAL SETUP", "text": "This section goes into more detail on the exact models and dataset combinations we used for experimentation." }, { "heading": "7.2.1 RESNET50 ON TINY-IMAGENET", "text": "For this training domain, we trained using the Tiny-ImageNet dataset (CS231N, 2015) with ResNet50 (He et al., 2015). However, we changed the training mechanism in order to validate our results. We train each network for 40 epochs with a batch size of 64. Additionally, we use the Adam optimizer with the learning rate set to 0.001 and momentum set to 0.9. We also use weight decay set to 0.0001, and we anneal the learning rate to 0.0001 after 30 epochs of training in order to converge faster. We apply the same image transforms as on full ImageNet.\nWe chose this optimization method because we felt that it achieved a good overall baseline accuracy and reproduces the results of the vanilla model in Sun (2016). We do not use the same preprocessing or image transforms as in Sun (2016). Moreover, we wanted a quick way to estimate how our method would perform on full ImageNet." }, { "heading": "7.2.2 RESNET50 ON IMAGENET", "text": "Here, we train each network for 90 epochs with a reduced batch size of 128 instead of 256, because 256 would not fit on a GPU in addition to our pruning layers. We found that by changing the batch size to 128 but retaining all other hyperparameters as specified in He et al. (2015), we were able to achieve the same benchmark 74.94% accuracy as the paper. We train for 90 epochs with SGD with momentum set to 0.9 and weight decay set to 1×10−4. We set the initial learning rate to 0.1 and then anneal the learning rate to 0.01 at epoch 30 and finally to 0.001 at epoch 60.\nFor dataset transformations, we perform the same transformations as in2. This means that during training we perform a random-sized crop to size 224x224, randomly flip the image horizontally, and normalize the image. The batches are shuffled during training. For validation, we resize the image to 256, then center crop to size 224x224, and then normalize." }, { "heading": "7.2.3 RESNET50V1.5 ON IMAGENET", "text": "We train our model for 90/100 epochs and use SGD with momentum (0.9) to optimize.
The standard recipe sets the learning rate to 0.1 for a batch size of 256, but since that did not fit on our GPUs with our sparsity mechanism, we used batch size 64 and scaled the learning rate to 0.05. We set the learning rate decay such that we multiply by 0.1 after 30, 60, and 90 epochs. We have weight decay set to 1×10−4. ResNet50v1.5 is defined in detail here3.\n2https://github.com/pytorch/examples/tree/master/imagenet\n3https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch" }, { "heading": "7.3 MISCELLANEOUS RESULTS", "text": "" }, { "heading": "7.3.1 RESNET50 ON TINY-IMAGENET", "text": "Our models actually perform better than the baseline with the following configurations: window pruning with 60% sparsity, as well as CK pruning with 20%, 60%, and 80% sparsity. The number of epochs required to converge to the final accuracies is the same for CK pruning and earlier for window pruning at 40% and 60% sparsity." } ]
2019
STARFIRE: REGULARIZATION FREE ADVERSARIALLY ROBUST STRUCTURED SPARSE TRAINING
SP:79a050f3b4f6466e5bee5533a7b018b2f200cb01
[ "This paper studies the accuracy vs model-size trade-off of quantized CNNs under different channel width multipliers. The authors demonstrated that while all-to-all convolution works well under low bit settings, depthwise conv needs a different sweet spot. The authors then proceed to use the insight to design quantized cnns that have two different schemes for depthwise and normal conv.", "The author studies the quantization strategy of CNNs in terms of Pareto Efficiency. Through a series of experiments with three standard CNN models (ResNet, VGG11, MobileNetV2), the authors demonstrated that lower precision value can be better than high precision values in term of Pareto efficiency under the iso-model size scenario. They also study cases with and without depth-wise convolution, and propose a new quantization method, DualPrecision. DualPrecision empirically outperformed 8-bit quantization and flexible quantization methods on ImageNet." ]
Weight quantization for deep convolutional neural networks (CNNs) has shown promising results in compressing and accelerating CNN-powered applications such as semantic segmentation, gesture recognition, and scene understanding. Prior art has shown that different datasets, tasks, and network architectures admit different iso-accurate precision values, which increase the complexity of efficient quantized neural network implementations from both hardware and software perspectives. In this work, we show that when the number of channels is allowed to vary such that networks of different precision values have the same model size, lower precision values outperform higher precision ones in a Pareto sense (accuracy vs. model size) for networks with standard convolutions. Relying on comprehensive empirical analyses, we find that the optimal precision value of a convolution layer depends on the number of input channels per output filters and provide theoretical insights for it. To this end, we develop a simple algorithm to select the precision values for CNNs that outperforms corresponding 8-bit quantized networks by 0.9% and 2.2% in top-1 accuracy on ImageNet for ResNet50 and MobileNetV2, respectively.
[ { "affiliations": [], "name": "QUANTIZED CNNS" } ]
[ { "authors": [ "Yoshua Bengio", "Nicholas Léonard", "Aaron Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,", "year": 2013 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "Proxylessnas: Direct neural architecture search on target task and hardware", "venue": "arXiv preprint arXiv:1812.00332,", "year": 2018 }, { "authors": [ "Ting-Wu Chin", "Ruizhou Ding", "Cha Zhang", "Diana Marculescu" ], "title": "Legr: Filter pruning via learned global ranking", "venue": "arXiv preprint arXiv:1904.12368,", "year": 2019 }, { "authors": [ "Jungwook Choi", "Pierce I-Jen Chuang", "Zhuo Wang", "Swagath Venkataramani", "Vijayalakshmi Srinivasan", "Kailash Gopalakrishnan" ], "title": "Bridging the accuracy gap for 2-bit quantized neural networks (qnn)", "venue": "arXiv preprint arXiv:1807.06964,", "year": 2018 }, { "authors": [ "Ruizhou Ding", "Ting-Wu Chin", "Zeye Liu", "Diana Marculescu" ], "title": "Regularizing activation distribution for training binarized deep networks", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Zhen Dong", "Zhewei Yao", "Amir Gholami", "Michael Mahoney", "Kurt Keutzer" ], "title": "Hawq: Hessian aware quantization of neural networks with mixed-precision", "venue": null, "year": 1905 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Trevor Gale", "Erich Elsen", "Sara Hooker", "Olivier Temam", "Scott Gray", "Jongsoo Park", "Cliff Young", "Utku Evci", "Niki Parmar", "Ashish Vaswani" ], "title": "Micronet challenge hosted at neurips 2019", "venue": "https: //micronet-challenge.github.io/,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Yihui He", "Xianggen Liu", "Huasong Zhong", "Yuchun Ma" ], "title": "Addressnet: Shift-based primitives for efficient convolutional neural networks", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2019 }, { "authors": [ "Lu Hou", "James T. Kwok" ], "title": "Loss-aware weight quantization of deep networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Gao Huang", "Shichen Liu", "Laurens van der Maaten", "Kilian Q Weinberger" ], "title": "Condensenet: An efficient densenet using learned group", "venue": "convolutions. group,", "year": 2017 }, { "authors": [ "Benoit Jacob", "Skirmantas Kligys", "Bo Chen", "Menglong Zhu", "Matthew Tang", "Andrew Howard", "Hartwig Adam", "Dmitry" ], "title": "Kalenichenko. 
Quantization and training of neural networks for efficient integer-arithmetic-only inference", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Sangil Jung", "Changyong Son", "Seohyung Lee", "Jinwoo Son", "Jae-Joon Han", "Youngjun Kwak", "Sung Ju Hwang", "Changkyu Choi" ], "title": "Learning to quantize deep networks by optimizing quantization intervals with task loss", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Hao Li", "Asim Kadav", "Igor Durdanovic", "Hanan Samet", "Hans Peter Graf" ], "title": "Pruning filters for efficient convnets", "venue": "International Conference on Learning Representation (ICLR),", "year": 2017 }, { "authors": [ "Christos Louizos", "Matthias Reisser", "Tijmen Blankevoort", "Efstratios Gavves", "Max Welling" ], "title": "Relaxed quantization for discretized neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Eldad Meller", "Alexander Finkelstein", "Uri Almog", "Mark Grobman" ], "title": "Same, same but different: Recovering neural network quantization error through weight factorization", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Asit Mishra", "Eriko Nurvitadhi", "Jeffrey J Cook", "Debbie Marr" ], "title": "WRPN: Wide reduced-precision networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Markus Nagel", "Mart van Baalen", "Tijmen Blankevoort", "Max Welling" ], "title": "Data-free quantization through weight equalization and bias correction", "venue": null, "year": 1906 }, { "authors": [ "Mohammad Rastegari", "Vicente Ordonez", "Joseph Redmon", "Ali Farhadi" ], "title": "Xnor-net: Imagenet classification using binary convolutional neural networks", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Tao Sheng", "Chen Feng", "Shaojie Zhuo", "Xiaopeng Zhang", "Liang Shen", "Mickey Aleksic" ], "title": "A quantization-friendly separable convolution for mobilenets", "venue": "1st Workshop on Energy Efficient Machine Learning and Cognitive Computing for Embedded Applications", "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Dimitrios Stamoulis", "Ting-Wu Rudy Chin", "Anand Krishnan Prakash", "Haocheng Fang", "Sribhuvan Sajja", "Mitchell Bognar", "Diana Marculescu" ], "title": "Designing adaptive neural networks for energyconstrained image classification", "venue": "In Proceedings of the International Conference on ComputerAided Design,", "year": 2018 }, { "authors": [ "Dimitrios Stamoulis", "Ruizhou Ding", "Di Wang", "Dimitrios Lymberopoulos", "Bodhi Priyantha", "Jie Liu", "Diana Marculescu" ], "title": "Single-path nas: Designing hardware-efficient convnets in less than 4 hours", "venue": null, "year": 1904 }, { "authors": [ "Mingxing Tan", "Bo Chen", "Ruoming Pang", "Vijay Vasudevan", "Quoc V Le" ], "title": "Mnasnet: Platformaware neural 
architecture search for mobile", "venue": "arXiv preprint arXiv:1807.11626,", "year": 2018 }, { "authors": [ "Lucas Theis", "Iryna Korshunova", "Alykhan Tejani", "Ferenc Huszár" ], "title": "Faster gaze prediction with dense networks and fisher pruning", "venue": "arXiv preprint arXiv:1801.05787,", "year": 2018 }, { "authors": [ "Kuan Wang", "Zhijian Liu", "Yujun Lin", "Ji Lin", "Song Han" ], "title": "Haq: Hardware-aware automated quantization with mixed precision", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Bichen Wu", "Alvin Wan", "Xiangyu Yue", "Peter Jin", "Sicheng Zhao", "Noah Golmant", "Amir Gholaminejad", "Joseph Gonzalez", "Kurt Keutzer" ], "title": "Shift: A zero flop, zero parameter alternative to spatial convolutions", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Bichen Wu", "Yanghan Wang", "Peizhao Zhang", "Yuandong Tian", "Peter Vajda", "Kurt Keutzer" ], "title": "Mixed precision quantization of convnets via differentiable neural architecture search", "venue": "arXiv preprint arXiv:1812.00090,", "year": 2018 }, { "authors": [ "Jianbo Ye", "Xin Lu", "Zhe Lin", "James Z Wang" ], "title": "Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers", "venue": "International Conference on Learning Representation (ICLR),", "year": 2018 }, { "authors": [ "Ruichi Yu", "Ang Li", "Chun-Fu Chen", "Jui-Hsin Lai", "Vlad I. Morariu", "Xintong Han", "Mingfei Gao", "Ching-Yung Lin", "Larry S. Davis" ], "title": "Nisp: Pruning networks using neuron importance score propagation", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Xin Yuan", "Liangliang Ren", "Jiwen Lu", "Jie Zhou" ], "title": "Enhanced bayesian compression via deep reinforcement learning", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Ritchie Zhao", "Yuwei Hu", "Jordan Dotzel", "Chris De Sa", "Zhiru Zhang" ], "title": "Improving neural network quantization without retraining using outlier channel splitting", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Ritchie Zhao", "Yuwei Hu", "Jordan Dotzel", "Christopher De Sa", "Zhiru Zhang" ], "title": "Building efficient deep neural networks with unitary group convolutions", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Aojun Zhou", "Anbang Yao", "Yiwen Guo", "Lin Xu", "Yurong Chen" ], "title": "Incremental network quantization: Towards lossless cnns with low-precision weights", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Shuchang Zhou", "Yuxin Wu", "Zekun Ni", "Xinyu Zhou", "He Wen", "Yuheng Zou" ], "title": "Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients", "venue": "arXiv preprint arXiv:1606.06160,", "year": 2016 }, { "authors": [ "Chenzhuo Zhu", "Song Han", "Huizi Mao", "William J Dally" ], "title": "Trained ternary quantization", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Zhuangwei Zhuang", "Mingkui Tan", "Bohan Zhuang", "Jing Liu", "Yong Guo", "Qingyao Wu", "Junzhou Huang", "Jinhui Zhu" ], "title": "Discrimination-aware channel pruning for deep 
neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent success of convolutional neural networks (CNNs) in computer vision applications such as image classification and semantic segmentation, have fueled many important applications in energyconstrained devices, e.g., virtual reality headsets, drones, and robots. As a result, improving the energy-efficiency of CNNs while maintaining their attractive features (e.g., accuracy for a task) has gained tremendous research momentum in recent years.\nAmong the efforts of improving CNNs’ efficiency, weight quantization was shown to be an effective technique (Zhou et al. (2016; 2017); Hou & Kwok (2018); Ding et al. (2019)). The majority of research efforts in quantization have been devoted to develop better quantization algorithms such that an iso-figure-of-merit (i.e., accuracy) is achieved with lowest possible weight precision value. Nevertheless, the iso-accurate precision value depends on the dataset, task, and network architecture of interest, which greatly increases the neural network implementation complexity from both hardware and software perspectives. For example, hardware and software implementations optimized for executing an 8 bit network are sub-optimal for executing a 4 bit network, and vice versa. The design optimization complexity further increases for recently proposed mixed-precision networks (Wang et al. (2019); Wu et al. (2018b); Dong et al. (2019)).\nThe key observation we have is that most prior literature in this space studies quantization for fixed network architectures, which is reasonable for evaluating the effectiveness of quantization algorithms, but unnecessary when considering the Pareto efficiency (accuracy vs. model size) of neural networks. In this work, we relax the restriction of fixing the network architecture and allow the number of channels of the CNN under consideration to vary. More concretely, we use the widthmultiplier1 (Howard et al. (2017)) as a tool to compare the performance of different weight precision values under the same model size.\nOverall, we systematically analyze the model size and accuracy trade-offs considering both weight precision values and the number of channels for various modern networks architectures (variants of\n1Width-multiplier grows or shrinks the number of channels across the layers with identical proportion for a certain network, e.g., grow the number of channels for all the layers by 2×.\nResNet, VGG, and MobileNet) and datasets (CIFAR and ImageNet) and have the following nontrivial and novel contributions:\n• We are the first to empirically show that when considering channel counts, lower precision weight values outperform higher precision weight values in a Pareto sense (accuracy vs. model size) for networks with standard convolutions. This is intriguing since it implies that scaling up (in terms of model size) along the channel count dimension is more effective for accuracy than the precision value dimension.\n• We are the first to show that the fan-in channel counts per output filter for a convolution layer determine the effectiveness of accuracy improvement when the model is scaled along the weight precision dimension and provide both theoretical and empirical reasoning for this.\n• We are the first to show that with a simple model scaling rule (the proposed DualPrecision), one can achieve a more accurate model (given the same model size) even compared to mixed-precision prior art that uses deep reinforcement learning to search for layer-wise weight precision values. 
Moreover, the results are validated on a large-scale dataset, i.e., ImageNet. This is a manifestation of our two previous findings.

The remainder of the paper is organized as follows. Section 2 discusses related work. Section 3 discusses the methodology used to discover our findings. Section 4 shows lower precision values are preferable for networks with standard convolutions. Section 5 discusses how fan-in channel count per output filter affects precision scaling for convolution layers. Section 6 discusses DualPrecision, our simple yet effective model scaling rule. Section 7 concludes the paper." }, { "heading": "2 RELATED WORK", "text": "Several techniques for improving the efficiency of CNNs have been proposed recently. For instance, pruning removes the redundant connections of a trained neural network (Zhuang et al. (2018); Ye et al. (2018); Theis et al. (2018); Li et al. (2017); Frankle & Carbin (2019); Chin et al. (2019); Yu et al. (2018)), neural architecture search (NAS) tunes the number of channels, size of kernels, and depth of a network (Tan et al. (2018); Stamoulis et al. (2019); Cai et al. (2018); Stamoulis et al. (2018)), and convolution operations can be made more efficient via depth-wise convolutions (Howard et al. (2017)), group convolutions (Huang et al. (2017); Zhao et al. (2019b)), and shift-based convolutions (He et al. (2019); Wu et al. (2018a)). In addition to the aforementioned techniques, network quantization introduces an opportunity for hardware-software co-design to achieve better efficiency for CNNs.

There are in general two directions for weight quantization in the prior literature: post-training quantization (Nagel et al. (2019); Meller et al. (2019); Zhao et al. (2019a); Sheng et al. (2018)) and quantization-aware training (Rastegari et al. (2016); Zhu et al. (2017); Jacob et al. (2018); Jung et al. (2019); Yuan et al. (2019); Hou & Kwok (2018); Choi et al. (2018)). The former assumes training data is not available when quantization is applied. While being fast and training-data-free, its performance is worse compared to quantization-aware training. In contrast, our work falls under the category of quantization-aware training.

In quantization-aware training, Rastegari et al. (2016) introduce binary neural networks, which lead to significant efficiency gains by replacing multiplications with XNOR operations, but at the expense of significant accuracy degradation. Later, Zhu et al. (2017) propose ternary quantization, and Zhou et al. (2016) and Jacob et al. (2018) bridge the gap between floating-point networks and binarized neural networks by introducing fixed-point quantization. Building upon prior art, the vast majority of existing work focuses on reducing the accuracy degradation by improving the training strategy (Zhou et al. (2017); Yang et al. (2019); Louizos et al. (2019); Ding et al. (2019)) and designing better quantization schemes (Jung et al. (2019); Wang et al. (2019); Yuan et al. (2019)). However, prior art studies quantization with the network architecture fixed, which may lead to sub-optimal precision decisions in terms of Pareto efficiency (model size vs. accuracy).

Related to our work, Mishra et al. (2018) have also considered the impact of channel count in quantization. In contrast, our work has the following novel features. First, we find that in CNNs with standard convolutions, lower precision values outperform higher ones in a Pareto sense.
Second, we find that the Pareto optimal precision value depends on the number of input channels per output filter, and we provide theoretical insights for it. Last, we propose an algorithm to select the precision values of a given network, which as a result outperforms 8 bit fixed-point and mixed-precision baselines." }, { "heading": "3 METHODOLOGY", "text": "We conduct all of our experiments on image classification datasets including CIFAR-100 and ImageNet. All the experiments are trained from scratch to ensure different precision values are trained equally long. While we do not start from a pre-trained model, we note that our baseline fixed-point models (i.e., 4 bit for CIFAR and 8 bit for ImageNet) achieve iso-accurate results compared to their floating-point counterparts. For all the experiments on CIFAR, we run the experiments three times and report the mean and standard deviation. The training hyper-parameters are detailed in Appendix A." }, { "heading": "3.1 QUANTIZATION", "text": "While our work focuses on weight quantization, we still quantize the activations since they are normally quantized for efficient deployment (Jacob et al. (2018)). For activation quantization, we use the technique proposed in prior art (Jacob et al. (2018)) and use 4 bit for CIFAR-100 and 8 bit for ImageNet experiments. We note that the precision value is chosen such that iso-accurate results can be achieved when compared to the floating-point baselines.

For weight quantization, we use a straight-through estimator (Bengio et al. (2013)) to conduct quantization-aware training. Specifically, for precision beyond 2 bit (b > 2), we use the following quantization function for weights during the forward pass:

$Q(W_{i,:}) = \left\lfloor \frac{\mathrm{clamp}(W_{i,:}, -a_i, a_i)}{s_i} \right\rceil \times s_i, \quad s_i = \frac{a_i}{2^{b-1} - 1}$,   (1)

where $\lfloor \cdot \rceil$ denotes the round-to-nearest-neighbor function, $W \in \mathbb{R}^{C_{out} \times d}$ with $d = C_{in} K_w K_h$ denotes the real-valued weights, $W_{i,:}$ is the weight vector of the $i$th output filter of a convolution layer that has $C_{in}$ channels and $K_w \times K_h$ kernel size, and $a \in \mathbb{R}^{C_{out}}$ denotes the vector of clipping factors, which are selected to minimize $\|Q(W_{i,:}) - W_{i,:}\|_2^2$ by assuming $W_{i,:} \sim \mathcal{N}(0, \sigma^2 I)$. More details about the determination of $a_i$ are in Appendix B.

For special cases such as 2 bit and 1 bit, we use schemes proposed in prior art. Specifically, let us first define:

$\overline{|W_{i,:}|} = \frac{1}{d} \sum_{j=1}^{d} |W_{i,j}|$.   (2)

For 2 bit precision, we follow trained ternary networks (Zhu et al. (2017)) and define the quantization function as follows:

$Q(W_{i,:}) = (\mathrm{sign}(W_{i,:}) \odot M_{i,:}) \times \overline{|W_{i,:}|}, \quad M_{i,j} = \begin{cases} 0, & |W_{i,j}| < 0.7\, \overline{|W_{i,:}|} \\ 1, & \text{otherwise.} \end{cases}$   (3)

For 1 bit precision, we follow DoReFa-Nets (Zhou et al. (2016)) and define the quantization function as follows:

$Q(W_{i,:}) = \mathrm{sign}(W_{i,:}) \times \overline{|W_{i,:}|}$.   (4)

For the backward pass for all the precision values, we use a straight-through estimator as in prior art to make the training differentiable. That is,

$\frac{\partial Q(W_{i,:})}{\partial W_{i,:}} = I$.   (5)

In the sequel, we quantize the first and last layers to 8 bit. They are fixed throughout the experiments. We note that it is a common practice to leave the first and the last layer un-quantized (Zhou et al. (2016)); however, we find that using 8 bit can achieve iso-accurate results." }, { "heading": "3.2 COST METRICS", "text": "To measure the cost of CNN models, we use the size of the model ($C_{size}$) defined as:

$C_{size} = \sum_{i=1}^{O} b(i)\, C_{in}(i)\, K_w(i)\, K_h(i)$,   (6)

where $O$ denotes the total number of filters, $b(i)$ denotes the precision for filter $i$, $C_{in}(i)$ denotes the number of channels for filter $i$, and $K_w(i)$ and $K_h(i)$ denote the kernel width and height for filter $i$.
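To make Equations (1)-(6) concrete, the following is a minimal NumPy sketch of the forward-pass weight quantizers and the model-size metric. It is an illustration, not the authors' implementation: the function names are ours, the constant relating the clipping factor $a_i$ to the mean absolute weight is a placeholder assumption (the paper fits this constant by simulation, Appendix B), and the straight-through backward pass of Eq. (5) is omitted since it only matters inside a training framework.

```python
import numpy as np

def quantize_weights(W, b):
    """Per-filter weight quantizer sketch following Eqs. (1)-(4).

    W: (C_out, d) real-valued weights, one row per output filter.
    b: weight precision in bits.
    Returns quantized weights (forward pass only; during training the
    backward pass would use the straight-through estimator of Eq. (5)).
    """
    mean_abs = np.mean(np.abs(W), axis=1, keepdims=True)   # Eq. (2)
    if b == 1:                                             # Eq. (4), DoReFa-style
        return np.sign(W) * mean_abs
    if b == 2:                                             # Eq. (3), ternary
        mask = (np.abs(W) >= 0.7 * mean_abs).astype(W.dtype)
        return np.sign(W) * mask * mean_abs
    # Eq. (1) for b > 2. The clipping factor a_i is taken here as a fixed
    # multiple of the mean absolute weight; the value 2.0 is only a
    # placeholder assumption (the paper fits it by simulation, Appendix B).
    a = 2.0 * mean_abs
    s = a / (2 ** (b - 1) - 1)
    return np.round(np.clip(W, -a, a) / s) * s

def model_size_bits(filters):
    """Eq. (6): total model size in bits.

    filters: list of (b, C_in, K_w, K_h) tuples, one per output filter.
    """
    return sum(b * c_in * kw * kh for (b, c_in, kw, kh) in filters)

# Example: a 3x3 conv with 16 input channels and 32 output filters at 4 bit.
W = np.random.randn(32, 16 * 3 * 3)
print(quantize_weights(W, 4).shape)              # (32, 144)
print(model_size_bits([(4, 16, 3, 3)] * 32))     # 18432 bits
```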
We choose model size as a metric because it is relevant to both the machine learning and systems communities. Specifically, model size is of interest to the machine learning community since it represents a proxy for model complexity. On the other hand, for the systems community, model size is related to latency and energy for CNNs whose memory accesses are dominated by weight fetches (e.g., the streaming inference scenario where inference is done with a single data instance per batch). We note that this metric is also adopted in the MicroNet Challenge (Gale et al. (2019)) held at NeurIPS 2019." }, { "heading": "4 DIFFERENT PRECISION VALUES HAVE DIFFERENT PARETO EFFICIENCY", "text": "To be precise in the following discussion, we define Pareto domination as follows:

Definition 4.1 (Pareto domination) When comparing precision values $A$ and $B$ for a network family $F$, we say $A$ Pareto dominates $B$ if $Acc(N(A, s)) > Acc(N(B, s))\ \forall s$, where $Acc$ evaluates the validation accuracy of a network and $N(A, s)$ uses the width-multiplier to find a network in $F$ that has precision value $A$ and model size $s$.

We study three kinds of commonly adopted CNNs, namely, ResNets with the Basic Block (He et al. (2016)), VGG (Simonyan & Zisserman (2014)), and MobileNetV2 (Sandler et al. (2018)). These networks differ in their convolution operations, connections, and filter counts. For ResNets, we explore the network from 20 layers up to 56 layers in steps of six layers. For VGG, we investigate VGG with eleven layers. Additionally, we also study MobileNetV2, which is a mobile-friendly network. We note that we modify the strides of the original MobileNetV2 to match the number of strides of ResNet for CIFAR. The architectures used are discussed in detail in Appendix C.

For CIFAR-100, we only study precision values below 4 bit since the latter can achieve iso-accurate results compared to its floating-point counterpart. Specifically, we consider 4 bit, 2 bit, and 1 bit precision values. To compare the Pareto efficiency of different precision values, we use the width-multiplier to align the model size among them. For example, one can make a 1-bit CNN 2× wider to align with the model size of a 4-bit CNN (increasing the width of a layer increases the number of output filters of that layer as well as the number of input channels of the subsequent layer, so the number of parameters and the number of operations grow approximately quadratically with the width-multiplier). For each network, we sweep the width-multiplier to consider points at multiple model sizes. As can be observed from Figure 1, across the three types of networks we study, there exists some precision value that Pareto dominates the others. For ResNets and VGG, it is 1 bit. In contrast, for MobileNetV2, it is 4 bit. The results for ResNets and VGG are particularly interesting, since we observe that the lower precision value Pareto dominates the higher precision ones. This implies that for networks such as ResNets and VGG, scaling the model along the channel dimension is always preferable in the accuracy-vs-size trade-off compared to scaling the model along the weight precision value dimension." }, { "heading": "5 THE OPTIMAL PRECISION VALUE DEPENDS ON THE NUMBER OF FAN-IN CHANNELS", "text": "With the empirical results from Section 4, we have learned that lower precision values are better for two of the networks we study but not for MobileNetV2, which has a reversed behavior. In this section, we are interested in identifying the underlying cause for this different trend. Through a
series of controlled experiments, we empirically identify that more channels per output filter lead to a lower optimal precision value. In addition, we provide theoretical insights behind this empirical result.

5.1 DEPTH-WISE CONVOLUTION

As can be observed in Figure 1, MobileNetV2 is a special case where higher precision values Pareto dominate lower ones. When comparing MobileNetV2 to the other two networks, there are many differences, including how convolutions are connected, how many convolution layers there are, how many filters are in each of them, and how many channels each convolution has. To narrow down which of these impacts the reversed trend, we first consider the inverted residual block, i.e., the basic component of MobileNetV2. To do so, we replace all basic blocks (two consecutive convolutions) of ResNet26 with inverted residual blocks. We refer to this new network as Inv-ResNet26. As shown in Figure 2, the Pareto efficiency trend of Inv-ResNet26 resembles that of MobileNetV2, and recall that in the case of ResNet26, lower precision values Pareto dominate higher ones. Thus, we can infer that the inverted residual block itself or its components are responsible for such a reversed trend.

Since an inverted residual block is composed of a point-wise convolution and a depth-wise separable convolution, we further consider the case of the depth-wise separable convolution (DWSConv). To identify whether DWSConv can cause the trend reversion, we use VGG11 as a starting point and gradually replace each of the convolutions with DWSConv. We note that replacing all convolutions with DWSConvs results in an architecture that resembles MobileNetV1 (Howard et al. (2017)). Specifically, we introduce three variants of VGG11 that have an increasing number of convolutions replaced by DWSConvs. Starting with the second layer, variant A has one layer replaced by a DWSConv, variant B has four layers replaced by DWSConvs, and variant C has all of the layers except for the first layer replaced by DWSConvs (the architectures are detailed in Appendix C).

As shown in Figure 3, as the number of DWSConvs increases (from variant A to variant C), the optimal precision value shifts from 1 bit to 4 bit, which implies that depth-wise separable convolutions or the layers within them are affecting the optimal precision value. To identify which of the layers of the DWSConv (i.e., the depth-wise convolution or the point-wise convolution) is more important in affecting the optimal precision value, we keep the precision value of depth-wise convolutions fixed at 4 bit and quantize the other layers. As shown in Figure 3d, the optimal curve shifts from 4 bit being the best back to 1 bit, with a similarly performing 2 bit. Thus, depth-wise convolutions appear to directly affect the optimal precision trends." }, { "heading": "5.2 SENSITIVITY ANALYSIS", "text": "In our setup, to obtain a lower precision network that has the same model size as a higher precision network, we follow two steps: (1) quantize the network weights to lower-precision values and (2) grow the network with the width-multiplier to the model size of the higher-precision one. The two steps introduce accuracy differences of $\Delta Acc_Q = Acc_{low} - Acc_{high}$ and $\Delta Acc_G = Acc_{low,grown} - Acc_{low}$, respectively.
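As a concrete illustration of the size-matching in step (2), the following sketch (our own, under the simplifying assumption that every layer's width scales and precisions are uniform, ignoring layers kept at fixed width and precision such as the first and last) derives the width-multiplier that equalizes model sizes between two precision values from the quadratic parameter growth noted above.

```python
import math

def matching_width_multiplier(b_high, b_low):
    """Width-multiplier that equalizes model size between two precisions.

    Since convolution parameter counts grow roughly quadratically with
    the width-multiplier (both C_in and C_out scale), a model quantized
    to b_low bits can be grown by sqrt(b_high / b_low) to match the size
    of a b_high-bit model at width 1.
    """
    return math.sqrt(b_high / b_low)

print(matching_width_multiplier(4, 1))  # 2.0  -> a 1-bit CNN made 2x wider
print(matching_width_multiplier(4, 2))  # ~1.41 for the 2-bit case
```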
Since depth-wise convolutions introduce a reversed trend in Pareto efficiency, which is the result of $\Delta Acc_Q + \Delta Acc_G$, the reason can potentially be that they are quantization-unfriendly, growing-unfriendly, or both.

To further diagnose why depth-wise convolutions have a reversed Pareto efficiency trend, we analyze the accuracy differences for networks with and without quantizing depth-wise convolutions, i.e., Figure 3c and Figure 3d. Specifically, we use width-multipliers of 1×, 1.25×, 1.5×, 1.75×, and 2× for the 4-bit variant C as the networks of higher precision. Thus, $\Delta Acc_Q$ is evaluated against the corresponding 1-bit quantized model, and $\Delta Acc_G$ is measured by comparing the 1-bit model and its 2× grown counterpart. As shown in Table 1, when quantizing depth-wise convolutions, $\Delta Acc_Q$ becomes more negative such that $\Delta Acc_Q + \Delta Acc_G < 0$. This implies that the main reason for the optimal precision value change is that depth-wise convolutions are quantization-unfriendly when going below 4 bit. We note that we expected quantizing the depth-wise convolutions to incur a smaller $\Delta Acc_Q$ compared to the no-quantization baseline simply because more layers are quantized. However, depth-wise convolutions only account for 2% of the model size yet incur on average nearly 4× more accuracy degradation when quantized. We note that Sheng et al. (2018) also find that depth-wise separable convolutions are quantization-unfriendly. However, their results are based on post-training layer-wise quantization. As mentioned in their work (Sheng et al. (2018)), the quantization challenges in their setting could be resolved by quantization-aware training, which is the scheme considered in this work. As a result, our finding is different and novel." }, { "heading": "5.3 QUANTIZATION AND DEPTH-WISE CONVOLUTIONS", "text": "Having uncovered that depth-wise convolutions introduce large accuracy degradation when weights are quantized below 4 bit, in this section we investigate depth-wise convolutions from a quantization perspective. When comparing depth-wise convolutions and standard convolutions in the context of quantization, they differ in the number of elements to be quantized, i.e., $C_{in} = 1$ for depth-wise convolutions and $C_{in} \gg 1$ for standard convolutions.

Why does the number of elements matter? In quantization-aware training, one needs to estimate some statistics of the vector to be quantized (i.e., $a$ in Equation 1 and $\overline{|w|}$ in Equations 3 and 4) based on the elements in the vector. The number of elements affects the robustness of the estimate, which in turn determines the quantized weights. More formally, we provide the following proposition.

Proposition 5.1 Let $w \in \mathbb{R}^d$ be the weight vector to be quantized, where each $w_i$ has distribution $\mathcal{N}(0, \sigma^2)$ without assuming samples are drawn independently, and $d = C_{in} K_w K_h$. If the average correlation of the weights is denoted by $\rho$, the variance of $\overline{|w|}$ can be written as follows:

$\mathrm{Var}(\overline{|w|}) = \frac{\sigma^2}{d} + \frac{(d-1)\rho\sigma^2}{d} - \frac{2\sigma^2}{\pi}$.   (7)

The proof is in Appendix D. This proposition states that, as the number of elements ($d$) increases, the variance of the estimate can be reduced through the first term. The second term depends on the correlation between weights. Since the weights might not be independent during training, the variance is also affected by their correlations.
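A quick numerical check of Proposition 5.1 can be sketched as follows. This is our illustration under simplifying assumptions: weights are drawn as equicorrelated Gaussians, where the equicorrelation parameter acts on the raw weights and is not identical to the $\rho$ of Eq. (7) (which is defined in Appendix D through $\mathbb{E}[|w_i||w_j|] = \rho\sigma^2$; for independent weights that definition gives $\rho = 2/\pi$, and Eq. (7) reduces to $\sigma^2(1 - 2/\pi)/d$, which the simulation should approximately reproduce).

```python
import numpy as np

def mean_abs_variance(d, rho=0.0, sigma=1.0, trials=20000, seed=0):
    """Monte-Carlo estimate of Var(mean(|w|)) for a length-d weight vector.

    Weights are drawn as equicorrelated Gaussians: Cov(w_i, w_j) =
    sigma^2 * rho for i != j. This is a simplifying assumption used only
    to illustrate the d-dependence predicted by Proposition 5.1.
    """
    rng = np.random.default_rng(seed)
    cov = sigma ** 2 * ((1.0 - rho) * np.eye(d) + rho * np.ones((d, d)))
    w = rng.multivariate_normal(np.zeros(d), cov, size=trials)
    return np.abs(w).mean(axis=1).var()

# With independent weights, Eq. (7) predicts sigma^2 * (1 - 2/pi) / d.
for d in [9, 36, 144]:  # e.g., 1x3x3; 4x3x3 or 1x6x6; 16x3x3 or 1x12x12
    print(d, mean_abs_variance(d), (1 - 2 / np.pi) / d)
```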
We empirically validate Proposition 5.1 by looking into the sample variance of $\overline{|w|}$ over the course of training (we treat the calculated $\overline{|w|}$ at each training step as a sample and compute the sample variance across training steps) for different $d$ values, obtained by growing $(K_w, K_h)$ or $C_{in}$. To do so, we consider the 0.5× VGG variant C and change the number of elements of the depth-wise convolutions. Since $d = C_{in} \times K_w \times K_h$ for a convolution layer, we consider the original depth-wise convolution, i.e., $d = 1 \times 3 \times 3$; increasing channels, with $d = 4 \times 3 \times 3$ and $d = 16 \times 3 \times 3$; and increasing kernel size, with $d = 1 \times 6 \times 6$ and $d = 1 \times 12 \times 12$. The numbers are selected such that growing the channel count yields the same $d$ as the corresponding larger kernel size.

In Figure 4, we analyze the layer-level sample variance by averaging it over all the filters in the same layer. First, we observe that one can reduce the variance by increasing the number of elements along both the channel and the kernel size dimensions. Second, we find that increasing the number of channels is more effective than increasing the kernel size in reducing the variance, which could be due to a different correlation of the weights, i.e., intra-channel weights have larger correlation than inter-channel weights.

However, lower variance might not necessarily imply lower quantization error for the quantized models. Thus, we conduct the $\Delta Acc$ analysis for different $d$ values. More specifically, we want to understand how $d$ affects the accuracy difference between lower precision (1 bit) and higher precision (4 bit) models ($\Delta Acc_Q$) and the accuracy difference between the lower precision (1 bit) model and its grown (2×) counterpart ($\Delta Acc_G$). As shown in Table 2, we empirically find that lower variance corresponds to a larger $\Delta Acc_Q$ (less degradation). On the other hand, when comparing channel counts and kernel sizes, we observe that increasing the number of channels is more effective than increasing the kernel size in reducing accuracy degradation (larger $\Delta Acc_Q$). Moreover, we find that increasing the kernel size reduces $\Delta Acc_G$ more than increasing the number of channels; this may be because a larger kernel is harder to optimize and the CIFAR dataset does not benefit from a larger receptive field. Indeed, from the last row of Table 2, we can observe that increasing the kernel size reduces the accuracy of the 4 bit models.

Overall, from the Pareto efficiency perspective, we are interested in $\Delta Acc$, which determines whether the lower precision can have better accuracy when grown to the same model size as the higher precision model. In this case, we find empirically that, as the number of channels per output filter increases, $\Delta Acc$ increases. This implies that higher fan-in channel counts per output filter benefit more from using lower weight precision values." }, { "heading": "6 DUALPRECISION: PRECISION SELECTION FOR CNNS", "text": "From the previous results, we find that the optimal precision value depends on the number of fan-in channels per output filter in a convolution layer, and as the number of fan-in channels grows, the optimal precision value becomes smaller. Together with the observation that convolution layers in modern CNNs, except for depth-wise convolutions, have many channels per filter, we propose DualPrecision, which uses one precision value (presumably higher) for depth-wise convolutions and another precision value (presumably lower) for the other convolution layers.
Once the precision values are found, we use width-multipliers to grow or shrink the network to the desired model size.

With this heuristic, the search space of precision selection becomes small enough that grid search is feasible, i.e., $|B| \times |B|$ for networks with depth-wise convolutions and $|B|$ otherwise, where $B$ denotes the set of considered precision values and is typically small, e.g., $B = \{1, 2, 4, 8\}$. We note that the search space for mixed precision (Wang et al. (2019); Wu et al. (2018b)) is $|B|^L$ with $L$ being the number of layers. In DualPrecision, one can explore the grid more efficiently by using heuristics that incorporate our findings. Specifically, we find that one precision value Pareto dominates the others, and as a result, one can compare precision values in the regime of low computational cost so as to train the networks faster.

We evaluate the proposed DualPrecision with ResNet50 and MobileNetV2 on the ImageNet dataset. Since we keep the precision of the first and last layers quantized at 8 bit, scaling them in terms of width would grow the number of parameters much more quickly than for other layers. As a result, we keep the number of channels for the first and last layers fixed for the ImageNet experiments. We first conduct grid search ($B = \{1, 2, 4, 8\}$) for ResNet50 and MobileNetV2 by scaling them down with width-multipliers so as to make the grid search faster. Once the optimal precision is decided, we use width-multipliers to traverse the trade-off curve. Specifically, we use the model size of the 0.25× 8-bit model to conduct the grid search for both networks. For ResNet50, there are only four precision values to be searched, while MobileNetV2 has 16 such values. The grid search results are shown in Appendix E.

Similar to our CIFAR experiments, we find that for networks with standard convolutions, i.e., ResNet50, the lower the precision value, the better the accuracy. Thus, the selected precision is 1 bit. On the other hand, for MobileNetV2, we find that 4 bit for standard convolutions and 4 bit for depth-wise convolutions perform the best. We consider two baselines to benchmark the proposed approach, including 8-bit fixed-point and mixed-precision networks (Wang et al. (2019)) with width-multipliers. For mixed-precision, we follow (Wang et al. (2019)) and use a reinforcement learning approach to search for iso-accurate networks (iso-accurate compared to the 8-bit fixed-point models). Then, we use width-multipliers on top of the searched network to obtain models of different sizes. We consider networks of three sizes, i.e., the sizes of the 0.25×, 0.5×, and 1× 8-bit fixed-point models. As shown in Table 3, our proposed simple heuristic outperforms both baselines by a significant margin for both networks considered." }, { "heading": "7 CONCLUSION", "text": "In this work, we discuss the Pareto efficiency of quantized convolutional neural networks (CNNs). We find that a lower weight precision value produces a more accurate network than a higher weight precision one when the model size is aligned using a width-multiplier (i.e., growing or shrinking the number of channels proportionally) for CNNs with standard convolutions. Furthermore, from both theoretical and empirical analyses, we find that the fan-in channel count per output filter of a convolution layer determines the optimal precision value for that layer, which explains our observed phenomenon that depth-wise convolutions are less quantization-friendly compared to their standard counterparts. Based on our findings, we propose DualPrecision, a simple yet effective heuristic for precision selection for a given network. We show empirically that, when applied on ImageNet, DualPrecision outperforms the 8-bit fixed-point baseline and prior art in mixed-precision by a significant margin." }
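To make the DualPrecision selection procedure of Section 6 concrete, here is a minimal sketch of the grid search. The `train_and_eval` callback is a placeholder assumption standing in for a full training pipeline (e.g., training each candidate at the size of the 0.25× 8-bit model, with width-multipliers chosen so that all candidates have equal model size, and returning validation accuracy).

```python
import itertools

def dual_precision_grid(train_and_eval, has_depthwise, B=(1, 2, 4, 8)):
    """Grid search sketch for DualPrecision.

    The search space is |B| x |B| (pairs of standard-conv and depth-wise
    precision) for networks with depth-wise convolutions and |B|
    otherwise, versus |B|^L for layer-wise mixed precision.
    """
    if has_depthwise:
        candidates = list(itertools.product(B, B))   # (b_std, b_dw) pairs
    else:
        candidates = [(b, None) for b in B]          # one shared precision
    return max(candidates, key=lambda c: train_and_eval(*c))

# Toy usage with a fake evaluator that prefers 4/4 (mimicking MobileNetV2):
fake = lambda b_std, b_dw: -abs(b_std - 4) - abs(b_dw - 4)
print(dual_precision_grid(fake, has_depthwise=True))  # (4, 4)
```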
, { "heading": "A TRAINING HYPER-PARAMETERS", "text": "For CIFAR, we use a learning rate of 0.05, cosine learning rate decay, linear learning rate warmup (from 0 to 0.05) over 5 epochs, a batch size of 128, 300 total training epochs, a weight decay of 5e-4, and the SGD optimizer with Nesterov acceleration and 0.9 momentum.

For ImageNet, we use hyper-parameters identical to CIFAR except for the following: a batch size of 256, 120 total epochs for MobileNetV2 and 90 for ResNets, a weight decay of 4e-5, and 0.1 label smoothing." }, { "heading": "B CLIPPING POINT FOR QUANTIZATION-AWARE TRAINING", "text": "As mentioned earlier, $a \in \mathbb{R}^{C_{out}}$ denotes the vector of clipping factors, which is selected to minimize $\|Q(W_{i,:}) - W_{i,:}\|_2^2$ by assuming $W_{i,:} \sim \mathcal{N}(0, \sigma^2 I)$. More specifically, we run simulations for weights drawn from a zero-mean Gaussian distribution with several variances and identify the best $a_i^* = \arg\min_{a_i} \|Q_{a_i}(W_{i,:}) - W_{i,:}\|_2^2$ empirically. According to our simulations, we find that one can infer $a_i$ from the sample mean $\overline{|W_{i,:}|}$, as shown in Figure 5. As a result, for the different precision values considered, we find $c = \overline{|W_{i,:}|} / a_i^*$ via simulation and use the obtained $c$ to calculate $a_i$ on-the-fly throughout training." }, { "heading": "C NETWORK ARCHITECTURES", "text": "For the experiments in Section 4, the ResNets used are detailed in Table 4. Specifically, for the points in Figure 1a, we consider ResNet20 to ResNet56 with width-multipliers of 0.5×, 1×, 1.5×, and 2× for the 4-bit case. Based on these values, we consider additional width-multipliers of 2.4× and 2.8× for the 2-bit case and 2.5×, 3×, 3.5×, and 3.9× for the 1-bit case. We note that the right-most point in Figure 1a is a 10× ResNet26 for the 4-bit case. On the other hand, VGG11 is detailed in Table 6, for which we consider width-multipliers from 0.25× to 2× in steps of 0.25 for the 4-bit case (blue dots in Figure 1b). The architecture of MobileNetV2 used in the CIFAR-100 experiments follows the original MobileNetV2 (Table 2 in Sandler et al. (2018)), but we change the stride of all the bottleneck blocks to 1 except for the fifth bottleneck block, which has a stride of 2. As a result, we down-sample the image twice in total, which resembles the ResNet design for the CIFAR experiments (He et al. (2016)). Similar to VGG11, we consider width-multipliers from 0.25× to 2× in steps of 0.25 for MobileNetV2 for the 4-bit case (blue dots in Figure 1c)." }, { "heading": "D PROOF FOR PROPOSITION 5.1", "text": "Based on the definition of variance, we have:

$\mathrm{Var}\Big(\frac{1}{d}\sum_{i=1}^{d}|w_i|\Big) := \mathbb{E}\Big[\Big(\frac{1}{d}\sum_{i=1}^{d}|w_i|\Big)^2\Big] - \Big(\mathbb{E}\Big[\frac{1}{d}\sum_{i=1}^{d}|w_i|\Big]\Big)^2 = \mathbb{E}\Big[\Big(\frac{1}{d}\sum_{i=1}^{d}|w_i|\Big)^2\Big] - \frac{2\sigma^2}{\pi} = \frac{1}{d^2}\,\mathbb{E}\Big[\Big(\sum_{i=1}^{d}|w_i|\Big)^2\Big] - \frac{2\sigma^2}{\pi} = \frac{\sigma^2}{d} + \frac{d-1}{d}\rho\sigma^2 - \frac{2\sigma^2}{\pi}.$" }, { "heading": "E GRID SEARCH ON IMAGENET", "text": "From Table 7, we can observe a trend similar to the CIFAR-100 experiments, i.e., for networks without depth-wise convolutions, the lower the precision the better, and for networks with depth-wise convolutions, there are sweet spots for the depth-wise convolutions and the other convolutions. Specifically, the final precision value selected for MobileNetV2 is 4 bit for both depth-wise convolutions and standard convolutions.
On the other hand, the selected precision value for ResNet50 is 1 bit." }, { "heading": "F MEMORY FOOTPRINT FOR INFERENCE", "text": "We calculate and report the memory footprint needed for the proposed DualPrecision models and the baseline 8-bit models to perform inference with a single image per batch. Specifically, the memory footprint of inference equals the largest input feature map plus the largest output feature map plus the weight sizes for the entire network. As shown in Figure 6, DualPrecision outperforms the baseline. That is, considering the streaming inference setting (a single image per batch), DualPrecision requires less memory to achieve equally accurate results compared to the 8-bit models." } ]
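The Appendix F footprint metric can be expressed as a short helper. The sketch below is ours and assumption-level: the field names and the uniform activation precision are illustrative choices, not taken from the paper.

```python
def inference_memory_bytes(layers, act_bits=8):
    """Sketch of the Appendix F memory-footprint metric.

    layers: list of dicts with 'in_elems' and 'out_elems' (per-layer
    feature-map element counts) and 'weight_bits' (total weight storage
    in bits for that layer). Footprint = largest input feature map +
    largest output feature map + all weights, assuming single-image
    (streaming) inference.
    """
    act_bytes = act_bits / 8
    max_in = max(l['in_elems'] for l in layers) * act_bytes
    max_out = max(l['out_elems'] for l in layers) * act_bytes
    weights = sum(l['weight_bits'] for l in layers) / 8
    return max_in + max_out + weights
```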
2019
null
SP:624274b6944826b6f9597298b290ae50566d6e5c
[ "The authors propose a local label propagation approach for large-scale semi-supervised learning. The approach learns a representation that tries to minimize a combination of the cross-entropy loss on the labeled data and a negative inner-product-based likelihood between the propagated pseudo-label and other examples with the same true label. The pseudo-labels on the unlabeled data are then calculated with a weighted k-NN scheme, where the weights take a heuristic correction of a soft similarity. Some further computational speedup is done with a memory cache described in an earlier work (Wu 2018b). Experimental results seem significantly superior to the competitors. The design choices are mostly justified with ablation studies.", "The paper introduces an approach for semi-supervised learning based on local label propagation. The idea is to leverage the geometric structure in the embedding space, such that data near to each other in the embedding space should have the same labels. The labels of the K-nearest labeled examples are weighted to form the propagated pseudo label of the data point. And the objective aims to match the propagated pseudo label and the predicted label from the classification model. An extra term is added to the objective to force data points with similar pseudo labels to get close to each other in the embedding space. The local propagation strategy makes the method scalable compared to similar methods in the literature. The method is tested on different experimental setups and show superior performance than the state of the art baselines. " ]
A significant issue in training deep neural networks to solve supervised learning tasks is the need for large numbers of labeled datapoints. The goal of semi-supervised learning is to leverage ubiquitous unlabeled data, together with small quantities of labeled data, to achieve high task performance. Though substantial recent progress has been made in developing semi-supervised algorithms that are effective for comparatively small datasets, many of these techniques do not scale readily to the large (unlabeled) datasets characteristic of real-world applications. In this paper we introduce a novel approach to scalable semi-supervised learning, called Local Label Propagation (LLP). Extending ideas from recent work on unsupervised embedding learning, LLP first embeds datapoints, labeled and otherwise, in a common latent space using a deep neural network. It then propagates pseudo-labels from known to unknown datapoints in a manner that depends on the local geometry of the embedding, taking into account both inter-point distance and local data density as a weighting on propagation likelihood. The parameters of the deep embedding are then trained to simultaneously maximize pseudo-label categorization performance as well as a metric of the clustering of datapoints within each pseudo-label group, iteratively alternating stages of network training and label propagation. We illustrate the utility of the LLP method on the ImageNet dataset, achieving results that outperform previous state-of-the-art scalable semi-supervised learning algorithms by large margins, consistently across a wide variety of training regimes. We also show that the feature representation learned with LLP transfers well to scene recognition in the Places 205 dataset.
[]
[ { "authors": [ "Alexis Conneau", "Holger Schwenk", "Loıc Barrault", "Yann" ], "title": "Lecun. Very deep convolutional networks for natural language processing", "venue": "arXiv preprint arXiv:1606.01781,", "year": 2016 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": null, "year": 2009 }, { "authors": [ "Li Deng", "Geoffrey Hinton", "Brian Kingsbury" ], "title": "New types of deep neural network learning for speech recognition and related applications: An overview", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing,", "year": 2013 }, { "authors": [ "Yves Grandvalet", "Yoshua Bengio" ], "title": "Semi-supervised learning by entropy minimization", "venue": "In Advances in neural information processing systems,", "year": 2005 }, { "authors": [ "Awni Hannun", "Carl Case", "Jared Casper", "Bryan Catanzaro", "Greg Diamos", "Erich Elsen", "Ryan Prenger", "Sanjeev Satheesh", "Shubho Sengupta", "Adam Coates" ], "title": "Deep speech: Scaling up end-to-end speech recognition", "venue": "arXiv preprint arXiv:1412.5567,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Kaiming He", "Georgia Gkioxari", "Piotr Dollár", "Ross Girshick" ], "title": "Mask r-cnn", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Geoffrey Hinton", "Li Deng", "Dong Yu", "George Dahl", "Abdel-rahman Mohamed", "Navdeep Jaitly", "Andrew Senior", "Vincent Vanhoucke", "Patrick Nguyen", "Brian Kingsbury" ], "title": "Deep neural networks for acoustic modeling in speech recognition", "venue": "IEEE Signal processing magazine,", "year": 2012 }, { "authors": [ "Julia Hirschberg", "Christopher D Manning" ], "title": "Advances in natural language processing", "venue": null, "year": 2015 }, { "authors": [ "Ahmet Iscen", "Giorgos Tolias", "Yannis Avrithis", "Ondrej Chum" ], "title": "Label propagation for deep semi-supervised learning", "venue": "arXiv preprint arXiv:1904.04717,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Ankit Kumar", "Ozan Irsoy", "Peter Ondruska", "Mohit Iyyer", "James Bradbury", "Ishaan Gulrajani", "Victor Zhong", "Romain Paulus", "Richard Socher" ], "title": "Ask me anything: Dynamic memory networks for natural language processing", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Dong-Hyun Lee" ], "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks", "venue": "In Workshop on Challenges in Representation Learning, ICML,", "year": 2013 }, { "authors": [ "Bin Liu", "Zhirong Wu", "Han Hu", "Stephen Lin" ], "title": "Deep metric transfer for label propagation with limited annotated data", "venue": "arXiv preprint arXiv:1812.08781,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Shin Ishii", "Masanori Koyama" ], "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Kuniaki Noda", "Yuki Yamaguchi", "Kazuhiro Nakadai", "Hiroshi G Okuno", "Tetsuya Ogata" ], "title": "Audio-visual speech recognition using deep learning", "venue": "Applied Intelligence,", "year": 2015 }, { "authors": [ "Siyuan Qiao", "Wei Shen", "Zhishuai Zhang", "Bo Wang", "Alan Yuille" ], "title": "Deep co-training for semi-supervised image recognition", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Antti Tarvainen", "Harri Valpola" ], "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Bart Thomee", "David A Shamma", "Gerald Friedland", "Benjamin Elizalde", "Karl Ni", "Douglas Poland", "Damian Borth", "Li-Jia Li" ], "title": "Yfcc100m: The new data in multimedia research", "venue": "arXiv preprint arXiv:1503.01817,", "year": 2015 }, { "authors": [ "Zhirong Wu", "Alexei A Efros", "Stella X Yu" ], "title": "Improving generalization via scalable neighborhood component analysis", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella X Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via non-parametric instance discrimination", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Qizhe Xie", "Zihang Dai", "Eduard Hovy", "Minh-Thang Luong", "Quoc V Le" ], "title": "Unsupervised data augmentation for consistency training", "venue": null, "year": 1904 }, { "authors": [ "I Zeki Yalniz", "Hervé Jégou", "Kan Chen", "Manohar Paluri", "Dhruv Mahajan" ], "title": "Billion-scale semi-supervised learning for image classification", "venue": null, "year": 1905 }, { "authors": [ "Tom Young", "Devamanyu Hazarika", "Soujanya Poria", "Erik Cambria" ], "title": "Recent trends in deep learning based natural language processing", "venue": "ieee Computational intelligenCe magazine,", "year": 2018 }, { "authors": [ "Xiaohua Zhai", "Avital Oliver", "Alexander Kolesnikov", 
"Lucas Beyer" ], "title": "S4l: Self-supervised semi-supervised learning", "venue": "arXiv preprint arXiv:1905.03670,", "year": 2019 }, { "authors": [ "Bolei Zhou", "Agata Lapedriza", "Jianxiong Xiao", "Antonio Torralba", "Aude Oliva" ], "title": "Learning deep features for scene recognition using places database", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Dengyong Zhou", "Olivier Bousquet", "Thomas N Lal", "Jason Weston", "Bernhard Schölkopf" ], "title": "Learning with local and global consistency", "venue": "In Advances in neural information processing systems,", "year": 2004 }, { "authors": [ "Xiaojin Zhu", "Zoubin Ghahramani" ], "title": "Learning from labeled and unlabeled data with label propagation", "venue": "Technical report, Citeseer,", "year": 2002 }, { "authors": [ "Chengxu Zhuang", "Alex Lin Zhai", "Daniel Yamins" ], "title": "Local aggregation for unsupervised learning of visual embeddings", "venue": "arXiv preprint arXiv:1903.12355,", "year": 2019 }, { "authors": [ "Yalniz" ], "title": "A more thorough parameter search on P and K may lead to even better results than reported here. For the fine-tuning process, we take the fully trained models and restart the LLP training with learning rate 0.003", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) have achieved impressive performance on tasks across a variety of domains, including vision (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2016a; 2017), speech recognition (Hinton et al., 2012; Hannun et al., 2014; Deng et al., 2013; Noda et al., 2015), and natural language processing (Young et al., 2018; Hirschberg & Manning, 2015; Conneau et al., 2016; Kumar et al., 2016). However, these achievements often heavily rely on large-scale labeled datasets, requiring burdensome and expensive annotation efforts. This problem is especially acute in specialized domains such as medical image processing, where annotation may involve performing an invasive process on patients.\nSemi-supervised learning (SSL) seeks to learn useful representations from limited amounts of labeled data, leveraging it in conjunction with extensive unlabeled data. SSL has shown significant promise (Liu et al., 2018; Iscen et al., 2019; Zhai et al., 2019; Miyato et al., 2018; Tarvainen & Valpola, 2017; Lee, 2013; Grandvalet & Bengio, 2005; Qiao et al., 2018; Xie et al., 2019). However, gaps to supervised performance levels still remain significant, and many recent SSL methods rely on techniques whose efficiency scales poorly with dataset size and thus cannot be readily applied to many real-world machine learning problems (Liu et al., 2018; Iscen et al., 2019).\nHere, we propose a novel SSL algorithm that is specifically adapted for use with large sparsely-labeled datasets. This algorithm, termed Local Label Propagation (LLP), learns a nonlinear embedding of the input data, and exploits the local geometric structure of the latent embedding space to help infer useful pseudo-labels for unlabeled datapoints. LLP borrows the framework of non-parametric embedding learning, which has recently shown utility in unsupervised learning (Wu et al., 2018b;\nZhuang et al., 2019), to first train a deep neural network that embeds labeled and unlabeled examples into a lower-dimensional latent space. LLP then propagates labels from known examples to unknown datapoints, weighting the likelihood of propagation by the local density of known examples. The neural network is then optimized to categorize all datapoints according to their pseudo-labels (with stronger emphasis on true known labels), while simultaneously encouraging datapoints sharing the same (pseudo-)labels to aggregate in the latent embedding space. The resulting embedding thus gathers both labeled images within the same class and unlabeled images sharing statistical similarities with the labeled ones. Through iteratively applying the propagation and network training steps, the LLP algorithm builds a good underlying representation for supporting downstream tasks, and trains an accurate classifier for the specific desired task.\nWe apply LLP in the context of object categorization in the ImageNet dataset (Deng et al., 2009), learning a high-performing network while discarding most of the labels. We show that LLP substantially outperforms previous scalable semi-supervised algorithms (Zhai et al., 2019; Miyato et al., 2018; Tarvainen & Valpola, 2017; Lee, 2013; Grandvalet & Bengio, 2005; Qiao et al., 2018) across a wide variety of training regimes, and that LLP-trained features support improved transfer to Places205, a large-scale scene-recognition task. We also present analyses that provide insights into the learning procedure and justification of key parameter choices." 
}, { "heading": "2 RELATED WORK", "text": "Below we describe conceptual relationships between our work and recent related approaches, and identify relevant major alternatives for comparison.\nDeep Label Propagation. Like LLP, Deep Label Propagation (Iscen et al., 2019) (DLP) also iterates between steps of label propagation and neural network optimization. In contrast to LLP, the DLP label propagation scheme is based on computing pairwise similarity matrices of learned visual features across all (unlabeled) examples. Unlike in LLP, the DLP loss function is simply classification with respect to pseudo-labels, without any additional aggregation terms ensuring that the pseudolabeled and true-labeled points have similar statistical structure. The DLP method is effective on comparatively small datasets, such as CIFAR10 and Mini-ImageNet. However, DLP is challenging to apply to large-scale datasets such as ImageNet, since its label propagation method is O(N2) in the number N of datapoints, and is not readily parallelizable. In contrast, LLP is O(NM), where M is the number of labeled datapoints, and is easily parallelized, making its effective complexity O(NM/P ), where P is the number of parallel processes. In addition, DLP uniformly propagates labels across the embedding space, while LLP’s use of local density-driven propagation weights specifically exploits the geometric structure in the space, improving pseudo-label inference.\nDeep Metric Transfer and Pseudolabels. The Deep Metric Transfer (Liu et al., 2018) (DMT) and Pseudolabels (Lee, 2013) methods both use non-iterative two-stage procedures. In the first stage, the representation is initialized either with a self-supervised task such as non-parametric instance recognition (DMT), or via direct supervision on the known labels (Pseudolabels). In the second stage, pseudo-labels are obtained either by applying a label propagation algorithm (DMT) or naively from the pre-trained classifier (Pseudolabels), and these are then used to fine-tune the network. As in DLP, the label propagation algorithm used by DMT cannot be applied to large-scale datasets, and does not specifically exploit local statistical features of the learned representation. While more scalable, the Pseudolabels approach achieves comparatively poor results. A key point of contrast between LLP and the two-stage methods is that in LLP, the representation learning and label propagation processes interact via the iterative training process, an important driver of LLP’s improvements.\nSelf-Supervised Semi-Supervised Learning. Self-Supervised Semi-Supervised Learning (Zhai et al., 2019) (S4L) co-trains a network using self-supervised methods on unlabeled images and traditional classification loss on labeled images. Unlike LLP, S4L simply “copies” self-supervised learning tasks as parallel co-training loss branches. In contrast, LLP involves a nontrivial interaction between known and unknown labels via label propagation and the combination of categorization and aggregation losses, both factors that are important for improved performance.\nConsistency-based regularization. Several recent semi-supervised methods rely on dataconsistency regularizations. Virtual Adversarial Training (VAT) (Miyato et al., 2018) adds small input perturbations, requiring outputs to be robust to this perturbation. Mean Teacher (MT) (Tarvainen &\nValpola, 2017) requires the learned representation to be similar to its exponential moving average during training. 
Deep Co-Training (DCT) (Qiao et al., 2018) requires the outputs of two views of the same image to be similar, while ensuring outputs vary widely using adversarial pairs. Very recently, Unsupervised Data Augmentation (UDA) (Xie et al., 2019) achieves state-of-the-art performance by incorporating a substantially more complex data augmentation scheme into the data-consistency framework, and employing computationally expensive but practically impactful details such as the use of a very large batch size during optimization. These methods all use unlabeled data in a "point-wise" fashion, applying the proposed consistency metric separately to each example. They thus differ significantly from (and are thus likely complementary to) LLP, or indeed any method that explicitly relates unlabeled to labeled points. LLP benefits from training a shared embedding space that aggregates statistically similar unlabeled datapoints together with labeled (putative) counterparts. As a result, adding more unlabeled images consistently increases the performance of LLP, unlike for (e.g.) MT." }, { "heading": "3 METHODS", "text": "We first give an overview of the LLP method. At a high level, LLP learns a model $f_\theta(\cdot)$ from labeled examples $X_L = \{x_1, \ldots, x_M\}$, their associated labels $Y_L = \{y_1, \ldots, y_M\}$, and unlabeled examples $X_U = \{x_{M+1}, \ldots, x_N\}$. $f_\theta(\cdot)$ is realized via a deep neural network whose parameters $\theta$ are network weights. For each input $x$, $f_\theta(x)$ generates two outputs (Fig. 1): an "embedding output", realized as a vector $v$ on a $D$-dimensional unit sphere, and a category prediction output $\hat{y}$. In learning $f_\theta(\cdot)$, the LLP procedure repeatedly alternates between two steps: label propagation and representation learning. First, known labels $Y_L$ are propagated from $X_L$ to $X_U$, creating pseudo-labels $Y_U = \{y_{M+1}, \ldots, y_N\}$. Then, network parameters $\theta$ are updated to minimize a loss function balancing category prediction accuracy evaluated on the $\hat{y}$ outputs, and a metric of statistical consistency evaluated on the $v$ outputs.

In addition to pseudo-labels, the label propagation step also generates [0,1]-valued confidence scores $c_i$ for each example $x_i$. For labeled points, confidence scores $C_L = \{c_1, \ldots, c_M\}$ are automatically set to 1, while for pseudo-labeled examples, confidence scores $C_U = \{c_{M+1}, c_{M+2}, \ldots, c_N\}$ are computed from the local geometric structure of the embedded points, reflecting how close the embedding vectors of the pseudo-labeled points are to those of their putative labeled counterparts. The confidence values are then used as loss weights during representation learning.

Representation Learning. Assume that datapoints $X = X_U \cup X_L$, labels and pseudo-labels $Y = Y_U \cup Y_L$, and confidences $C = C_U \cup C_L$ are given. Let $V = \{v_1, \ldots, v_N\}$ denote the set of corresponding embedded vectors, and $\hat{Y} = \{\hat{y}_1, \ldots, \hat{y}_N\}$ denote the set of corresponding category prediction outputs. In the representation learning step, we update the network parameters by simultaneously minimizing the standard cross-entropy loss $L_C(Y, \hat{Y})$ between predicted and propagated pseudo-labels and a global aggregation loss $L_A(V|Y)$ that enforces overall consistency between known labels and pseudo-labels.

The definition of $L_A(V|Y)$ is based on the non-parametric softmax operation proposed in (Wu et al., 2018b;a). Specifically, we define the joint probability of any two embedding vectors $v_i$ and $v_j$ as:

$P(v_i, v_j) = \frac{\exp(v_i^T v_j / \tau)}{Z}, \quad Z = \sum_{k=1}^{N} \sum_{l=1}^{N} \exp(v_k^T v_l / \tau)$,   (1)

where the temperature $\tau \in (0, 1]$ is a fixed hyperparameter.
Using this definition, we can obtain the probability of $v_i$ and the conditional probability of $v_j$ given $v_i$ as:

$P(v_i) = \sum_{j=1}^{N} P(v_i, v_j) = \frac{\sum_{j=1}^{N} \exp(v_i^T v_j / \tau)}{Z}, \quad P(v_j | v_i) = \frac{P(v_i, v_j)}{P(v_i)} = \frac{\exp(v_i^T v_j / \tau)}{\sum_{l=1}^{N} \exp(v_i^T v_l / \tau)}$.   (2)

Additionally, for $S \subset X$, its probability given $v_i$ is $P(S | v_i) = \sum_{v_j \in S} P(v_j | v_i)$. We then define the aggregation metric as the (negative) log likelihood of the examples whose pseudo-labels are also $y$, the pseudo-label of the current example $v$: $L_A(v) = -\log(P(A | v))$, where $A = \{x_i \,|\, y_i = y\}$. Optimizing $L_A(v)$ encourages the embedding corresponding to a given datapoint to selectively become close to the embeddings of other datapoints with the same pseudo-label (Fig. 1).

The cross-entropy and aggregation loss terms are scaled on a per-example basis by the confidence score, and an L2 weight regularization penalty is added. Thus, the final loss for example $x$ is: $L(x | \theta) = c \cdot [L_C(y, \hat{y}) + L_A(v)] + \lambda \|\theta\|_2^2$, where $\lambda$ is a regularization hyperparameter.

Label Propagation. To understand how LLP generates pseudo-labels $Y_U$ and confidence scores $C_U$, it is useful to start from the weighted K-Nearest-Neighbor classification algorithm (Wu et al., 2018b), in which a "vote" is obtained from the top $K$ nearest labeled examples of each unlabeled example $x$, denoted $N_K(x)$. The vote of each $x_i \in N_K(x)$ is weighted by the corresponding probability $P(v_i | v)$. Assuming $S$ classes, the total weight for pseudo-labeling $v$ as class $j$ is thus:

$w_j(v) = \sum_{i \in I(j)} P(v_i | v)$, where $I(j) = \{i \,|\, x_i \in N_K(v),\ y_i = j\}$.   (3)

Therefore, the probability $p_j(v)$ that datapoint $x$ is of class $j$, the associated inferred pseudo-label $y$, and the corresponding confidence $c$ may be defined as: $p_j(v) = w_j(v) / \sum_{k=1}^{S} w_k(v)$, $y = \arg\max_j p_j(v)$, and $c = p_y(v)$. Although intuitive, weighted-KNN introduces a positive correlation between the local density of a labeled example and the number of unlabeled examples whose pseudo-labels are propagated from this labeled example. Moreover, if the labeled examples within one category have higher densities than those of other categories, more unlabeled examples will be pseudo-labeled as this category. To avoid this correlation, we additionally penalize labeled examples with higher densities by dividing the KNN weight of a labeled example by its local density. To formalize this penalization, we replace $P(v_i | v)$ in the definition of $w_j(v)$ with a local density-weighted probability:

$P^L(v_i | v) = P(v_i | v) / \rho(v_i)$, where $\rho(v_i) = \sum_{j \in N_T(v_i)} P(v_j, v_i)$,   (4)

where $N_T(v_i)$ contains the $T$ nearest unlabeled examples of $v_i$ and the denominator $\rho(v_i)$ is a measure of the local embedding density. For consistency, we replace $N_K(v)$ with $N_K^L(v)$, which contains the $K$ labeled neighbors with the highest locally-weighted probability, to ensure that the votes come from the most relevant labeled examples. The final form of the LLP propagation weight equation is thus:

$w_j(v) = \sum_{i \in I(j)} \frac{P(v_i | v)}{\sum_{k \in N_T(v_i)} P(v_k, v_i)}$, where $I(j) = \{i \,|\, i \in N_K^L(v),\ y_i = j\}$.   (5)

The intuition behind the local density weighting idea is further quantitatively explored in §5.

Memory Bank. Both described steps implicitly require access to all the embedded vectors $V$ at every step. However, recomputing $V$ becomes intractable for larger datasets. We address this issue by approximating the real-time $V$ with a memory bank $\bar{V}$ that keeps a running average of the embeddings. As this procedure is taken from Wu et al. (2018b), we refer readers there for a detailed description."
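A minimal NumPy sketch of the core computations in Eqs. (2)-(5) follows. It is illustrative only: the temperature value 0.07 is an assumption borrowed from Wu et al. (2018b) rather than stated here; unnormalized exponentiated similarities are used where the normalizing constants are shared across labeled examples and cancel in the final class probabilities; and the memory bank, confidence-weighted losses, and mini-batch machinery of the full method are omitted.

```python
import numpy as np

def conditional_probs(v, V, tau=0.07):
    """P(v_j | v) over embeddings V (Eq. 2). v: (D,), V: (N, D), unit-norm."""
    logits = V @ v / tau
    logits -= logits.max()          # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def aggregation_loss(v, V, labels, y):
    """L_A(v) = -log P(A | v), where A contains same-(pseudo-)label examples."""
    p = conditional_probs(v, V)
    return -np.log(p[labels == y].sum() + 1e-12)

def propagate_label(v, V_lab, y_lab, V_unlab, K=10, T=25, tau=0.07):
    """Density-corrected label propagation for one unlabeled embedding (Eqs. 4-5).

    Returns (pseudo_label, confidence). The missing normalizing constants
    of the unnormalized similarities below are shared across labeled
    examples, so rankings and the final normalized confidences are
    unaffected.
    """
    sim_lab = np.exp(V_lab @ v / tau)              # ~ P(v_i | v), up to a constant
    sim_joint = np.exp(V_unlab @ V_lab.T / tau)    # ~ P(v_j, v_i), up to a constant
    # rho(v_i): sum over the T nearest unlabeled examples of each labeled example.
    rho = np.partition(sim_joint, -T, axis=0)[-T:, :].sum(axis=0)
    weights = sim_lab / rho                        # density-corrected vote weights
    top = np.argsort(weights)[-K:]                 # N_K^L(v): K best labeled neighbors
    classes = np.unique(y_lab)
    w = np.array([weights[top][y_lab[top] == c].sum() for c in classes])
    probs = w / w.sum()
    j = int(probs.argmax())
    return classes[j], probs[j]

# Toy usage with random 128-D embeddings on the unit sphere:
rng = np.random.default_rng(0)
unit = lambda X: X / np.linalg.norm(X, axis=-1, keepdims=True)
V_lab = unit(rng.standard_normal((50, 128)))
V_unlab = unit(rng.standard_normal((200, 128)))
y_lab = rng.integers(0, 5, size=50)
print(propagate_label(V_unlab[0], V_lab, y_lab, V_unlab))
```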
}, { "heading": "4 RESULTS", "text": "We first evaluate the LLP method on visual object categorization in the large-scale ImageNet dataset (Deng et al., 2009), under a variety of training regimes. We also illustrate transfer learning to Places 205 (Zhou et al., 2014), a large-scale scene-recognition dataset.\nExperimental settings. We follow Wu et al. (2018b) for most of our hyperparameters and optimization settings. In the label propagation step, we set K = 10 and T = 25 (these choices are justified in Section 5). We use ResNet-18v2 and ResNet-50v2 (He et al., 2016b). We find a “rate-jump” phase beneficial: after the initial schedule for the learning rate, increasing and dropping it again improves the performance. We think this is because a larger learning rate is needed to leverage the better embedding space in the later training, especially given that the performance usually has big jumps after dropping the learning rate and the dropped learning rate is already too small to exploit the improved quality. To clearly show the effect of this phase, we list the performance after applying it using “LLP + RJ” in Table 1-2. We also apply this phase to supervised learning and Mean Teacher and their performances are not changed. More details are in Appendix A. We train on ImageNet with p% labels and q% total images available, meaning that M ∼ p%× 1.2M, N ∼ q%× 1.2M. Different regimes are defined by p ∈ {1, 3, 5, 10} and q ∈ {30, 70, 100}. Results are shown in Tables 1-3. Due to the inconsistency of reporting metrics across different papers, we alternate between comparing top1 and top5, depending on which was reported in the relevant previous work.\nThe results show that: 1. LLP significantly outperforms previous state-of-the-art methods by large margins within all training regimes tested, regardless of network architectures, p, and q. 2. When compared to the very recent UDA approach (Xie et al., 2019) for ResNet-50 with p = 10 and q = 100, LLP achieves better top5 (89.55 v.s. 88.52, LLP v.s. UDA) and top1 (70.85 v.s. 68.66). Moreover, LLP has the potential to achieve even higher performance if trained with the complex preprocessing pipeline and large batch size used in UDA (15360 for UDA vs 64 for LLP here, chosen due to computational resource limitations), as these details have been shown to meaningfully improve performance; 3. LLP shows especially large improvements to other methods when p is small. For example, ResNet-18 trained using LLP with p = 1 surpasses MT by 16.64% in top1 and our ResNet-50 with p = 1 surpasses S4L by 18.83 in top5 (UDA does not report for less than 10% labels).\nLeveraging additional unlabeled images. To examine how good LLP is at using unlabeled images, we first vary the value of q while p remains at 10. As shown in Table 3, LLP consistently benefits from additional unlabelled images and is not yet saturated using the whole ImageNet, unlike Mean Teacher, where the number of unlabelled images appears essentially irrelevant.\nTo further assess how LLP might behave in noisier real-world settings, we additionally performed a preliminary exploration using the YFCC100M (Thomee et al., 2015) dataset as a source of augmenta-\n1 For S4L, we list their S4L-Rotation performance, which is their best reported performance using ResNet50. Note that although a model with higher performance is reported by S4L, that model uses a much more complex architecture than ResNet-50.\ntion. 
However, because YFCC100M is drawn from a very different distribution than ImageNet, we select a subset of images that are more similar to ImageNet using the pipeline proposed by Yalniz et al. (2019). This pipeline is applied to two randomly chosen subsets of YFCC100M with 5M and 10M images, respectively, creating two selected training subsets of roughly 480K images each, denoted FAR and NEAR, which differ in how close they are to the ImageNet distribution. We then combine each selected set with ImageNet at $p = 10$ and $q = 100$ and train LLP. After this training, we fine-tuned the model with only ImageNet data using LLP, following the procedure in Yalniz et al. (2019). Please refer to Appendix B for more details of the selection and fine-tuning processes.

The results in Table 3 show that even with this very preliminary attempt, LLP achieves a 2.02% performance improvement with augmentation of images chosen from the NEAR set, compared to the 61.51% baseline (though the fine-tuning process likely accounts for part of this improvement). Almost certainly, such gains would be substantially greater if a larger number of images from a better-matched distribution were available and a better network were used for selecting the images (Yalniz et al., 2019). Unfortunately, a direct comparison between LLP and Yalniz et al. (2019) is not presently possible, as their ResNet-18 result uses the entire labeled ImageNet and all of YFCC100M to select matched augmentation images, both of which require significantly more computational resources than are available to us. However, it is worth noting that LLP benefits even from selecting the matched dataset from 10M YFCC images, while Yalniz et al. (2019) needs more than 20M images to achieve a similar gain.

Transfer learning to Scene Recognition. To evaluate the quality of our learned representation in downstream tasks other than ImageNet classification, we assess its transfer learning performance on the Places205 (Zhou et al., 2014) dataset. This dataset has 2.45M images in 205 distinct scene categories. We fix the nonlinear weights learned on ImageNet, add a linear readout layer on top of the penultimate layer, and train the readout with cross-entropy loss using SGD as above. Please refer to Appendix C for other details. We only evaluate our ResNet-50 trained with $p \in \{1, 10\}$, as Zhai et al. (2019) reported performance in this setting. Table 4 shows that LLP again significantly outperforms previous state-of-the-art results. It is notable that when trained with $p = 1$, LLP even shows slightly better transfer performance on Places205 than the Local Aggregation (LA) method, the current state-of-the-art unsupervised learning method (Zhuang et al., 2019)." }, { "heading": "5 ANALYSES", "text": "Emerging clusters during training. Intuitively, the aggregation term $L_A(v)$ should cause embedding outputs with the same label, whether known or propagated, to cluster together during training. Fig. 2a shows clustering becoming more pronounced along the training trajectory both for labelled and unlabelled datapoints, while unlabelled datapoints surround labelled datapoints increasingly densely. A simple metric measuring the aggregation of a group of embedding vectors is the L2 norm of the group mean, which, since all embeddings lie on the 128-D unit sphere, is inversely related to the group dispersion (a minimal sketch of this score follows below).
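As a concrete illustration of this aggregation score, here is a minimal NumPy sketch; all names are ours:

```python
import numpy as np

def aggregation_scores(V, y, num_classes):
    """Per-category aggregation: the L2 norm of the mean embedding.

    V: (N, D) embeddings on the unit sphere; y: (N,) class labels.
    A score near 1 means the class is tightly clustered; near 0, dispersed.
    """
    scores = np.zeros(num_classes)
    for c in range(num_classes):
        members = V[y == c]
        if len(members) > 0:
            scores[c] = np.linalg.norm(members.mean(axis=0))
    return scores

# Averaging the scores over classes at successive checkpoints traces an
# aggregation curve over the training timecourse (cf. Fig. 2b).
```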
Computing this metric for each category and averaging across categories, we obtain a quantitative description of aggregation over the learning timecourse (Fig. 2b), further supporting the conclusion that LLP embeddings become increasingly clustered.

[Figure 2a panels: embedding snapshots at epochs 20, 40, 100, 230, and 280.]

We also investigate how network architecture influences the learning trajectory and the final representation, comparing ResNet-50 and ResNet-18 trained with 10% labels (Fig. 2b-d). The more powerful ResNet-50 achieves a more clustered representation than ResNet-18, across timepoints and categories.

Category structure analysis: successes, failures, and sub-category discovery. It is instructive to systematically analyze statistical patterns on a per-category basis. To do this, we visualize the embeddings for three representative categories with 2D multi-dimensional scaling (MDS). For an "easy" category with a high aggregation score (Fig. 2e), the LLP embedding identifies images with strong semantic similarity, supporting successful image retrieval. For a "hard" category with a low score (Fig. 2f), image statistics vary much more and the embedding fails to properly cluster examples together. Most interestingly, for multi-modal categories with intermediate scores (Fig. 2g), the embedding can reconstruct semantically meaningful sub-clusters even when these are not present in the labelling, e.g. the "labrador" category decomposing into "black" and "yellow" subcategories.

Comparison to global propagation in the small-dataset regime. To understand how LLP compares to methods that use global similarity information, but therefore lack scalability to large datasets, we test several such methods on ImageNet subsets (see Appendix D for details). Table 5 shows that LLP can be effective even in this regime, as it is comparable to the global propagation algorithm used in DMT (Liu et al., 2018) and only slightly lower than DLP (Iscen et al., 2019).

Ablation studies. To illustrate the importance of key design choices in LLP, we conduct a series of ablation studies exploring the following alternatives, using: 1. different values of $K$ (experiments Top50, Top20, and Top5 in Table 6); 2. confidence weighting, or not (NoC); 3. density-weighted probability, or not (NoDW). Table 6 shows the contributions of each design choice, indicating that both confidence weighting and density weighting lead to significant performance gains, across architectures.

Understanding density weighting. To better explain why the density weighting method is useful, we compute two measures for each class $i$: the average density of its labeled examples, denoted $D_i$, and the number of unlabeled examples pseudo-labeled as $i$, denoted $Q_i$. Formally, these are defined by $D_i = \frac{1}{|L_i|} \sum_{j \in L_i} Z\rho(v_j)$, where $L_i = \{j \mid x_j \in X_L,\ y_j = i\}$, and $Q_i = |\{j \mid x_j \in X_U,\ y_j = i\}|$ (a small sketch computing these diagnostics follows below). Fig. 3a illustrates the strong positive correlation between $D_i$ and $Q_i$ for the unweighted NoDW model, which leads to the imbalanced distribution of $Q_i$ shown in Fig. 3c. After applying density weighting, $D_i$ and $Q_i$ become decorrelated (Fig. 3b), creating an empirically accurate, balanced pseudo-label class distribution throughout optimization (Fig. 3d). Another potential method to enforce an empirically correct $Q_i$ distribution would be to reweight the KNN coefficients to directly reflect the empirical label ratio, replacing $P^L(v_i \mid v)$ in Eq. (4) with $P^R(v_i \mid v) = P(v_i \mid v) \times \frac{|L_{y_i}|/M}{Q_{y_i}/(N-M)}$. However, this simple "ratio-based" scheme does not explicitly address the local correlation of $Q_i$ and $D_i$.
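Below is a small sketch, with our own naming, of the per-class diagnostics $D_i$ and $Q_i$; it reuses the density measure from the label-propagation step and drops the constant $Z$, which rescales all $D_i$ equally and so does not affect the correlation analysis.

```python
import numpy as np

def density_label_stats(V_l, y_l, V_u, pseudo, num_classes, tau=0.07, T=25):
    """Per-class D_i (mean local density of labeled examples of class i) and
    Q_i (number of points pseudo-labeled as i), as analysed in Fig. 3."""
    aff = np.exp(V_l @ V_u.T / tau)                    # (M, U) affinities
    rho = np.sort(aff, axis=1)[:, -T:].sum(axis=1)     # local density per labeled point
    D = np.array([rho[y_l == i].mean() for i in range(num_classes)])
    Q = np.bincount(pseudo, minlength=num_classes)
    return D, Q

# A strong positive np.corrcoef(D, Q)[0, 1] reproduces the imbalance of
# Fig. 3a/3c; density weighting should drive it toward zero (Fig. 3b/3d).
```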
Indeed, an experiment with $P^R(v_i \mid v)$ in place of $P^L(v_i \mid v)$ using ResNet-18 on 10% labeled ImageNet only achieves 59.4% top1, substantially worse than LLP, further supporting the effectiveness of the more sophisticated local density weighting approach." }, { "heading": "6 CONCLUSION", "text": "In this work, we presented LLP, a method for semi-supervised deep neural network training that efficiently propagates labels from known to unknown examples in a common embedding space, ensuring high-quality propagation by exploiting the local structure of the embedding. The embedding itself is simultaneously co-trained to achieve high categorization performance while enforcing statistical consistency between real and pseudo-labels. LLP achieves state-of-the-art semi-supervised learning results across all tested training regimes, including those with very small amounts of labelled data, and transfers effectively to other non-trained tasks.

In future work, we seek to improve LLP by better integrating it with state-of-the-art unsupervised learning methods (e.g. Zhuang et al. (2019)). This is especially relevant in the regime with very low fractions of known labelled datapoints (e.g. <1% of ImageNet labels), where the best unsupervised methods outperform state-of-the-art semi-supervised methods. Combining LLP with the very distinct point-wise methods in MT or UDA is also of interest, as would be the effective use of larger computational resources to enable conceptually simple but practically important optimization details, such as the significantly larger batch sizes used in UDA. In addition, in its current formulation, LLP may be less effective on small datasets than alternatives that exploit global similarity structure (e.g. Iscen et al. (2019); Liu et al. (2018)). We thus hope to improve upon LLP by identifying methods of label propagation that can take advantage of global structure while remaining scalable." }, { "heading": "C PLACES205 EXPERIMENT DETAILS", "text": "The learning rate is initialized at 0.01 and dropped by a factor of 10 whenever validation performance on Places205 saturates. Training requires approximately 500,000 steps, comprising two learning rate drops." }, { "heading": "D PROPAGATION IN THE SMALL-DATASET REGIME", "text": "ImageNet subsets are constructed by randomly sampling 50 categories from ImageNet and 50 images from each category. For each category selected, we choose 5 images to be labelled. For all methods, we use the embedding outputs of our trained ResNet-50 with $p = 10$ as data features." } ]
2019
null
SP:dbf67fa98a71f8c3b7b62e9b5695ced62bcb730d
[ "Through the lens of Distributional Robust Risk (DRR), this work draws a link between adversarial robustness and Lipschitz constant regularisation. The authors first provide an upper bound of the DRR (with a Wasserstein ball as the ambiguity set) in terms of the true risk and the Lipschitz constant of the loss function under the current model. They show that the standard adversarial risk can be upper bounded by the DRR, emphasizing that the Lipschitz constant regularised loss can be used as a proxy for adversarially robust training.", "This paper uses results from distributional robustness to provide bounds of p-norm-constrained adversarial risk which depend on the Lipschitz constant of the underlying classifier. The bulk of the paper focuses on sample-efficient mechanisms to approximate the Lipschitz constants of kernel methods so that a constraint on this Lip constant can be enforced during training. Empirically, the kernel methods are compared to existing deep learning approaches and are shown to be competitive at this scale." ]
Distributional robust risk (DRR) minimisation has arisen as a flexible and effective framework for machine learning. Approximate solutions based on dualisation have become particularly favorable in addressing the semi-infinite optimisation, and they also provide a certificate of robustness for the worst-case population loss. However, existing methods are restricted to either linear models or very small perturbations, and cannot find the globally optimal solution for restricted nonlinear models such as kernel machines. In this paper we resolve these limitations for a general class of kernel spaces, and our approach is based on a new upper bound of DRRs using an empirical risk regularised by the Lipschitz constant of the model, e.g., deep neural networks and kernel methods. As an application, we show that it also provides a certificate for adversarial training, and that global solutions can be achieved on product kernel machines in polynomial time.
[ { "affiliations": [], "name": "LIPSCHITZ REGULARISATION" } ]
[ { "authors": [ "Cem Anil", "James Lucas", "Roger Grosse" ], "title": "Sorting out Lipschitz function approximation", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Armin Askari", "Alexandre d’ Aspremont", "Laurent El Ghaoui" ], "title": "Naive feature selection: Sparsity in naive bayes", "venue": null, "year": 2019 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Jean-Pierre Aubin", "Ivar Ekeland" ], "title": "Estimates of the duality gap in nonconvex optimization", "venue": "Mathematics of Operations Research,", "year": 1976 }, { "authors": [ "A.R. Barron" ], "title": "Approximation and estimation bounds for artificial neural networks", "venue": "Machine Learning,", "year": 1994 }, { "authors": [ "Aharon Ben-Tal", "Dick den Hertog", "Anja De Waegenaere", "Bertrand Melenberg", "Gijs Rennen" ], "title": "Robust solutions of optimization problems affected by uncertain probabilities", "venue": "Management Science,", "year": 2013 }, { "authors": [ "C. Bhattacharyya", "K.S. Pannagadatta", "A.J. Smola" ], "title": "A second order cone programming formulation for classifying missing data", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2005 }, { "authors": [ "Alberto Bietti", "Julien Mairal" ], "title": "Group invariance, stability to deformations, and complexity of deep convolutional representations", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Alberto Bietti", "Gregoire Mialon", "Dexiong Chen", "Julien Mairal" ], "title": "A kernel perspective for regularizing deep neural networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Jose Blanchet", "Karthyek Murthy" ], "title": "Quantifying distributional model risk via optimal transport", "venue": "Mathematics of Operations Research,", "year": 2019 }, { "authors": [ "Jose Blanchet", "Yang Kang", "Karthyek Murthy" ], "title": "Robust Wasserstein profile inference and applications to machine learning", "venue": null, "year": 2016 }, { "authors": [ "Jose Blanchet", "Yang Kang", "Fan Zhang", "Karthyek Murthy" ], "title": "Data-driven optimal transport cost selection for distributionally robust optimization", "venue": null, "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Moustapha Cisse", "Piotr Bojanowski", "Edouard Grave", "Yann Dauphin", "Nicolas Usunier" ], "title": "Parseval networks: Improving robustness to adversarial examples", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Koby Crammer", "Yoram Singer" ], "title": "On the algorithmic implementation of multiclass kernel-based vector machines", "venue": "Journal of machine learning research,", "year": 2001 }, { "authors": [ "Zac Cranko", "Aditya Menon", "Richard Nock", "Cheng Soon Ong", "Zhan Shi", "Christian Walder" ], "title": "Monge blunts Bayes: Hardness results for adversarial training", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Erick Delage", "Yinyu Ye" ], "title": "Distributionally 
robust optimization under moment uncertainty with application to data-driven problems", "venue": "Operations Research,", "year": 2010 }, { "authors": [ "P. Drineas", "M. Mahoney" ], "title": "On the nystr om method for approximating a gram matrix for improved kernel-based learning", "venue": null, "year": 2005 }, { "authors": [ "John Duchi", "Peter Glynn", "Hongseok Namkoong" ], "title": "Statistics of robust optimization: A generalized empirical likelihood approach", "venue": null, "year": 2016 }, { "authors": [ "Gregory E Fasshauer" ], "title": "Positive definite kernels: Past, present and future", "venue": "Dolomite Res. Notes Approx.,", "year": 2011 }, { "authors": [ "Laurent El Ghaoui", "Hervé Lebret" ], "title": "Robust solutions to least-squares problems with uncertain data", "venue": "SIAM J. Matrix Anal. Appl.,", "year": 1997 }, { "authors": [ "Peyman Mohajerin Esfahani", "Daniel Kuhn" ], "title": "Data-driven distributionally robust optimization using the Wasserstein metric: Performance guarantees and tractable reformulations", "venue": "Mathematical Programming,", "year": 2018 }, { "authors": [ "Farzan Farnia", "David Tse" ], "title": "A minimax approach to supervised learning", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2016 }, { "authors": [ "Farzan Farnia", "Jesse Zhang", "David Tse" ], "title": "Generalizable adversarial training via spectral normalization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "G. Fasshauer", "M. McCourt" ], "title": "Stable evaluation of Gaussian radial basis function interpolants", "venue": "SIAM Journal on Scientific Computing,", "year": 2012 }, { "authors": [ "Rui Gao", "Anton J Kleywegt" ], "title": "Distributionally robust stochastic optimization with Wasserstein distance", "venue": null, "year": 2016 }, { "authors": [ "Emmanuel Giner" ], "title": "Necessary and sufficient conditions for the interchange between infimum and the symbol of integration", "venue": "Set-Valued and Variational Analysis,", "year": 2009 }, { "authors": [ "Joel Goh", "Melvyn Sim" ], "title": "Distributionally robust optimization and its tractable approximations", "venue": "Operations Research,", "year": 2010 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": null, "year": 2015 }, { "authors": [ "Henry Gouk", "Eibe Frank", "Bernhard Pfahringer", "Michael Cree" ], "title": "Regularisation of neural networks by enforcing Lipschitz continuity", "venue": null, "year": 2018 }, { "authors": [ "J.-B. Hiriart-Urruty" ], "title": "A general formula on the conjugate of the difference of functions", "venue": "Canadian Mathematical Bulletin,", "year": 1986 }, { "authors": [ "Jean-Baptiste Hiriart-Urruty" ], "title": "From convex optimization to nonconvex optimization. necessary and sufficient conditions for global optimality", "venue": null, "year": 1989 }, { "authors": [ "Jean-Baptiste Hiriart-Urruty", "Claude Lemaréchal" ], "title": "Convex Analysis and Minimization", "venue": "Algorithms II. 
Springer-Verlag,", "year": 2010 }, { "authors": [ "Zhaolin Hu", "Jeff Liu Hong" ], "title": "Kullback-Leibler divergence constrained distributionally robust optimization, 2016", "venue": null, "year": 2016 }, { "authors": [ "Todd Huster", "Cho-Yu Jason Chiang", "Ritu Chadha" ], "title": "Limitations of the Lipschitz constant as a defense against adversarial examples", "venue": null, "year": 2018 }, { "authors": [ "L. Kantorovitch" ], "title": "On the translocation of masses", "venue": "Management Science,", "year": 1958 }, { "authors": [ "Thomas Kerdreux", "Igor Colin", "Alexandre d" ], "title": "Aspremont. An approximate Shapley-Folkman theorem. 2019", "venue": "URL http://arxiv.org/abs/1712.08559", "year": 2019 }, { "authors": [ "H. König" ], "title": "Eigenvalue Distribution of Compact Operators", "venue": "Birkhäuser, Basel,", "year": 1986 }, { "authors": [ "John Lafferty", "Guy Lebanon" ], "title": "Diffusion kernels on statistical manifolds", "venue": "Journal of Machine Learning Research, 6:129–163,", "year": 2005 }, { "authors": [ "C. Lemaréchal", "A. Renaud" ], "title": "A geometric study of duality gaps, with applications", "venue": "Mathematical Programming,", "year": 2001 }, { "authors": [ "Shao-Bo Lin", "Xin Guo", "Ding-Xuan Zhou" ], "title": "Distributed learning with regularized least squares", "venue": "Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "D.J.C. MacKay" ], "title": "Introduction to Gaussian processes", "venue": "Neural Networks and Machine Learning,", "year": 1998 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "E.J. McShane" ], "title": "Extension of range of functions", "venue": "Bull. Amer. Math. Soc.,", "year": 1934 }, { "authors": [ "Charles A. Micchelli", "Yuesheng Xu", "Haizhang Zhang" ], "title": "Universal kernels", "venue": "Journal of Machine Learning Research,", "year": 2006 }, { "authors": [ "Ha Quang Minh" ], "title": "Some properties of Gaussian reproducing kernel Hilbert spaces and their implications for function approximation and learning theory", "venue": "Constructive Approximation,", "year": 2010 }, { "authors": [ "Ha Quang Minh", "Partha Niyogi", "Yuan Yao" ], "title": "Mercer’s theorem, feature maps, and smoothing", "venue": "Conference on Computational Learning Theory (COLT),", "year": 2006 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Jean-Paul Penot" ], "title": "On the minimization of difference functions", "venue": "Journal of Global Optimization,", "year": 1998 }, { "authors": [ "Aldo Pratelli" ], "title": "On the equality between Monge’s infimum and Kantorovich’s minimum in optimal mass transportation", "venue": "Annales de l’Institut Henri Poincare (B) Probability and Statistics,", "year": 2007 }, { "authors": [ "Aditi Raghunathan", "Jacob Steinhardt", "Percy Liang" ], "title": "Certified defenses against adversarial examples", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "C.E. Rasmussen", "C.K.I. 
Williams" ], "title": "Gaussian Processes for Machine Learning", "venue": null, "year": 2006 }, { "authors": [ "Kevin Scaman", "Aladin Virmaux" ], "title": "Lipschitz regularity of deep neural networks: analysis and efficient estimation", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2018 }, { "authors": [ "Hans Schneider" ], "title": "An inequality for latent roots applied to determinants with dominant principal diagonal", "venue": "Journal of the London Mathematical Society,", "year": 1953 }, { "authors": [ "Soroosh Shafieezadeh-Abadeh", "Daniel Kuhn" ], "title": "Distributionally robust logistic regression", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2015 }, { "authors": [ "Soroosh Shafieezadeh-Abadeh", "Daniel Kuhn", "Peyman Mohajerin Esfahani" ], "title": "Regularization via mass transportation", "venue": null, "year": 2017 }, { "authors": [ "Uri Shaham", "Yutaro Yamada", "Sahand Negahban" ], "title": "Understanding adversarial training: Increasing local stability of supervised models through robust optimization", "venue": null, "year": 2018 }, { "authors": [ "S. Shalev-Shwartz", "O. Shamir", "K. Sridharan" ], "title": "Learning kernel-based halfspaces with the 0-1 loss", "venue": "SIAM Journal on Computing,", "year": 2011 }, { "authors": [ "Z.-C. Shi", "B.-Y. Wang" ], "title": "Bounds for the determinant, characteristic roots and condition number of certain types of matrices", "venue": "Acta Math. Sinica,", "year": 1965 }, { "authors": [ "Aman Sinha", "Hongseok Namkoong", "John Duchi" ], "title": "Certifying some distributional robustness with principled adversarial training", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Ingo Steinwart", "Andreas Christmann" ], "title": "Support vector machines", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Arun Sai Suggala", "Adarsh Prasad", "Vaishnavh Nagarajan", "Pradeep Ravikumar" ], "title": "Revisiting adversarial risk", "venue": "In International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2019 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "John F. Toland" ], "title": "A duality principle for non-convex optimisation and the calculus of variations", "venue": "Archive for Rational Mechanics and Analysis,", "year": 1979 }, { "authors": [ "Joel A. 
Tropp" ], "title": "An introduction to matrix concentration inequalities", "venue": "Foundations and Trends in Machine Learning,", "year": 2015 }, { "authors": [ "Yusuke Tsuzuku", "Issei Sato", "Masashi Sugiyama" ], "title": "Lipschitz-margin training: Scalable certification of perturbation invariance for deep neural networks", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2018 }, { "authors": [ "Madeleine Udell", "Stephen Boyd" ], "title": "Bounding duality gap for separable problems with linear constraints", "venue": "Computational Optimization and Applications,", "year": 2016 }, { "authors": [ "Cédric Villani" ], "title": "Optimal transport: old and new, volume 338", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Tsui-Wei Weng", "Huan Zhang", "Hongge Chen", "Zhao Song", "Cho-Jui Hsieh", "Luca Daniel", "Duane S. Boning", "Inderjit S. Dhillon" ], "title": "Towards fast computation of certified robustness for relu networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Hassler Whitney" ], "title": "Analytic extensions of differentiable functions defined in closed sets", "venue": "Transactions of the American Mathematical Society,", "year": 1934 }, { "authors": [ "Wolfram Wiesemann", "Daniel Kuhn", "Melvyn Sim" ], "title": "Distributionally robust convex optimization", "venue": "Operations Research,", "year": 2014 }, { "authors": [ "C.K.I. Williams", "M. Seeger" ], "title": "Using the Nyström method to speed up kernel machines", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2000 }, { "authors": [ "R.C. Williamson", "A.J. Smola", "B. Schölkopf" ], "title": "Generalization bounds for regularization networks and support vector machines via entropy numbers of compact operators", "venue": "IEEE Transactions on Information Theory,", "year": 2001 }, { "authors": [ "Eric Wong", "J Zico Kolter" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Eric Wong", "Frank Schmidt", "Jan Hendrik Metzen", "J. 
Zico Kolter" ], "title": "Scaling provable adversarial defenses", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2018 }, { "authors": [ "Huan Xu", "Constantine Caramanis", "Shie Mannor" ], "title": "Robust regression and lasso", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2009 }, { "authors": [ "Huan Xu", "Constantine Caramanis", "Shie Mannor" ], "title": "Robustness and regularization of support vector machines", "venue": "Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "Yuichi Yoshida", "Takeru Miyato" ], "title": "Spectral norm regularization for improving the generalizability of deep learning", "venue": null, "year": 2017 }, { "authors": [ "Yuchen Zhang", "Jason D Lee", "Michael I Jordan" ], "title": "`1-regularized neural networks are improperly learnable in polynomial time", "venue": "In International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Yuchen Zhang", "Percy Liang", "Martin Wainwright" ], "title": "Convexified convolutional neural networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Chaoyue Zhao", "Yongpei Guan" ], "title": "Data-driven risk-averse stochastic optimization with Wasserstein metric", "venue": "Operations Research Letters,", "year": 2018 }, { "authors": [ "Ding-Xuan Zhou" ], "title": "The covering number in learning theory", "venue": "Journal of Complexity,", "year": 2002 }, { "authors": [ "H Zhu", "C.K.I Williams", "R. J Rohwer", "M Morciniec" ], "title": "Gaussian regression and optimal finite dimensional linear models", "venue": "Neural Networks and Machine Learning. Springer-Verlag,", "year": 1998 }, { "authors": [ "Constantin Zălinescu" ], "title": "Convex Analysis in General Vector Spaces", "venue": "World Scientific,", "year": 2002 }, { "authors": [ "Williamson" ], "title": "Consider κ(x) = exp(−x2/(2σ2)) when x ∈ [−v/2, v/2], and then extend κ to R as a periodic function with period v. Again let μ be the uniform distribution on [−v/2, v/2", "venue": null, "year": 2001 }, { "authors": [ "Lin" ], "title": "later corrected by Zhou (2002) and Steve Smale. Indeed, uniform boundedness is not known even for Gaussian kernels with uniform distribution on", "venue": "König", "year": 1986 } ]
[ { "heading": "1 INTRODUCTION", "text": "Regularised risk minimisation has been the workhorse of learning nonlinear hypotheses such as deep neural networks and kernel machines. Recently, distributional robust risk (DRR) minimization has emerged as a promising instance with marked efficacy and flexibility. Instead of perturbing the observed data points, DRRs consider perturbations to the empirical distribution, constituting an ambiguity set P that lives in the space of data distributions. Let Ω be an outcome space with (true) distribution µ, e.g., the joint space of input and output. Given a loss function `, a model f suffers a loss value `f (ω) over an outcome ω, and the risk of f under µ is risk`(f, µ) := Eµ[`f ]. In DRR minimisation, a model f is sought that minimises the expectation of loss ` over an ambiguity set P , i.e., that minimises supν∈P risk`(f, ν) (Delage & Ye, 2010; Goh & Sim, 2010; Wiesemann et al., 2014). The ambiguity sets can be constructed by moment matching (Bhattacharyya et al., 2005; Farnia & Tse, 2016), divergence balls (Ben-Tal et al., 2013; Duchi et al., 2016; Hu & Hong, 2016), or Wasserstein distance balls (Kantorovitch, 1958). In this work we focus on the last due to its favorable properties in statistics and computation, along with extensive applications in DRR (Esfahani & Kuhn, 2018; Gao & Kleywegt, 2016; Zhao & Guan, 2018).\nDespite the generality of DRR, its computational efficiency remains a challenge, since the supremum is over a (typically) uncountably infinite dimension space. Tractable equivalent convex programs can be derived only for a limited range of loss functions along with linear hypothesis spaces (Blanchet et al., 2016; El Ghaoui & Lebret, 1997; Shafieezadeh-Abadeh et al., 2015; 2017; Xu et al., 2009a;b). Although Shafieezadeh-Abadeh et al. (2017) developed lifted variants for reproducing kernel Hilbert spaces (RKHS) to accommodate nonlinear hypotheses, the perturbation was applied to Φ(ω), where Φ is the implicit feature map. This still falls short of robustness with respect to distributions over Ω.\nA more promising technique for optimizing DRRs over nonlinear hypothesis spaces—including deep neural networks and kernel machines—is by dualising it to a form that is amenable to (approximate) optimisation (Esfahani & Kuhn, 2018). The fundamental strong duality result was established independently by Blanchet & Murthy (2019) and Gao & Kleywegt (2016), and has been applied to various tasks such as specification of regularisation parameter (Blanchet et al., 2016), design of transport cost (Blanchet et al., 2017), and selection of ambiguity region size for optimal confidence interval (Duchi et al., 2016). In particular, Sinha et al. (2018) used it to construct an efficiently computable certificate on the level of robustness for the worst-case population loss. However, these methods are still subject to marked restrictions when applied to smooth nonlinear models. First, Sinha et al. (2018) restricts the perturbation to be small, which despite the common interest of\nimperceptible perturbations, leaves unaddressed the equally interesting regimes of medium to large perturbations; see discussions in Openreview (2018). Moreover, although the global solvability of the inner Lagrangian penalty problem (robust surrogate loss) can be ensured by small enough perturbation, there is no practical procedure to compute the threshold. 
Finally, the overall optimisation of the nonlinear model is still subject to nonconvexity, precluding tractable global solutions for restricted but still general classes of nonlinear models such as kernel methods.

The first goal of this work, therefore, is to develop a novel certificate on distributional robustness that dispenses with these restrictions (§3). Specifically, we will leverage the McShane-Whitney extension theorem (McShane, 1934; Whitney, 1934) to upper bound DRRs by the empirical risk regularized with the Lipschitz constant of the model $f$, while additionally accounting for the underlying transport cost and the loss $\ell$. The result vastly generalises the vector norm regularisation in linear binary classification (Shafieezadeh-Abadeh et al., 2017, Thm. 3.11) to nonlinear models and extended real-valued cost functions that encode constraints, along with an arbitrary metric space of labels that is general enough for multiclass problems. Applicable to any magnitude of perturbation, it also enjoys improved computational efficiency compared with the robust surrogate loss in Sinha et al. (2018).

A particularly effective domain in which to apply this new certificate is adversarial learning (Szegedy et al., 2014), where models are trained to be resilient to malicious distortions of the data. Although Lipschitz regularisation has been a popular recipe for robustness (e.g., Anil et al., 2019; Cisse et al., 2017; Farnia et al., 2019; Gouk et al., 2018; Huster et al., 2018; Scaman & Virmaux, 2018) and generalisation accuracy (Miyato et al., 2018; Yoshida & Miyato, 2017), it remains a heuristic, and therefore our second major contribution is to reveal in §3.1 that adversarial risks (Goodfellow et al., 2015; Madry et al., 2018; Shaham et al., 2018) can be bounded by a DRR. Such a rigorous justification has hitherto been restricted to the logistic loss (Suggala et al., 2019, Thm. 9), and a similar tightness result has been established only for linear models (Shafieezadeh-Abadeh et al., 2017, Thm. 3.20). As a result, our new certificate amounts to a new bound on the worst-case risk under attacks, complementing the existing certificates (Raghunathan et al., 2018; Tsuzuku et al., 2018; Weng et al., 2018; Wong & Kolter, 2018; Wong et al., 2018) with a more computationally efficient approach. It further achieves state-of-the-art accuracy under a range of attacks on standard benchmark datasets (§5).

In practice, however, evaluating the Lipschitz constant $L$ is NP-hard for neural networks (Scaman & Virmaux, 2018), compelling approximations of it, or explicit engineering of layers to respect a Lipschitz bound while analyzing the expressiveness in specific cases (e.g., the $\ell_\infty$ norm in Anil et al. (2019)). We, instead, pursue a new path and explore the following question: does there exist a hypothesis space which a) is expressive enough in modeling; b) allows the exact value of $L$ to be computed efficiently; and c) makes enforcing the Lipschitz constant a convex constraint that admits efficient optimisation?

Interestingly, kernel machines satisfy all these requirements for some kernels. For example, Gaussian kernels are universal, and their RKHS can approximate any continuous function on a compact set in a uniform sense (Micchelli et al., 2006). The RKHS of multi-layer inverse kernels compactly encompasses $\ell_1$-regularized neural networks (Shalev-Shwartz et al., 2011), degrading the generalisation performance by only a polynomial constant (Zhang et al., 2016; 2017).
Similar results have been conjectured for Gaussian kernels (Shalev-Shwartz et al., 2011). Our third contribution proves that b) can be achieved for product kernels such as Gaussian kernels with high probability by using the Nyström approximation (Drineas & Mahoney, 2005; Williams & Seeger, 2000), and that an $\varepsilon$ approximation error of $L$ requires only $O(1/\varepsilon^2)$ samples (§4). Empirically, this approximation is also effective for non-product kernels like inverse kernels. Such a sampling-based approach also leads to a single convex constraint, making it scalable to 60k examples even with an interior-point solver (§5). The convenience of evaluating $L$ renders our certificate of DRR even more favorable than those based on robust surrogate losses (Blanchet & Murthy, 2019; Gao & Kleywegt, 2016; Sinha et al., 2018)." }, { "heading": "2 PRELIMINARIES", "text": "The vast majority of our technical results and proofs are deferred to Appendix A, for which theorem-like statements are numbered to be consistent. The extended real line is $\bar{\mathbb{R}} := [-\infty, +\infty]$, $\bar{\mathbb{R}}_{\geq 0} := [0, \infty]$, and $[n] := \{1, 2, \ldots, n\}$. For topological spaces $X, Y$, the Borel subsets are $\mathcal{B}(X)$, and the Borel probability measures are $\mathcal{P}(X)$. The universal sigma algebra is $\mathcal{U}(X) := \bigcap_{\mu \in \mathcal{P}(X)} \mathcal{B}_\mu(X)$, where $\mathcal{B}_\mu(X)$ is the completion of the Borel sets with respect to $\mu \in \mathcal{P}(X)$. Let $L^0(X, Y)$ denote the Borel measurable mappings $X \to Y$, and $L^1(X, \mu)$ denote the Borel functions $f \in L^0(X, \mathbb{R})$ with $\int |f|\, d\mu < \infty$ for $\mu \in \mathcal{P}(X)$.

For two measures $\mu, \nu \in \mathcal{P}(\Omega)$ the set of $(\mu, \nu)$-couplings is $\Pi(\mu, \nu) := \{\pi \in \mathcal{P}(\Omega \times \Omega) \mid \mu = \int \pi(\cdot, d\omega),\ \nu = \int \pi(d\omega, \cdot)\}$.

Let $c \in L^0(\Omega \times \Omega, \bar{\mathbb{R}})$. The $c$-transportation cost of $\mu, \nu \in \mathcal{P}(\Omega)$, and the $c$-transportation cost ball of radius $r \geq 0$ centred at $\mu \in \mathcal{P}(\Omega)$, are respectively

$$\mathrm{cost}_c(\mu, \nu) := \inf\Big\{ \int c\, d\pi : \pi \in \Pi(\mu, \nu) \Big\} \quad \text{and} \quad B_c(\mu, r) := \{\nu \in \mathcal{P}(\Omega) \mid \mathrm{cost}_c(\mu, \nu) \leq r\}. \tag{1}$$

A function $f : \Omega \to \bar{\mathbb{R}}$ is $c$-Lipschitz if there exists $L \geq 0$ such that

$$\forall \omega_1, \omega_2 \in \mathrm{dom}\, f : \quad |f(\omega_1) - f(\omega_2)| \leq L\, c(\omega_1, \omega_2). \tag{2}$$

The least $c$-Lipschitz constant of $f$ (cf. Cranko et al., 2019) is the infimum over $L \geq 0$ satisfying (2), and is denoted by $\mathrm{lip}_c(f)$, so that when $(X, d)$ is a metric space $\mathrm{lip}_d(f)$ agrees with the usual Lipschitz notion. When $c : X \to \bar{\mathbb{R}}$ (e.g., when $c$ is a norm), we take $c(x, y) := c(x - y)$ for all $x, y \in X$ in (1) and (2)." }, { "heading": "3 CERTIFICATE FOR DISTRIBUTIONAL ROBUSTNESS", "text": "While an elegant concept, the DRR suffers from a lack of tractability. That is, in order to effectively minimise it, we first need to be able to compute or estimate it. When the loss function is convex with respect to the input space this is straightforward, but in general approximations are necessary. Our first contribution is an upper bound for it.

For a function $f : X \to \bar{\mathbb{R}}$ there is another function $\mathrm{co}\, f : X \to \bar{\mathbb{R}}$, called the convex envelope of $f$. It is the greatest closed convex function that minorises $f$. The quantity $\rho(f) := \sup_{x \in X}(f(x) - \mathrm{co}\, f(x))$ was first suggested by Aubin & Ekeland (1976) to quantify the lack of convexity of a function, and has since been shown to be of considerable interest for a variety of nonconvex applications (Askari et al., 2019; Kerdreux et al., 2019; Lemaréchal & Renaud, 2001; Udell & Boyd, 2016). When $X = \mathbb{R}^n$ there are well-known ways to compute both $\mathrm{co}\, f$ and $\rho(f)$, and a brief discussion of these appears in the appendix (Remark 2 on p. 18). Note that $\rho(f) = 0$ when $f$ is closed convex.

Theorem 1. Assume $X$ is a separable Fréchet space and fix $\mu \in \mathcal{P}(X)$. Assume $c : X \to \bar{\mathbb{R}}_{\geq 0}$ is sublinear and continuous, and $f \in L^1(X, \mu)$ is upper semicontinuous.
Then for all $r \geq 0$,

$$\mathrm{DRR} := \sup_{\nu \in B_c(\mu, r)} \mathrm{risk}_\ell(f, \nu) \leq r\, \mathrm{lip}_c(\ell_f) + \mathrm{risk}_\ell(f, \mu). \tag{3}$$

The tightness of the bound can be quantified as follows. Let $\Delta(\mu) := r\, \mathrm{lip}_c(\ell_f) + \mathrm{risk}_\ell(f, \mu) - \sup_{\nu \in B_c(\mu, r)} \mathrm{risk}_\ell(f, \nu)$. If $\mathrm{lip}_c(f) < \infty$ then

$$\Delta(\mu) \leq r\Big( \mathrm{lip}_c(\ell_f) - \Big[ \mathrm{lip}_c(\mathrm{co}\, \ell_f) - \frac{1}{r} \int (\ell_f - \mathrm{co}\, \ell_f)\, d\mu \Big]_+ \Big), \tag{4}$$

where $[\,\cdot\,]_+ := \max\{\cdot, 0\}$ and $1/0 := \infty$, so that when $\ell_f$ is closed convex there is equality in (3).

Clearly (4) is tight for convex $\ell_f$. Furthermore, Proposition 1 shows that (4) is also tight for a large family of nonconvex functions and distributions, particularly the upper-semicontinuous loss functions on a compact set $X_0 \subseteq X$, with the collection of probability distributions supported on $X_0$.

Proposition 1. Assume $X$ is a separable Fréchet space with $X_0 \subseteq X$. Assume $c : X \to \bar{\mathbb{R}}_{\geq 0}$ is sublinear and continuous, and $\ell_f \in \bigcap_{\mu \in \mathcal{P}(X_0)} L^1(X, \mu)$ is upper semicontinuous, has $\mathrm{lip}_c(\ell_f) < \infty$, and attains its maximum on $X_0$. Then for all $r \geq 0$, with $1/0 := \infty$,

$$\sup_{\mu \in \mathcal{P}(X_0)} \Delta(\mu) = r\Big( \mathrm{lip}_c(\ell_f) - \Big[ \mathrm{lip}_c(\mathrm{co}\, \ell_f) - \frac{1}{r} \rho(\ell_f) \Big]_+ \Big).$$

Theorem 1 subsumes many existing results (viz. Gao & Kleywegt, 2016, Cor. 2 (iv); Cisse et al., 2017, §3.2; Sinha et al., 2018; Shafieezadeh-Abadeh et al., 2017, Thm. 3.20) with a great deal more generality, applying to a very broad family of models, loss functions, and outcome spaces. It is the first time, to our knowledge, that the slackness in (3) has been characterised tightly.

The extension of Theorem 1 to robust classification in the absence of label noise is straightforward.

Corollary 1. Assume $X$ is a separable Fréchet space and $Y$ is a topological space. Fix $\mu \in \mathcal{P}(X \times Y)$. Assume $c : (X \times Y) \times (X \times Y) \to \bar{\mathbb{R}}$ satisfies $c(x, y, x', y') = c_X(x - x')$ whenever $y = y'$ and $c(x, y, x', y') = \infty$ whenever $y \neq y'$, where $c_X : X \to \bar{\mathbb{R}}$ is symmetric, sublinear, and continuous, and $f \in L^1(X \times Y, \mu)$ is upper semicontinuous. Then for all $r \geq 0$ there is (3). To see the tightness of the bound, if $\mathrm{lip}_c(\ell_f) < \infty$ there is (4), where the closed convex hull is interpreted as $\mathrm{co}(\ell_f)(x, y) := \mathrm{co}(\ell_f(\cdot, y))(x)$. If additionally $\ell_f(\cdot, y)$ is closed convex for all $y \in Y$, there is equality in (3)." }, { "heading": "3.1 DISTRIBUTIONAL ROBUSTNESS AS ADVERSARIAL ROBUSTNESS", "text": "We next show how Theorem 1 can be useful for adversarial learning. Let $X$ and $Y$ be topological spaces, fix $\mu \in \mathcal{P}(X \times Y)$ and let $d$ be a metric on $X$. The following objective has been proposed (viz. Goodfellow et al., 2015; Madry et al., 2018; Shaham et al., 2018) as a means of learning models that are robust to adversarial perturbations:

$$\text{adversarial risk} := \int \sup_{\tilde{x} \in B_d(x, r)} \ell_f(\tilde{x}, y)\, \mu(dx \times dy) = \int \sup_{\tilde{\omega} \in B_{\tilde{d}}(\omega, r)} \ell_f(\tilde{\omega})\, \mu(d\omega), \tag{5}$$

where in the equality we extend $d$ to a metric on $\Omega := X \times Y$ with $\tilde{d}((x, y), (x', y')) := d(x, x') + \infty \cdot \llbracket y \neq y' \rrbracket$. We refer to (5) as the adversarial risk.

Theorem 2. Assume $(X, c)$ is a separable Banach space. Fix $\mu \in \mathcal{P}(X)$ and let $R_\mu(r) := \{g \in L^0(X, \mathbb{R}_{\geq 0}) \mid \int g\, d\mu \leq r\}$. Then for $f \in L^0(\Omega, \bar{\mathbb{R}})$ and $r > 0$ there is

$$\text{variable-radius risk} := \sup_{g \in R_\mu(r)} \int \mu(d\omega) \sup_{\omega' \in B_c(\omega, g(\omega))} \ell_f(\omega') \leq \sup_{\nu \in B_c(\mu, r)} \mathrm{risk}_\ell(f, \nu) = \mathrm{DRR}. \tag{6}$$

The equality holds in (6) if $\mu$ is non-atomically concentrated on a compact subset of $X$, on which $f$ is continuous with the subspace topology.

We refer to the left-hand side (LHS) of (6) as the variable-radius risk. The variable-radius risk has appeared in various forms in similar results, usually formulated using empirical distributions, that is, an average of Dirac masses (viz.
Gao & Kleywegt, 2016; Shafieezadeh-Abadeh et al., 2017). Of course any finite set is compact, and so any empirical distribution satisfies the concentration assumption. Likewise, the subspace topology on a finite set is the discrete topology, which makes the continuity assumption trivial.

Both the adversarial risk and the variable-radius risk imply an uncertainty set over a collection of adversaries that may perturb the data. Figure 5 in the appendix (on p. 20) shows the practical difference between the kinds of adversaries in these uncertainty sets. Immediately there is a corollary for Theorem 2 similar to Corollary 1.

It is easy to see that the variable-radius risk upper bounds the adversarial risk (5), by observing that the constant function $g_r \equiv r$ is included in the supremum over $R_\mu(r)$ in (6). As a result,

$$\text{adversarial risk} \leq \text{variable-radius risk} \overset{(a)}{\leq} \mathrm{DRR} \overset{(b)}{\leq} \text{Lipschitz regularised risk (RHS of (3))}, \tag{7}$$

where (a) is by Theorem 2 and (b) is by Theorem 1.

In general, it is difficult to characterise the tightness of the upper bounds in Theorems 1 and 2, so we resorted to an empirical demonstration that the sum of all three gaps in (7) is relatively low. We randomly generated 100 Gaussian kernel classifiers $f = \sum_{i=1}^{100} \gamma_i k(x^i, \cdot)$, with $x^i$ sampled from the MNIST dataset and $\gamma_i$ sampled uniformly from $[-2, 2]$. The bandwidth was set to the median of pairwise distances. In Figure 1, the x-axis is the adversarial risk in (5), where the perturbation $\delta$ is bounded in an $\ell_p$ ball and computed by PGD. The y-axis is the Lipschitz regularised empirical risk. The scattered dots lie close to the diagonal, demonstrating that the above bounds are tight in practice." }, { "heading": "4 PROVABLE LIPSCHITZ REGULARISATION FOR KERNEL METHODS", "text": "Theorems 1 and 2 open up a new path to optimising the adversarial risk (5) by Lipschitz regularisation (RHS of (3)), where the upper bounding relationship is established through the DRR. In general, however, it is still hard to compute the Lipschitz constant of a nonlinear model. However, we will show that for some types of kernels, this can be done efficiently for functions in their RKHS. Thanks to the known connections between kernel methods and deep learning, this technique will also potentially benefit the latter. For example, $\ell_1$-regularized neural networks are compactly contained in the RKHS of multi-layer inverse kernels $k(x, y) = (2 - x^\top y)^{-1}$ with $\|x\|_2 \leq 1$ and $\|y\|_2 \leq 1$ (Zhang et al., 2016, Lemma 1 and Theorem 1) and (Shalev-Shwartz et al., 2011; Zhang et al., 2017), and even possibly of Gaussian kernels $k(x, y) = \exp(-\|x - y\|^2 / (2\sigma^2))$ (Shalev-Shwartz et al., 2011, §5).

[Figure 1: Empirical evaluation of the sum of the gaps from Theorems 1 and 2. The Lipschitz constants $\sup_{x \in X} \|\nabla f(x)\|_q$ (left: $p = 2$, right: $p = \infty$, $1/p + 1/q = 1$) were estimated by BFGS. Panels: (a) $\|\delta\|_2 \leq 3$; (b) $\|\delta\|_\infty \leq 0.3$.]

[Figure 2: Comparison of $\lambda_{\max}(G^\top G)$ and the RHS of (8) as upper bounds for the Lipschitz constant. Smaller values are tighter. 100 functions sampled in the same way as in Figure 1. Panels: (a) 5-layer inverse kernel; (b) Gaussian kernel ($\sigma = 3$).]

Let us consider a Mercer kernel $k$ on a convex domain $X \subseteq \mathbb{R}^d$, with the corresponding RKHS denoted $\mathcal{H}$. The standard kernel method seeks a discriminant function $f$ from $\mathcal{H}$ with the conventional form of finite kernel expansion $f(x) = \frac{1}{l} \sum_{a=1}^{l} \gamma_a k(x^a, x)$, such that the regularised empirical risk can be minimised with the standard (hinge) loss and RKHS norm.
We start with real-valued $f$ for univariate output such as binary classification, and later extend it to multiclass.

Our goal here is to additionally enforce, while retaining a convex optimisation in $\gamma := \{\gamma_a\}$, that the Lipschitz constant of $f$ falls below a prescribed threshold $L > 0$, which is equivalent to $\sup_{x \in X} \|\nabla f(x)\|_2 \leq L$ thanks to the convexity of $X$. A quick but primitive solution is to piggyback on the standard RKHS norm constraint $\|f\|_{\mathcal{H}} \leq C$, in view that it already induces an upper bound on $\|\nabla f(x)\|_2$, as shown in Example 3.23 of Shafieezadeh-Abadeh et al. (2017):

$$\sup_{x \in X} \|\nabla f(x)\|_2 \leq \|f\|_{\mathcal{H}} \sup_{z > 0} z^{-1} g(z), \quad \text{where } g(z) \geq \sup_{x, x' \in X : \|x - x'\|_2 = z} \|k(x, \cdot) - k(x', \cdot)\|_{\mathcal{H}}. \tag{8}$$

For Gaussian kernels, $g(z) = \max\{\sigma^{-1}, 1\}\, z$. For exponential and inverse kernels, $g(z) = z$ (Bietti & Mairal, 2019). Bietti et al. (2019) justified that the RKHS norm of a neural network may serve as a surrogate for Lipschitz regularisation. But the quality of such an approximation, i.e., the gap in (8), can be loose, as we will see later in Figure 2. Besides, $C$ and $L$ are independent parameters.

How can we tighten the approximation? A natural idea is to directly bound the gradient norm at $n$ random locations $\{w^s\}_{s=1}^{n}$ sampled i.i.d. from $X$. These are obviously convex constraints on $\gamma$. But how many samples are needed in order to ensure $\|\nabla f(x)\|_2 \leq L + \varepsilon$ for all $x \in X$? Unfortunately, as shown in Appendix A.1, $n$ may have to grow exponentially as $1/\varepsilon^d$ for a $d$-dimensional space. Therefore we seek a more efficient approach by first slightly relaxing $\|\nabla f(x)\|_2$. Let $g_j(x) := \partial_j f(x)$ be the partial derivative with respect to the $j$-th coordinate of $x$, and $\partial^{i,j} k(x, y)$ be the partial derivative with respect to $x_i$ and $y_j$ ($i$ or $j$ being 0 means no derivative). Assuming $\sup_{x \in X} k(x, x) = 1$ and $g_j \in \mathcal{H}$ (true for the various kernels considered in Assumptions 1 and 2 below), we get a new bound

$$\sup_{x \in X} \|\nabla f(x)\|_2^2 = \sup_{x \in X} \sum_{j=1}^{d} \langle g_j, k(x, \cdot) \rangle_{\mathcal{H}}^2 \leq \sup_{\varphi : \|\varphi\|_{\mathcal{H}} = 1} \sum_{j=1}^{d} \langle g_j, \varphi \rangle_{\mathcal{H}}^2 = \lambda_{\max}(G^\top G), \tag{9}$$

where $\lambda_{\max}$ evaluates the maximum eigenvalue, and $G := (g_1, \ldots, g_d)$. The "matrix" is only a notation because each column is a function in $\mathcal{H}$, and obviously the $(i, j)$-th entry of $G^\top G$ is $\langle g_i, g_j \rangle_{\mathcal{H}}$.

Interestingly, $\lambda_{\max}(G^\top G)$ delivers a significantly lower (i.e., tighter) value in approximating the Lipschitz constant $\sup_{x \in X} \|\nabla f(x)\|_2$, compared with $\|f\|_{\mathcal{H}} \max_{z > 0} \frac{g(z)}{z}$ from (8). Figure 2 compares these two approximants, where $\lambda_{\max}(G^\top G)$ was computed from (11) derived below, and the landmarks $\{w^s\}$ consisted of all training examples; drawing more samples led to little difference. Such a positive result motivated us to develop refined algorithms to address the only remaining obstacle to leveraging $\lambda_{\max}(G^\top G)$: the lack of an analytic form for computation. Interestingly, it is readily approximable in both theory and practice. Indeed, the role of $g_j$ can be played by $\tilde{g}_j \in \mathbb{R}^n$, its Nyström approximation (Drineas & Mahoney, 2005; Williams & Seeger, 2000):

$$\tilde{g}_j := K^{-1/2} (g_j(w^1), \ldots, g_j(w^n))^\top = (Z^\top Z)^{-1/2} Z^\top g_j \quad \big(\text{noting } g_j(w^1) = \langle g_j, k(w^1, \cdot) \rangle_{\mathcal{H}}\big), \tag{10}$$

where $K := [k(w^i, w^{i'})]_{i, i'}$, $Z := (k(w^1, \cdot), k(w^2, \cdot), \ldots, k(w^n, \cdot))$, and $\tilde{G} := (\tilde{g}_1, \ldots, \tilde{g}_d)$. So to ensure $\lambda_{\max}(G^\top G) \leq L^2 + \varepsilon$, intuitively we can resort to enforcing $\lambda_{\max}(\tilde{G}^\top \tilde{G}) \leq L^2$, which also retains the convexity of the constraint in $\gamma$. However, to guarantee $\varepsilon$ error, the number of samples $n$ required is generally exponential (Barron, 1994). Fortunately, we will next show that $n$ can be reduced to polynomial for quite a general class of kernels that possess a decomposed structure."
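To illustrate how the holistic estimate in Eq. (10) can be computed, below is a minimal NumPy sketch for a Gaussian-kernel machine $f(x) = \frac{1}{l}\sum_a \gamma_a k(x^a, x)$. All names are ours, and the jitter term is a standard numerical-stability assumption, not part of the original derivation.

```python
import numpy as np

def lipschitz_bound_nystrom(X, gamma, W, sigma):
    """Estimate lambda_max(G~^T G~) from Eq. (10) for a Gaussian kernel.

    X: (l, d) expansion points x^a; gamma: (l,) coefficients;
    W: (n, d) landmark points w^s; sigma: kernel bandwidth.
    """
    gamma = np.asarray(gamma, dtype=float)
    l = len(gamma)

    def gauss(A, B):  # k(a, b) = exp(-||a - b||^2 / (2 sigma^2))
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    # grad f(w^s) = (1/l) sum_a gamma_a k(x^a, w^s) (x^a - w^s) / sigma^2,
    # so row s of Gmat holds (g_1(w^s), ..., g_d(w^s)).
    Kxw = gauss(X, W)                                       # (l, n)
    Gmat = np.stack([
        (gamma[:, None] * Kxw[:, [s]] * (X - W[s])).sum(0) / (l * sigma ** 2)
        for s in range(len(W))
    ])                                                      # (n, d)

    # K^{-1/2} via eigen-decomposition (small jitter for stability).
    evals, evecs = np.linalg.eigh(gauss(W, W) + 1e-8 * np.eye(len(W)))
    K_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    G_tilde = K_inv_sqrt @ Gmat                             # columns are g~_j

    return np.linalg.eigvalsh(G_tilde.T @ G_tilde)[-1]      # approx. lambda_max(G^T G)
```

Since $\tilde{G}$ is linear in $\gamma$, enforcing $\lambda_{\max}(\tilde{G}^\top \tilde{G}) \leq L^2$ during training is a convex constraint in $\gamma$, as noted above.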
}, { "heading": "4.1 A COORDINATE-WISE NYSTRÖM APPROXIMATION FOR PRODUCT KERNELS", "text": "A number of kernels factor multiplicatively over the coordinates, such as periodic kernels (MacKay, 1998), Gaussian kernels, and Laplacian kernels. We will consider k(x, y) = ∏d j=1 k0(xj , yj) where X = Xd0 and k0 is a base kernel on X0. Let the RKHS of k0 be H0, and let µ0 be a finite Borel measure with supp[µ0] = X0. Periodic kernels have k0(xj , yj) = exp ( −sin ( π v (xj − yj) )2 /(2σ2) ) .\nThe key benefit of this decomposition is that the derivative ∂0,1k(x, y) can be written as ∂0,1k0(x1, y1) ∏d j=2 k0(xj , yj). Since k0(xj , yj) can be easily dealt with, approximation will be\nneeded only for ∂0,1k0(x1, y1). Applying this idea to g1 = 1l ∑l a=1 γa∂ 0,1k(xa, ·), we can derive\n‖g1‖2H = l −2 ∑l\na,b=1 γaγb\n〈 ∂0,1k0(x a 1 , ·), ∂0,1k0(xb1, ·) 〉 H0 ∏d j=2 k0(x a j , x b j), (11)\n〈g1, g2〉H = l −2 ∑l\na,b=1 γaγb∂\n0,1k0(x a 1 , x b 1)∂ 0,1k0(x b 2, x a 2) ∏d\nj=3 k0(x\na j , x b j).\nSo the off-diagonal entries ofG>G can be computed exactly. To approximate the diagonal, we sample {w11, . . . , wn1 } from µ0, set Z1 = (k0(w11, ·), . . . , k0(wn1 , ·)), and apply Nyström approximation:〈\n∂0,1k0(x a 1 , ·), ∂0,1k0(xb1, ·) 〉 H0 ≈ ∂0,1k0(xa1 , ·)>Z1 · (Z>1 Z1)−1 · Z>1 ∂0,1k0(xb1, ·) (12)\nwhere Z>1 ∂ 0,1k0(x a 1 , ·) = (∂0,1k0(xa1 , w11), . . . , ∂0,1k0(xa1 , wn1 ))>, (13)\nand analogously for Z>1 ∂ 0,1k0(x b 1, ·). We will denote this approximation of G>G as P̃G. Clearly, λmax(P̃G) ≤ L2 is a convex constraint on γ, based on i.i.d. samples {wsj : s ∈ [n], j ∈ [d]} from µ0. It is now important to analyse how many samples wsj are needed, such that\nλmax(P̃G) ≤ L2 =⇒ λmax(G>G) ≤ L2 + ε with high probability." }, { "heading": "4.2 GENERAL SAMPLE COMPLEXITY AND ASSUMPTIONS ON THE PRODUCT KERNEL", "text": "Fortunately, product kernels only require approximation bounds for each coordinate, making the sample complexity immune to the exponential growth in the dimensionality d. Specifically, we first consider base kernels k0 with a scalar input, i.e., X0 ⊆ R. Recall from Steinwart & Christmann (2008, Chapter 4) that the integral operator for k0 and µ0 is defined by\nTk0 = I ◦ S : L2(X0, µ0)→ L2(X0, µ0) where S : L2(X0, µ0)→ C(X0), (Sf)(x) = ∫ k0(x, y)f(y)dµ0(y), f ∈ L2(X0, µ0),\nand I: C(X0) ↪→ L2(X0;µ0) is the inclusion operator. By the spectral theorem, if Tk0 is compact, then there is an at most countable orthonormal set {ẽj}j∈J of L2(X0, µ0) and {λj}j∈J with λ1 ≥ λ2 ≥ . . . > 0 such that Tk0f = ∑ j∈J λj 〈f, ẽj〉L2(X0;µ0) ẽj for all f ∈ L2(X0, µ0). It is easy to\nsee that ϕj := √ λjej is an orthonormal basis ofH0 (Steinwart & Christmann, 2008).\nOur proof is built upon the following two assumptions on the base kernel. The first one asserts that fixing x, the energy of k0(x, ·) and ∂0,1k0(x, ·) “concentrates” on the leading eigenfunctions. Assumption 1. Suppose k0(x, x) = 1 and ∂0,1k0(x, ·) ∈ H0 for all x ∈ X0. For all ε > 0, there exists Nε ∈ N such that the tail energy of ∂0,1k0(x, ·) beyond the Nε-th eigenpair is less than ε, uniformly for all x ∈ X0. That is, denoting Φm := (ϕ1, . . . , ϕm),\nNε := inf m {∥∥∂0,1k0(x, ·)− ΦmΦ>m∂0,1k0(x, ·)∥∥H0 < ε for all x ∈ X0 and∥∥k0(x, ·)− ΦmΦ>mk0(x, ·)∥∥H0 < ε for all x ∈ X0} <∞.\nThe second assumption asserts the smoothness and range of eigenfunctions in a uniform sense.\nAssumption 2. 
Under Assumption 1, $\{e_j(x) : j \in [N_\varepsilon]\}$ is uniformly bounded over $x \in X_0$, and the RKHS inner product of $\partial^{0,1} k_0(x, \cdot)$ with $\{e_j : j \in [N_\varepsilon]\}$ is also uniformly bounded over $x \in X_0$:

$$M_\varepsilon := \sup_{x \in X_0} \max_{j \in [N_\varepsilon]} \big| \langle \partial^{0,1} k_0(x, \cdot), e_j \rangle_{\mathcal{H}_0} \big| < \infty, \quad \text{and} \quad Q_\varepsilon := \sup_{x \in X_0} \max_{j \in [N_\varepsilon]} |e_j(x)| < \infty.$$

Theorem 3. Suppose $k_0$, $X_0$, and $\mu_0$ satisfy Assumptions 1 and 2. Let $\{w_j^s : s \in [n], j \in [d]\}$ be sampled i.i.d. from $\mu_0$. Then for any $f$ whose coordinate-wise Nyström approximations (11) and (12) satisfy $\lambda_{\max}(\tilde{P}_G) \leq L^2$, the Lipschitz condition $\lambda_{\max}(G^\top G) \leq L^2 + \varepsilon$ is met with probability $1 - \delta$, as long as

$$n \geq \tilde{\Theta}\Big( \frac{1}{\varepsilon^2}\, N_\varepsilon^2 M_\varepsilon^2 Q_\varepsilon^2 \log \frac{d N_\varepsilon}{\delta} \Big),$$

which is almost independent of $d$. Here $\tilde{\Theta}$ hides all poly-log terms.

Satisfaction of assumptions. In Appendix A.4 and A.5, we show that for the periodic kernel and the Gaussian kernel, Assumptions 1 and 2 hold true with $\tilde{O}(1)$ values of $N_\varepsilon$, $M_\varepsilon$, and $Q_\varepsilon$. It remains open whether non-product kernels such as the inverse kernel also enjoy this polynomial sample complexity; Appendix A.6 suggests that the complexity is quasi-polynomial for inverse kernels." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "We studied the empirical robustness and accuracy of the proposed Lipschitz regularisation technique for adversarial training of kernel methods, under both the Gaussian kernel and the inverse kernel. Comparisons are made with state-of-the-art defense algorithms under effective attacks.

Datasets. We tested on three datasets: MNIST, Fashion-MNIST, and CIFAR10. The numbers of training/validation/test examples for the three datasets are 54k/6k/10k, 54k/6k/10k, and 45k/5k/10k, respectively. Each image in MNIST and Fashion-MNIST is represented as a 784-dimensional feature vector, with each feature/pixel normalised to $[0, 1]$. For CIFAR10, we trained a residual network to obtain a 512-dimensional feature embedding, which was subsequently normalised to $[0, 1]$. These features were used as the input for training all the competing algorithms and were subject to attack.

Attacks. To evaluate the robustness of the trained models, we attacked them on test examples using the randomly initialized Projected Gradient Descent method with 100 steps (PGD, Madry et al., 2018) under two losses: cross-entropy and the C&W loss (Carlini & Wagner, 2017). The perturbation $\delta$ was constrained in an $\ell_2$ or $\ell_\infty$ ball. To evaluate robustness, we scaled the perturbation bound $\delta$ from 0.1 to 0.6 for the $\ell_\infty$ norm, and from 1 to 6 for the $\ell_2$ norm (when $\delta = 6$, the average magnitude per coordinate is 0.214).

Algorithms. We compared four training algorithms. The Parseval network orthonormalises the weight matrices to enforce the Lipschitz constant (Cisse et al., 2017). We used three hidden layers of 1024 units and ReLU activation (Par-ReLU). Also considered is the Parseval network with MaxMin activations (Par-MaxMin), which enjoys much improved robustness (Anil et al., 2019). Both algorithms can be customised for $\ell_2$ or $\ell_\infty$ attacks, and were trained under the corresponding norms. Using the multi-class hinge loss, they constitute strong baselines for adversarial learning.

Both Gaussian and inverse kernel machines applied Lipschitz regularisation by randomly and greedily selecting $\{w^s\}$, and they will be referred to as Gauss-Lip and Inverse-Lip, respectively. In practice, Gauss-Lip with the coordinate-wise Nyström approximation ($\lambda_{\max}(\tilde{P}_G)$ from Eq. (12)) can approximate $\lambda_{\max}(G^\top G)$ with a much smaller number of samples than the holistic approximation in (10). Furthermore, we found an even more efficient approach.
Inside the iterative training algorithm, we used L-BFGS to find the input that yields the steepest gradient under the current solution, and then added it to the set {ws} (which was initialized with 15 random points). Although L-BFGS is only a local solver, this greedy approach empirically reduces the number of samples by an order of magnitude. See the empirical convergence results in Appendix A.9. Its theoretical analysis is left for future investigation. We also applied this greedy approach to Inverse-Lip.\nExtending binary kernel machines to multiclass. The standard kernel methods learn a discriminant function f c := ∑ a γ c ak(x\na, ·) for each class c ∈ [10], based on which a large supply of multiclass classification losses can be applied, e.g., CS (Crammer & Singer, 2001) which was used in our experiment. Since the Lipschitz constant of the mapping from {f c} to a real-valued loss is typically at most 1, it suffices to bound the Lipschitz constant of x 7→ (f1(x), . . . , f10(x))> by\naa max x λmax(G(x)G(x) >) ≤ max‖ϕ‖H=1 λmax\n(∑10\nc=1 G>c ϕϕ >Gc\n)\n≤ L2, (14)\nwhere G(x) := [∇f1(x), · · · ,∇f10(x)] = [G>1 k(x, ·), . . . , G>10k(x, ·)] with Gc := (gc1, . . . , gcd). The last term in (14) can be approximated using the same technique as in the binary case. Furthermore, the principle can be extended to `∞ attacks, whose details are relegated to Appendix A.10.\nParameter selection. We used the same parameters as in Anil et al. (2019) for training Par-ReLU and Par-MaxMin. To defend against `2 attacks, we set L = 100 for all algorithms. GaussLip achieved high accuracy and robustness on the validation set with bandwidth σ = 1.5 for FashionMNIST and CIFAR-10, and σ = 2 for MNIST. To defend against `∞ attacks, we set L = 1000 for all the four methods as in Anil et al. (2019). The best σ for Gauss-Lip is 1 for all datasets. Inverse-Lip used 5 stacked layers.\nResults. Figures 3 and 4 show how the test accuracy decays as an increasing amount of perturbation (δ) in `2 and `∞ norm is added to the test images, respectively. Clearly Gauss-Lip achieves higher accuracy and robustness than Par-ReLU and Par-MaxMin on the three datasets, under both `2 and `∞ bounded PGD attacks with C&W loss. In contrast, Inverse-Lip only performs similarly to Par-ReLU. Interestingly, we noticed that `2 based Par-MaxMin are only slightly better than Par-ReLU under `2 attacks, although the former does perform significantly better under `∞ attacks.\nFor the sake of space, the results for cross-entropy PGD attacks are deferred to Figures 8 and 9 in Appendix A.11. Here cross-entropy PGD attackers find stronger attacks to Parseval networks but not to our kernel models. Our Gauss-Lip again significantly outperforms Par-MaxMin on all the three datasets and under both `2 and `∞ norms. The improved robustness of Gauss-Lip does not seem to be attributed to the obfuscated gradient (Athalye et al., 2018), because as shown Figures 3, 4, 8, 9, increased distortion bound does increase attack success, and unbounded attacks drive the success rate to very low. In practice, we also observed that random sampling finds much weaker attacks, and taking 10 steps of PGD is much stronger than just one step.\nVisualization. The gradient with respect to inputs is plotted in Figure 10 (in the appendix on p. 31) for `2 trained Par-MaxMin and Gauss-Lip. The i-th row and j-th column corresponds to the targeted attack of turning the original class j into a new class i, hence the gradient is on the cross-entropy loss with class i as the ground truth. 
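For reference, the gradient maps shown in Figure 10 can be produced with standard automatic differentiation. The following minimal sketch (Python/PyTorch; the names model, x0, and target are placeholders for illustration, not names from any released code) computes the input gradient of the targeted cross-entropy loss with class i as the ground truth:

import torch
import torch.nn.functional as F

def input_gradient(model, x0, target):
    # Gradient of the cross-entropy loss (with the target class treated as
    # the ground truth) with respect to the input pixels, as in Figure 10.
    x = x0.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x.unsqueeze(0)), torch.tensor([target]))
    loss.backward()
    return x.grad

A gradient-based attacker then perturbs the input along the negative of this gradient to promote the target class.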
These two figures also explained why Gauss-Lip is more robust than Par-MaxMin: the attacker can easily reduce the targeted cross-entropy loss by following the gradient as shown in Figure 10a, and hence successfully attack Par-MaxMin. In contrast, the gradient shown in Figure 10b does not provide much information on how to flip the class.\nConclusion. In this paper, we derived a new certificate for distributional robust risk minimization by using Lipschitz regularization. Application to adversarial learning based on kernel methods exhibited superior robustness, with provably polynomial sample complexity for product kernels. We will apply this function space to GANs to witness the difference between probability distributions, leading to a more stable training scheme as the inner level optimization becomes convex." }, { "heading": "Appendix", "text": "The pseudo-code of training binary SVMs by enforcing Lipschitz constant is given in Algorithm 1.\nAlgorithm 1: Training binary SVMs by enforcing Lipschitz constant L 1 Initialise the constraint set S by some random samples from X . 2 for i = 1, 2, . . . do 3 Train SVM using one of the following constraints:\n1© Brute-force: ‖∇f(w)‖22 ≤ L2, ∀ w ∈ S\n2© Nyström holistic: λmax(G̃>G̃) ≤ L2 using S as the set {w1, . . . , wn} in Eq (10)\n3© Nyström coordinate wise: λmax(P̃G) ≤ L2 using S as the set {w1, . . . , wn} in Eq (12)\n4 Let the trained SVM be f (i). 5 Find a new w to add to S by one of the following methods:\na© Random: randomly sample w from X . b© Greedy: find argmaxx∈X ∥∥∇f (i)(x)∥∥ (local optimisation) by L-BFGS with 10 random initialisations. Add the distinct results upon convergence to S.\n6 Return if L(i) := maxx∈X ∥∥∇f (i)(x)∥∥ falls below L.\nFinding the exact argmaxx∈X ∥∥∇f (i)(x)∥∥ is intractable, so we used a local maximum found by L-BFGS with 10 random initialisations as the Lipschitz constant of the current solution f (i) (L(i) in step 6). The solution found by L-BFGS is also used as the new greedy point added in step 5b.\nFurthermore, the kernel expansion f(x) = 1l ∑l a=1 γak(x\na, ·) can lead to high cost in optimisation (our experiment used l = 54000), and therefore we used another Nyström approximation for the kernels. We randomly sampled 1000 landmark points, and based on them we computed the Nyström approximation for each k(xa, ·), denoted as ϕ̃(xa) ∈ R1000. Then f(x) can be written as 1 l ∑l a=1 γaϕ̃(x a)>ϕ̃(x). Defining w = 1l ∑l a=1 γaϕ̃(x\na), we can equivalently optimise over w, and the RKHS norm bound on f can be equivalently imposed as the `2-norm bound on w.\nTo summarise, Nyström approximation is used in two different places: one for approximating the kernel function, and one for computing ‖gj‖H either holistically or coordinate wise. For the former, we randomly sampled 1000 landmark points; for the latter, we used greedy selection as option b in step 5 of Algorithm 1.\nDetailed algorithm for multiclass classification. It is easy to extend Algorithm 1 to multiclass. For example, with MNIST dataset, we solve the following optimisation problem to defend `2 attacks:\nmin γ1,...,γ10 n∑ i=1 `(F (x),y), where F = [ n∑ i=1 γ1i k(xi, ·); . . . ; n∑ i=1 α10i k(xi, ·) ]\ns.t. sup ‖ϕ‖H≤1 λmax ( 10∑ c=1 G>c ϕϕ >Gc ) ≈ sup ‖v‖2≤1 λmax ( 10∑ c=1 G̃>c vv >G̃c ) ≤ L2,\nwhere `(F (x),y) is the Crammer & Singer loss, and the constraint is derived from (14) by using its Nyström approximation G̃c = [g̃c1, . . . , g̃ c d], which depends on {γ1, . . . ,γ10} linearly. 
Note that the constraint itself is a supremum problem:\nsup ‖v‖2≤1 λmax ( 10∑ c=1 G̃>c vv >G̃c ) = sup ‖v‖2≤1,‖u‖2≤1 u> ( 10∑ c=1 G̃>c vv >G̃c ) u.\nSince there is only one constraint, interior point algorithm is efficient. It requires the gradient of the constraint, which can be computed by Danskin’s theorem. In particular, we alternates between\nupdating v and u, until they converge to the optimal v∗ and u∗. Finally, the derivative of the constraint with respect to {γc} can be calculated from ∑10 c=1(u > ∗ G̃ > c v∗)\n2, as a function of {γc}. To defend `∞ attacks, we need to enforce the `∞ norm of the Jacobian matrix:\nsup x∈X ∥∥∥[g1(x), . . . , g10(x)]>∥∥∥ ∞ = sup x∈X max 1≤c≤10 ‖gc(x)‖1\n= max 1≤c≤10 sup x∈X ‖gc(x)‖1\n≤ max 1≤c≤10 sup ‖ϕ‖2≤1,‖u‖∞≤1 u>G̃>c ϕ,\nwhere the last inequality is due to\nsup x∈X ‖g(x)‖1 = sup x∈X sup ‖u‖∞≤1 u>g(x) ≤ sup ‖v‖2≤1,‖u‖∞≤1 u>G̃>v.\nTherefore, the overall optimisation problem to defense `∞ attacks is\nmin γ1,...,γ10 n∑ i=1 `(F (x),y), where F = [ n∑ i=1 γ1i k(xi, ·); . . . ; n∑ i=1 γ10i k(xi, ·) ] s.t. sup ‖v‖2≤1,‖u‖∞≤1 u>G̃>c v ≤ L, ∀c ∈ {1, . . . , 10} (15)\nFor each c, we alternatively update v and u in (15), converging to the optimal v∗ and u∗. Finally, the derivative of sup‖v‖2≤1,‖u‖∞≤1 u >G̃>c v with respect to γ c can be calculated from u>∗ G̃ > c v∗, as a function of γc." }, { "heading": "A PROOFS OF RESULTS", "text": "The following appendix contains the complete set of proofs and auxiliary results." }, { "heading": "PROOFS FOR §3: CERTIFICATE FOR DISTRIBUTIONAL ROBUSTNESS", "text": "Duality results like Lemma 1 have been the basis of a number of recent theoretical efforts in the theory of adversarial learning (Blanchet et al., 2016; Gao & Kleywegt, 2016; Shafieezadeh-Abadeh et al., 2017; Sinha et al., 2018), the results of Blanchet & Murthy (2019) being the most general to date. Lemma 1 (Blanchet & Murthy (2019, Thm. 1)). Assume Ω is a Polish space and fix µ ∈ P(Ω). Let c : Ω ×Ω → R̄≥0 be lower semicontinuous with c(ω, ω) = 0 for all ω ∈ Ω, and f ∈ L1(Ω,µ) is upper semicontinuous. Then for all r ≥ 0 there is\nsup ν∈Bc(µ,r)\n∫ f dν = inf\nλ≥0\n( λr + ∫ fλc dµ ) . (16)\nThe necessity for such duality results like Lemma 1 is because while the supremum on the left hand side of (16) is over a (usually) infinite dimensional space, the right hand side only involves only a finite dimensional optimisation. The generalised conjugate in (16) also hides an optimisation, but when the outcome space Ω is finite dimensional, this too is a finite dimensional problem.\nThe following is sometimes stated a consequence of or in the proof of the McShane–Whitney extension theorem, but it is immediate to observe. Lemma 2 (McShane–Whitney). Let X be a set. Assume c : X ×X → R̄≥0 satisfies c(x, x) = 0 for all x ∈ X , and f : X → R. Then\n∀x, y ∈ X : f(x)− f(y) ≤ λc(x, y) =⇒ ∀y ∈ X : f(y) = sup x∈X\n( f(x)− λc(x, y) ) .\nLemma 3. Assume X is a locally convex Hausdorff topological vector space. Let c : X → R̄ be lower semicontinuous, sublinear, and continuous at 0, let f : X → R̄ be closed convex. Then for λ > 0 there is\n∀y ∈ X : sup x∈X\n( f(x)− λc(x− y) ) = { f(y) ∂f(X) ⊆ ∂λc(0) ∞ ∂f(X) 6⊆ ∂λc(0).\nProof. Because f is closed convex, it is equal to its biconjugate (Zălinescu, 2002, Thm. 2.3.3), because c is sublinear and lower semicontinuous λc(x) = supx∗∈∂λc(0) 〈x, x∗〉 for all x ∈ X (Zălinescu, 2002, Thm. 2.4.14 (iv)). 
It follows that\nsup x∈X\n( f(x)− λc(x− y) ) = sup x∈X sup x∗∈∂f(X) inf g∗∈∂λc(0) ( 〈x, x∗〉 − f∗(x∗)− 〈g∗, x− y〉 ) = sup x∈X sup x∗∈∂f(X) inf g∗∈∂λc(0) ( 〈x, x∗ − g∗〉+ 〈y, g∗〉 − f∗(x∗) ) .\nBecause c is continuous at 0, ∂c(0) is weak∗-compact and convex (Zălinescu, 2002, Thm. 2.4.9), and so we can apply a minimax theorem (Zălinescu, 2002, Thm. 2.10.2) to produce\nsup x∈X sup x∗∈∂f(X) inf g∗∈∂λc(0)\n( 〈x, x∗ − g∗〉+ 〈y, g∗〉 − f∗(x∗) ) = sup x∗∈∂f(X) inf g∗∈∂λc(0) sup x∈X ( 〈x, x∗ − g∗〉+ 〈y, g∗〉 − f∗(x∗)\n) = sup x∗∈∂f(X) inf g∗∈∂λc(0) { 〈y, x∗〉 − f∗(x∗) g∗ = x∗ ∞ g∗ 6= x∗\n= sup x∗∈∂f(X) { 〈y, x∗〉 − f∗(x∗) x∗ ∈ ∂λc(0) ∞ x∗ /∈ ∂λc(0)\n= { f(y) ∂f(X) ⊆ ∂λc(0) ∞ ∂f(X) 6⊆ ∂λc(0),\nas claimed.\nRemark 1. The minimisation of g−h, where g and h are convex functions, is called difference convex (DC) programming (Hiriart-Urruty, 1989). The condition in Lemma 3 bears a striking resemblance to the common necessary condition (e.g. Hiriart-Urruty, 1989; Penot, 1998) for such problems\nx ∈ arginf x′∈X\nf(x′) =⇒ ∂h(x) ⊆ ∂g(x).\nLikewise there are similar sufficient conditions. The proof of Lemma 4 is also quite similar to the proofs of the Toland (1979) duality formula (viz. Hiriart-Urruty, 1986)\ninf x∈X (g(x)− h(x)) = inf x∗∈X∗\n(h∗(x∗)− g∗(x∗)),\nwhich suggests that the principles of Lemma 4 may be more general. A generalisation to a general convex function c satisfying c(0) = 0, would remove the positive homogeneity requirement of Lemma 4, and allow any translation invariant metric in place of c. The assumptions we have made are compatible with metrics which arise from norms, that is, the translation invariant and positively homogeneous metrics. Lemma 4. Assume X is a topological vector space. Let c : X → R̄≥0, and f : X → R. Then for λ > 0 there is\n∀x, y ∈ X : f(x)− f(y) ≤ λc(x− y) ⇐⇒ ∂f(X) ⊆ ∂λc(0).\nProof. Suppose λ > 0 is such that f(x) − f(y) ≤ λc(x − y) for all x, y ∈ X . Let x∗ ∈ ∂f(X). Then there is x ∈ X with\n∀y ∈ X : 〈y − x, x∗〉 ≤ f(y)− f(x) ≤ λc(y − x) =⇒ ∀y ∈ X : 〈y, x∗〉 ≤ f(y + x)− f(x) ≤ λc(y),\nthis shows x∗ ∈ ∂λc(0). Next assume λ > 0 satisfies ∂f(X) ⊆ ∂λc(0). Then ∀x ∈ X, ∃x∗ ∈ ∂f(x),∀y ∈ X : f(x)− f(y) ≤ 〈x− y, x∗〉 ≤ λc(x− y),\nwhere the second inequality is because x∗ ∈ ∂λc(0) for all x∗ ∈ ∂f(x).\nTheorem 1. Assume X is a separable Fréchet space and fix µ ∈ P(X). Assume c : X → R̄≥0 is sublinear and continuous, and f ∈ L1(X,µ) is upper semicontinuous. Then for all r ≥ 0,\nDRR := sup ν∈Bc(µ,r) risk`(f, ν) ≤ r lipc(`f ) + risk`(f, µ).\nThe tightness of the bound can be quantified as follows. Let ∆(µ) := r lipc(`f ) + risk`(f, µ) − supν∈Bc(µ,r) risk`(f, ν). If lipc(f) <∞ then\n∆(µ) ≤ r ( lipc(`f )− [ lipc(co `f )− 1\nr\n∫ (`f − co `f ) dµ ] + ) ,\nwhere [ · ]+ := max{ · , 0} and 1/0 :=∞, so that when `f is closed convex there is equality in (3).\nProof. Because c is assumed sublinear, it is positively homogeneous and there is c(x, x) = c(x−x) = c(0) = 0 for all x ∈ X . Therefore we can apply Lemma 1 and Lemma 2 to obtain\nsup ν∈Bc(µ,r)\n∫ `f dν = inf\nλ≥0\n[ rλ+ ∫ `λcf dµ ] ≤ inf λ≥lipc(`f ) [ rλ+ ∫ `λcf dµ\n] = r lipc(`f ) + ∫ `f dµ.\nObserving that co `f ≤ `f , applying Lemma 3 and Lemma 4 we find for all x ∈ X\nsup λ∈[0,∞)\n( `f (x)− `λcf (x)− rλ ) = sup λ∈[0,∞) ( `f (x)− sup y∈X ( `f (y)− λc(x− y) ) − rλ ) = sup λ∈[0,∞) inf y∈X ( `f (x)− `f (y) + λc(x− y)− rλ\n) ≤ sup λ∈[0,∞) inf y∈X ( `f (x)− co `f (y) + λc(x− y)− λr\n) = sup λ∈[0,∞) ( `f (x)− co `f (x)−∞ Jlipc(co `f ) > λK− λr\n) = `f (x)− co `f (x)− r lipc(co `f ). 
(1)\nSimilarly, for all x ∈ X there is\nsup λ∈[0,∞)\n( `f (x)− `λcf (x)− rλ ) ≤ sup λ∈[0,∞) ( `f (x)− `λcf (x) ) + sup λ∈[0,∞) −rλ\n= sup λ∈[0,∞)\n( `f (x)− `λcf (x) ) = sup λ∈[0,∞) inf y∈X ( `f (x)− `f (y) + λc(x− y)\n) ≤ inf y∈X sup λ∈[0,∞) ( `f (x)− `f (y) + λc(x− y)\n) = inf y∈X { ∞ c(x− y) > 0 0 c(x− y) = 0\n= 0. (2)\nThen, using (1) and (2) we find( r lipc(`f ) + ∫ `f dµ ) − inf λ∈[0,∞) ( rλ− ∫ `λcf dµ ) = r lipc(`f ) + sup\nλ∈[0,∞)\n∫ ( `f − `λcf − λr ) dµ\n≤ r lipc(`f ) + ∫\nsup λ∈[0,∞)\n( `f − `λcf − λr ) dµ\n(1),(2) ≤ r lipc(`f ) + min {∫ (`f − co `f ) dµ− r lipc(co `f ), 0 } .\nThe proof is complete.\nProposition 1. Assume X is a separable Fréchet space with X0 ⊆ X . Assume c : X → R̄≥0 is sublinear and continuous, and `f ∈ ⋂ µ∈P(X0) L1(X,µ) is upper semicontinuous, has lipc(`f ) <∞, and attains its maximum on X0. Then for all r ≥ 0 with 1/0 :=∞,\nsup µ∈P(X0)\n∆(µ) = r ( lipc(`f )− [ lipc(co `f )− 1\nr ρ(`f ) ] + ) .\nProof. Let x0 ∈ X0 be be the point at which `f (x0) = supx∈X0 `f (x). Then\n∆(δx0) = r lipc(`f ) + ∫ `f dδx0 − sup\nν∈Bc(δx0 ,r)\n∫ `f dδx0\n= r lipc(`f ) + ∫ `f dδx0 − ∫ `f dδx0\n= r lipc(`f ). (3)\nThen there is\nr lipc(`f ) (3) ≤ sup\nµ∈P(X0) ∆(µ)\n(4) ≤ r ( lipc(`f )−max { lipc(co `f )− 1\nr ρ(`f ), 0\n}) ≤ r lipc(`f ),\nwhich completes the proof.\nRemark 2. When f : Rn → R̄ satisfies f 6≡ ∞ and f is minorised by an affine function, there is (cf. Hiriart-Urruty & Lemaréchal, 2010, Prop. 1.5.4)\n∀x ∈ Rn : co f(x) = inf ∑ i∈[n+1] αif(xi) | ∑ i∈[n+1] αi = 1, x = ∑ i∈[n+1] αixi , where the infimum is over all sequences (αi)i∈[n+1] and (xi)i∈[n+1] ⊆ Rn satisfying the conditions above. Consequentially there is the common expression\nρ(f) = sup f( ∑ i∈[n+1] αixi ) − ∑ i∈[n+1] αif(xi) | (αi, xi)i∈[n+1] ⊆ R≥0 ×Rn, ∑ i∈[n+1] αi = 1 . In Lemma 5, by the weak∗ topology on P(Ω) we mean the coarsest topolgoy on P(Ω) that makes the bounded continuous functions on Ω its topological dual space. Likewise ⇀∗ denotes convergence in this topology.\nLemma 5. Assume (Ω, c) is a compact Polish space and µ ∈ P(Ω) is non-atomic. For any ν? ∈ P(Ω) and r > 0 there is a sequence (fi)i∈N ⊆ Aµ(r) := { f ∈ L0(Ω,Ω) | ∫ cd(id, f)#µ ≤ r } with (fi)#µ ⇀∗ ν?.\nProof. Let P (µ, ν) := {f ∈ L0(X,X) | f#µ = ν}. Since µ is non-atomic and c is continuous Pratelli (2007, Thm. B) shows\n∀ν ∈ P(Ω) : inf f∈P (µ,ν)\n∫ cd(id, f)#µ = costc(µ, ν).\nLet r? := costc(µ, ν?), obviously r? ≤ r. Assume r? > 0, otherwise the lemma is trivial. Fix a sequence (εk)k∈N ⊆ (0, r?) with εk → 0. For u ≥ 0 let ν(u) := µ+ u(ν? − µ). Then\ncostc(µ, ν(0)) = 0 and costc(µ, ν(1)) = r?,\nand because costc metrises the weak∗ topology on P(Ω) (Villani, 2008, Thm. 6.9), the mapping u 7→ costc(µ, ν(u)) is continuous. Then by the intermediate value theorem for every k ∈ N there is some uk > 0 with costc(µ, ν(uk)) = r? − εk, forming a sequence (uk)k∈N ⊆ [0, 1]. Then for every k there is a sequence (fjk)j∈N ⊆ P (µ, ν(uk)) so that (fjk)#µ ⇀ ∗ν(k) and\nlim j∈N\n∫ cd(id, fjk)#µ = inf\nf∈P (µ,ν(k))\n∫ cd(id, fk)#µ = costc(µ, ν(k)) = r ? − εk.\nTherefore for every k ∈ N there exists jk ≥ 0 so that for every j ≥ jk∫ cd(id, fjk)#µ ≤ r?. (3)\nLet us pass directly to this subsequence of (fjk)j∈N for every k ∈ N so that (3) holds for all j, k ∈ N. Next by construction we have ν(uk) → ν?. Therefore (fjk)j,k∈N has a subsequence in k so that (fjk)#µ ⇀ ∗ ν?. By ensuring (3) is satisfied, the sequences (fjk)j∈N ⊆ Aµ(r) for every k ∈ N.\nTheorem 2. Assume (X, c) is a separable Banach space. Fix µ ∈ P(X) and let Rµ(r) := {g ∈ L0(X,R≥0) | ∫ g dµ ≤ r}. 
Then for f ∈ L0(Ω, R̄), r > 0 there is\nvariable-radius risk := sup g∈Rµ(r)\n∫ µ(dω) sup\nω′∈Bc(ω,g(ω)) `f (ω ′) ≤ sup ν∈Bc(µ,r) risk`(f, ν) = DRR.\nThe equality holds in (6) if µ is non-atomically concentrated on a compact subset of X , on which f is continuous with the subspace topology.\nProof. Inequality (6). For g ∈ Rµ(r), let Γg : X ⇒ X denote the set-valued mapping with Γg(x) := Bc(x, g(x)). Let L0(X,Γg) denote the set of Borel a : X → X so that a(x) ∈ Γg(x) for µ-almost all x ∈ X . Let Aµ(r) := ⋃ g∈Rµ(r) L0(X,Γg). Clearly for every a ∈ Aµ(r) there is\nr ≥ ∫ c(x, a(x)) dµ = ∫ cd(id, a)#µ,\nwhich shows {a#µ | a ∈ Aµ(r)} ⊆ Bc(µ, r). Then if there is equality in (4), we have\nsup g∈Rµ(r)\n∫ sup\nx′∈Γg(x) f(x) = sup g∈Rµ(r) sup a∈L0(X,Γg)\n∫ f da#µ (4)\n= sup a∈Aµ(r)\n∫ f da#µ\n≤ sup ν∈Bc(µ,r)\n∫ f da#ν,\nwhich proves the inequality (6).\nEquality (4). To complete the proof we will now justify the exchange of integration and supremum in (4). The set L0(X,Γg) is trivially decomposable (Giner, 2009, see the remark at the bottom of p. 323, Def. 2.1). By assumption f is Borel measurable. Since f is measurable, any decomposable subset of L0(X,X) is f -decomposable (Giner, 2009, Prop. 5.3) and f -linked (Giner, 2009, Prop. 3.7 (i)). Giner (2009, Thm. 6.1 (c)) therefore allows us to exchange integration and supremum in (4).\nEquality in (6). Under the additional assumptions there exists ν? ∈ P(Ω) with (via Blanchet & Murthy, 2019, Prop. 2) ∫\nf dν? = sup ν∈Bc(µ,r)\n∫ f dν.\nThe compact subset where µ is concentrated and non-atomic is a Polish space with the Banach metric. Therefore Using Lemma 5 there is a sequence (fi)i∈N ⊆ Aµ(r) so that\nlim i∈N\n∫ fi dµ = ∫ f dν? = sup\nν∈Bc(µ,r)\n∫ f dν,\nproving equality in (6)." }, { "heading": "PROOFS FOR §4: PROVABLE LIPSCHITZ REGULARISATION FOR KERNEL METHODS", "text": "Theorem 3. Suppose k0, X0, and ν0 satisfy Assumptions 1 and 2. Let {wsj : s ∈ [n], j ∈ [d]} be sampled i.i.d. from ν0. Then for any f whose coordinate-wise Nyström approximation (11) and (12) satisfy λmax(P̃G) ≤ L2, the Lipschitz condition λmax(G>G) ≤ L2 + ε is met with probability 1− δ, as long as n ≥ Θ̃ ( 1 ε2N 2 εM 2 εQ 2 ε log dNε δ ) , almost independent of d. Here Θ̃ hides all poly-log terms." }, { "heading": "PROOFS AND MORE RESULTS FOR §4: KERNEL APPROXIMATION", "text": "" }, { "heading": "A.1 RANDOM SAMPLING REQUIRES EXPONENTIAL COST", "text": "The most natural idea of leveraging the samples is to add the constraints ‖g(ws)‖ ≤ L. For Gaussian kernel, we may sample from N (0, σ2I) while for inverse kernel we may sample uniformly from B. This leads to our training objective:\nmin f∈H\n1\nl l∑ i=1 loss(f(xi), yi) + λ 2 ‖f‖2H s.t. ‖g(w s)‖ ≤ L, ∀s ∈ [n].\nUnfortunately, this method may require O( 1 εd ) samples to guarantee ∑ j ‖gj‖ 2 H ≤ L\n2 + ε w.h.p. This is illustrated in Figure 6, where k is the polynomial kernel with degree 2 whose domain X is the unit ball B, and f(x) = 12 (v\n>x)2. We seek to test whether the gradient g(x) = (v>x)v has norm bounded by 1 for all x ∈ B, and we are only allowed to test whether ‖g(ws)‖ ≤ 1 for samples ws that are drawn uniformly at random from B. This is equivalent to testing ‖v‖ ≤ 1, and to achieve it at least one ws must be from the ε ball around v/ ‖v‖ or −v/ ‖v‖, intersected with B. But the probability of hitting such a region decays exponentially with the dimensionality d. 
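This failure mode is easy to reproduce numerically. Below is a small, self-contained Monte-Carlo sketch (Python/NumPy); the choice ‖v‖ = 1.2 and the sample sizes are arbitrary illustration values, not taken from the paper:

import numpy as np

rng = np.random.default_rng(0)

def sample_ball(n, d):
    # Draw n points uniformly from the unit ball B in R^d.
    g = rng.standard_normal((n, d))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    return g * rng.uniform(size=(n, 1)) ** (1.0 / d)

for d in (2, 10, 100):
    v = np.full(d, 1.2 / np.sqrt(d))                 # ||v|| = 1.2, so sup_x ||g(x)|| = ||v||^2 = 1.44
    w = sample_ball(100_000, d)
    grad_norms = np.abs(w @ v) * np.linalg.norm(v)   # ||g(w)|| = |v^T w| * ||v||
    print(d, grad_norms.max())                       # falls far below 1.44 as d grows

For large d, every sampled constraint ‖g(ws)‖ ≤ 1 is satisfied even though the true Lipschitz constant is 1.44, so the violation goes undetected.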
The key insight from the above counter-example is that in fact ‖v‖ can be easily computed by∑d s=1(v >w̃s) 2, where {w̃s}ds=1 is the orthonormal basis computed from the Gram–Schmidt process on d random samples {ws}ds=1 (n = d). With probability 1, n samples drawn uniformly from B must span Rd as long as n ≥ d, i.e., rank(W ) = d where W = (w1, . . . , wn). The Gram–Schmidt process can be effectively represented using a pseudo-inverse matrix (allowing n > d) as\n‖v‖2 = ∥∥∥(W>W )−1/2W>v∥∥∥\n2 ,\nwhere (W>W )−1/2 is the square root of the pseudo-inverse of W>W . This is exactly the intuition underlying the Nyström approximation that we will leveraged." }, { "heading": "A.2 SPECTRUM OF KERNELS", "text": "Let k be a continuous kernel on a compact metric space X , and µ be a finite Borel measure on X with supp[µ] = X . We will re-describe the following spectral properties in a more general way than\nin §4. Recall from Chapter 4 of Steinwart & Christmann (2008) that the integral operator for k and µ is defined by\nTk = Ik ◦ Sk : L2(X,µ)→ L2(X,µ) where Sk : L2(X;µ)→ C(X), (Skf)(x) = ∫ k(x, y)f(y)dµ(y), f ∈ L2(X,µ),\nIk : C(X) ↪→ L2(X;µ), inclusion operator.\nBy the spectral theorem, if Tk is compact, then there is an at most countable orthonormal set (ONS) {ẽj}j∈J of L2(X,µ) and {λj}j∈J with λ1 ≥ λ2 ≥ . . . > 0 such that\nTf = ∑ j∈J λj 〈f, ẽj〉L2(X;µ) ẽj , f ∈ L2(X,µ).\nIn particular, we have 〈ẽi, ẽj〉L2(X;µ) = δij (i.e., equals 1 if i = j, and 0 otherwise), and T ẽi = λiẽi. Since ẽj is an equivalent class instead of a single function, we assign a set of continuous functions ej = λ −1 j Skẽj ∈ C(X), which clearly satisfies\n〈ei, ej〉L2(X;µ) = δij , T ej = λjej .\nWe will call λj and ej as eigenvalues and eigenfunctions respectively, and {ej}j∈J clearly forms an ONS. By Mercer’s theorem,\nk(x, y) = ∑ j∈J λjej(x)ej(y), (5)\nand all functions in H can be represented by ∑ j∈J ajej where {aj/ √ λj} ∈ `2(J). The inner\nproduct inH is equivalent to 〈∑ j∈J ajej , ∑ j∈J bjej 〉 H = ∑ j∈J ajbj/λj . Therefore it is easy to see that\nϕj := √ λjej , j ∈ J\nis an orthonormal basis of H, with Moreover, for all f ∈ H with f = ∑ j∈J ajej , we have\n〈f, ej〉H = aj/λj , 〈f, ϕj〉H = aj/ √ λj , and\nf = ∑ j 〈f, ϕj〉H ϕj = ∑ j √ λj 〈f, ej〉H ϕj = ∑ j λj 〈f, ej〉H ej .\nMost kernels used in machine learning are infinite dimensional, i.e., J = N. For convenience, we define Φm := (ϕ1, . . . , ϕm) and Λm = diag(λ1, . . . , λm)." }, { "heading": "A.3 GENERAL SAMPLE COMPLEXITY AND ASSUMPTIONS ON THE PRODUCT KERNEL", "text": "In this section, we first consider kernels k0 with scalar input, i.e., X0 ⊆ R. Assume there is a measure µ0 on X0. This will serve as the basis for the more general product kernels in the form of k(x, y) = ∏d j=1 k0(xj , yj) defined over X d 0 .\nWith Assumptions 1 and 2, we now state the formal version of Theorem 3 by first providing the sample complexity for approximating the partial derivatives. In the next subsection, we will examine how three different kernels satisfy/unsatisfy the Assumptions 1 and 2, and what the value of Nε is. For each case, we will specify µ0 on X0, and the measure on Xd0 is trivially µ = µ d 0. Theorem 4. Suppose {ws}ns=1 are drawn iid from µ0 on X0, where µ0 is the uniform distribution on [−v/2, v/2] for periodic kernels or periodized Gaussian kernels. Let Z := (k0(w 1, ·), k0(w2, ·), . . . 
, k0(wn, ·)), and g1 = 1l ∑l a=1 γag a 1 : X d 0 → R, where ‖γ‖∞ ≤ c1 and\nga1 (y) = ∂ 0,1k(xa, y) = ha1(y1) d∏ j=2 k0(x a j , yj) with h a 1(·) := ∂0,1k0(xa1 , ·).\nGiven ε ∈ (0, 1], let Φm = (ϕ1, . . . ϕm) where m = Nε. Then with probability 1− δ, the following holds when the sample size n = max(Nε, 53ε2NεQ 2 ε log 2Nε δ ):\n‖g1‖2H ≤ 1 l2 γ>K1γ + 3c1\n( 1 + 2 √ NεMε ) ε, (6)\nwhere (K1)a,b = (ha1) >Z(Z>Z)−1Z>hb1 d∏ j=2 k0(x a j , x b j).\nThen we obtain the formal statement of sample complexity, as stated in the following corollary, by combining all the coordinates from Theorem 4. Corollary 1. Suppose all coordinates share the same set of samples {ws}ns=1. Applying the results in (6) for coordinates from 1 to d and using the union bound, we have that with sample size n = max(Nε, 5 3ε2NεQ 2 ε log 2Nε δ ), the following holds with probability 1− dδ,\nλmax(G >G) ≤ λmax(P̃G) + 3c1 ( 1 + 2 √ NεMε ) ε. (7)\nEquivalently, if Nε, Mε and Qε are constants or poly-log terms of ε which we treat as constant, then to ensure λmax(G>G) ≤ λmax(P̃G) + ε with probability 1− δ, the sample size needs to be\nn = 15\nε2 c21\n( 1 + 2 √ NεMε )2 NεQ 2 ε log\n2dNε δ .\nRemark 1. The first term on the right-hand side of (7) is explicitly upper bounded by L2 in our training objective. In the case of Theorem 1, the values of Qε, Nε, and Mε lead to a Õ( 1ε2 ) sample complexity. If we further zoom into the dependence on the period v, then note that Nε is almost a universal constant while Mε = √ 2π v (Nε − 1). So overall, n depends on v by 1 v2 . This is not surprising because smaller period means higher frequency, hence more samples are needed. Remark 2. Corollary 1 postulates that all coordinates share the same set of samples {ws}ns=1. When coordinates differ in their domains, we can draw different sets of samples for them. The sample complexity hence grows by d times as we only use a weak union bound. More refined analysis could save us a factor of d as these sets of samples are independent of each other.\nProof of Theorem 4. Let ε′ := (1 + 2 √ mMε)ε. Since 〈 ga1 , g b 1 〉 H = 〈 ha1 , h b 1 〉 H0 ∏d j=2 k0(x a j , x b j)\nand ∣∣k0(xaj , xbj)∣∣ ≤ 1, it suffices to show that for all a, b ∈ [l],∣∣∣〈ha1 , hb1〉H0 − (ha1)>Z(Z>Z)−1Z>hb1∣∣∣ ≤ 3ε′.\nTowards this end, it is sufficient to show that for any h(·) = ϑx∂0,1k0(x, ·) + ϑy∂0,1k0(y, ·) where x, y ∈ X0 and |ϑx|+ |ϑy| ≤ 1, we have∣∣∣h>Z(Z>Z)−1Z>h− ‖h‖2H0∣∣∣ ≤ ε′. (8)\nThis is because, if so, then∣∣∣〈ha1 , hb1〉H0 − (ha1)>Z(Z>Z)−1Z>hb1∣∣∣ =\n∣∣∣∣12(∥∥ha1 + hb1∥∥2H0 − ‖ha1‖2H0 − ∥∥hb1∥∥2H0)− 12 [(ha1 + hb1)>Z(Z>Z)−1Z>(ha1 + hb1) −(ha1)>Z(Z>Z)−1Z>ha1 − (hb1)>Z(Z>Z)−1Z>hb1\n]∣∣ ≤1\n2 (4ε′ + ε′ + ε′) = 3ε′.\nThe rest of the proof is devoted to (8). Since n ≥ m, the SVD ofΛ−1/2m Φ>mZ can be written asUΣV >, where UU> = U>U = V >V = Im (m-by-m identity matrix), and Σ = diag(σ1, . . . , σm). Define\nα = n−1/2V U>Λ−1/2m Φ > mh.\nConsider the optimization problem o(α) := 12 ‖Zα− h‖ 2 H0 . It is easy to see that its minimal objective value is o∗ := 12 ‖h‖ 2 H0 − 1 2h >Z(Z>Z)−1Z>h. So\n0 ≤ 2o∗ = ‖h‖2H0 − h >Z(Z>Z)−1Z>h ≤ 2o(α).\nTherefore to prove (8), it suffices to bound o(α) = ‖Zα− h‖H0 . Since √ nΦmΛ 1/2UV >α = ΦmΦ > mh, we can decompose ‖Zα− h‖H0 by\n‖Zα− h‖H0 ≤ ∥∥(Z − ΦmΦ>mZ)α∥∥H0 + ∥∥∥(ΦmΦ>mZ −√nΦmΛ1/2m UV >)α∥∥∥H0 (9) + ∥∥ΦmΦ>mh− h∥∥H0 .\nThe last term ∥∥ΦmΦ>mh− h∥∥H0 is clearly below ε because by Assumption 1 and m = Nε∥∥ΦmΦ>mh− h∥∥H0\n≤ |ϑx| ∥∥ΦmΦ>m∂0,1k0(x, ·)− ∂0,1k0(x, ·)∥∥H0 + |ϑy|∥∥ΦmΦ>m∂0,1k0(y, ·)− ∂0,1k0(y, ·)∥∥H0\n≤(|ϑx|+ |ϑy|)ε ≤ ε. 
We will next bound the first two terms on the right-hand side of (9).\n(i) By Assumption 1, ∥∥k0(ws, ·)− ΦmΦ>mk0(ws, ·)∥∥H0 ≤ ε, hence ∥∥(Z − ΦmΦ>mZ)α∥∥H0 ≤\nε √ n ‖α‖2. To bound ‖α‖2, note all singular values of V U> are 1, and so Assumption 2 implies that for all i ∈ [m],∣∣∣λ−1/2j 〈ϕj , h〉H0∣∣∣ = ∣∣∣〈ej , h〉H0 ∣∣∣ = ∣∣∣〈ej , ϑx∂0,1k0(x, ·) + ϑy∂0,1k0(y, ·)〉H0∣∣∣ (10) ≤ sup x∈X\n∣∣∣〈ej , ∂0,1k(x, ·)〉H0∣∣∣ ≤Mε. As a result, ∥∥(Z − ΦmΦ>mZ)αj∥∥H0 ≤ εn1/2 · n−1/2 ∥∥∥Λ−1/2m Φ>mh∥∥∥ ≤ ε√mMε. (ii) We first consider the concentration of the matrix R := 1nΛ −1/2 m Φ>mZZ >ΦmΛ −1/2 m ∈ Rm×m. Clearly,\nE {ws} [Rij ] = E {ws}\n[ 1\nn n∑ s=1 ei(ws)ej(ws)\n] = ∫ ei(x)ej(x) dµ(x) = δij .\nBy matrix Bernstein theorem (Tropp, 2015, Theorem 1.6.2), we have Pr ( ‖R− Im‖sp ≤ ε ) ≥ 1− δ\nwhen n ≥ O(.). This is because ‖(e1(x), . . . , em(x))‖2 ≤ mQ2ε, ∥∥E{ws}[RR>]∥∥sp ≤ mQ2ε/n, and\nPr ( ‖R− Im‖sp ≤ ε ) ≥ 1− 2m exp\n( −ε2\nmQ2ε n ( 1 + 23ε )) ≥ 1− 2m exp( −ε25mQ2ε 3n ) ≥ 1− δ,\nwhere the last step is by the definition of n. Since R = 1nUΣ 2U>, this means with probability 1− δ,∥∥ 1\nnUΣ 2U> − Im ∥∥ sp ≤ ε. So for all i ∈ [m],\n∣∣∣∣ 1nσ2i − 1 ∣∣∣∣ ≤ ε which implies ∣∣∣∣ 1√nσi − 1 ∣∣∣∣ < ε ∣∣∣∣ 1√nσi + 1 ∣∣∣∣−1 ≤ ε. (11)\nMoreover, λ1 ≤ 1 since k0(x, x) = 1. It then follows that∥∥∥(ΦmΦ>mZ −√nΦmΛ1/2m UV >)α∥∥∥H0 =\n∥∥∥∥ΦmΛ1/2m UΣV > 1√nV U>Λ−1/2m Φ>mh−√nΦmΛ1/2m UV > 1√nV U>Λ−1/2m Φ>mh ∥∥∥∥ H0\n= ∥∥∥∥Λ1/2m U( 1√nΣ − Im ) U>Λ−1/2m Φ > mh ∥∥∥∥ 2\n(because Φ>mΦm = Im)\n≤ √ λ1 max\ni∈[m] ∣∣∣∣ 1√nσi − 1 ∣∣∣∣ ∥∥∥Λ−1/2m Φ>mh∥∥∥ 2\n≤ε √ mMε (by (11), (10), and λ1 ≤ 1).\nCombining (i) and (ii), we arrive at the desired bound in (6).\nProof of Corollary 1. Since P̃G approximates G>G only on the diagonal, P̃G −G>G is a diagonal matrix which we denote as diag(δ1, . . . , δd). Let u ∈ Rd be the leading eigenvector of P̃G. Then\nλmax(P̃G)− λmax(G>G) ≤ u>P̃Gu− u>G>Gu = u>(P̃G −G>G)u = ∑ j δju 2 j\n(by (6)) ≤ 3c1 ( 1 + 2 √ NεMε ) ε.\nThe proof is completed by applying the union bound and rewriting the results." }, { "heading": "A.4 CASE 1: CHECKING ASSUMPTIONS 1 AND 2 ON PERIODIC KERNELS", "text": "Periodic kernels on X0 := R are translation invariant, and can be written as k0(x, y) = κ(x − y) where κ : R→ R is a) periodic with period v; b) even, with κ(−t) = κ(t); and c) normalized with κ(0) = 1. A general treatment was given by Williamson et al. (2001), and an example was given by David MacKay in MacKay (1998):\nk0(x, y) = exp\n( − 1 2σ2 sin (π v (x− y) )2) . (12)\nWe define µ0 to be a uniform distribution on [−v2 , v 2 ], and let ω0 = 2π/v.\nSince κ is symmetric, we can simplify the Fourier transform of κ(t)δv(t), where δv(t) = 1 if t ∈ [−v/2, v/2], and 0 otherwise:\nF (ω) = 1√ 2π ∫ v/2 −v/2 κ(t) cos(ωt) dt.\nIt is now easy to observe that thanks to periodicity and symmetry of κ, for all j ∈ Z,\n1\nv ∫ v/2 −v/2 k0(x, y) cos(jω0y) dy = 1 v ∫ v/2 −v/2 κ(x− y) cos(jω0y) dy\n= 1\nv ∫ x+v/2 x−v/2 κ(z) cos(jω0(x− z)) dz (note cos(jω0(x− z)) also has period v)\n= 1\nv ∫ v/2 −v/2 κ(z)[cos(jω0x) cos(jω0z) + sin(jω0x) sin(jω0z)) dz (by periodicity)\n= 1\nv cos(jω0x) ∫ v/2 −v/2 κ(z) cos(jω0z) dz (by symmetry of κ)\n=\n√ 2π\nv F (jω0) cos(jω0x).\nAnd similarly,\n1\nv ∫ v/2 −v/2 k0(x, y) sin(jω0y) dy = √ 2π v F (jω0) sin(jω0x).\nTherefore the eigenfunctions of the integral operator Tk are\ne0(x) = 1, ej(x) := √ 2 cos(jω0x), e−j(x) := √ 2 sin(jω0x) (j ≥ 1)\nand the eigenvalues are λj = √ 2π v F (jω0) for all j ∈ Z with λ−j = λj . An important property our proof will rely on is that\ne′j(x) = −jω0e−j(x), for all j ∈ Z. 
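These eigenpairs can be sanity-checked numerically by discretising the integral operator of the periodic kernel on a uniform grid over [−v/2, v/2]. The sketch below is illustrative only; the grid size and the values v = 2π, σ = 1 are arbitrary choices:

import numpy as np

v, sigma, n = 2 * np.pi, 1.0, 2000                 # illustrative period, bandwidth, grid size
x = np.linspace(-v / 2, v / 2, n, endpoint=False)

def k0(s, t):                                      # MacKay's periodic kernel (12)
    return np.exp(-np.sin(np.pi / v * (s - t)) ** 2 / (2 * sigma ** 2))

K = k0(x[:, None], x[None, :])
lam = np.sort(np.linalg.eigvalsh(K / n))[::-1]     # eigenvalues of the discretised integral operator
print(lam[:7])                                     # lambda_0, then equal pairs lambda_j = lambda_{-j}
print(lam.sum())                                   # approx 1, since k0(x, x) = 1 implies trace(K / n) = 1

The leading eigenvalues appear in equal pairs, matching lambda_{-j} = lambda_j above.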
Applying Mercer’s theorem in (5) and noting κ(0) = 1, we derive ∑ j∈Z λj = 1.\nChecking the Assumptions 1 and 2. The following theorem summarizes the assumptions and conclusions regarding the satisfaction of Assumptions 1 and 2. Again we focus on the case of X ⊆ R. Theorem 1. Suppose the periodic kernel with period v has eigenvalues λj that satisfies\nλj(1 + j) 2 max(1, j2)(1 + δ(j ≥ 1)) ≤ c6 · c−j4 , for all j ≥ 0, (13)\nwhere c4 > 1 and c6 > 0 are universal constants. Then Assumption 1 holds with\nNε = 1 + 2 bnεc , where nε := logc4 ( 2.1c6 ε2 max ( 1, v2 4π2 )) . (14)\nIn addition, Assumption 2 holds with Qε = √ 2 and Mε = 2 √ 2π v bnεc = √ 2π v (Nε − 1).\nFor example, if we set v = π and σ2 = 1/2 in the kernel in (12), elementary calculation shows that the condition (13) is satisfied with c4 = 2 and c6 = 1.6.\nProof of Theorem 1. First we show that h(x) := ∂0,1k0(x0, x) is in H0 for all x0 ∈ X0. Since k0(x0, x) = ∑ j∈Z λjej(x0)ej(x), we derive\nh(x) = ∑ j∈Z λjej(x0)∂ 1ej(x) = ∑ j∈Z λjej(x0)(−jω0e−j(x)) = ω0 ∑ j∈Z λjje−j(x0)ej(x). (15)\nh(x) is inH if the sequence λjje−j(x0)/ √ λj is square summable. This can be easily seen by (13):\nω−20 ‖h‖ 2 H0 = ∑ j λjj 2e2−j(x0) = ∑ j∈Z λjj 2e2−j(x0)\n= ∑ j∈Z λjj 2e2−j(x0) = λ0 + 2 ∑ j≥1 j2λj ≤ 2c4c5 c4 − 1 .\nFinally to derive Nε, we reuse the orthonormal decomposition of h(x) in (15). For a given set of j values A where A ⊆ Z, we denote as ΦA the “matrix” whose columns enumerate the ϕj over j ∈ A. Let us choose\nA := { j : λj max(1, j 2)(1 + j2)(1 + δ(j ≥ 1)) ≥ min(1, w−20 ) ε2\n2.1\n} .\nIf j ∈ A, then −j ∈ A. LettingN0 = {0, 1, 2, . . .}, we note ∑ j∈N0\n1 1+j2 ≤ 2.1. So∥∥h− ΦAΦ>Ah∥∥2H0 = w20 ∑\nj∈Z\\A\nλjj 2e2−j(x0)\n= w20 ∑\nj∈N0\\A\nλjj 2 [ (e2j (x) + e 2 −j(x))δ(j ≥ 1) + δ(j = 0) ] = w20\n∑ j∈N0\\A λjj 2(1 + δ(j ≥ 1))\n= w20 ∑\nj∈N0\\A\n{ λjj\n2(1 + j2)(1 + δ(j ≥ 1)) 1 1 + j2\n}\n≤ ε 2\n2.1 ∑ j∈N0 1 1 + j2 = ε2 2.1 ∑ j∈N0 1 1 + j2 ≤ ε2.\nSimilarly, we can bound ∥∥k0(x0, ·)− ΦAΦ>Ak0(x0, ·)∥∥H0 by∥∥k0(x0, ·)− ΦAΦ>Ak0(x0, ·)∥∥2H0\n= ∑ j∈Z\\A λje 2 j (x0) ≤ ∑ j∈Z\\A λj max(1, j 2)e2j (x0)\n= ∑\nj∈N0\\A\nλαmax(1, j 2)[ ( e2j (x) + e 2 −j(x) ) δ(j ≥ 1) + δ(j = 0)]\n= ∑\nj∈N0\\A\n{ λj max(1, j\n2)(1 + j2)(1 + δ(j ≥ 1)) 1 1 + j2 } ≤ 1\n2.1 ε2 ∑ j∈N0 1 1 + j2 ≤ ε2.\nTo upper bound the cardinality of A, we consider the conditions for j /∈ A. Thanks to the conditions in (13), we know that any j satisfying the following relationship cannot be in A:\nc6 · c−|j|4 < min(1, w −2 0 )\nε2 2.1 ⇔ c−|j|4 <\n1\n2.1 · c6 min\n( 1, 4π2\nv2\n) ε2.\nSo A ⊆ {j : |j| ≤ nε}, which yields the conclusion (14). Finally Qε ≤ √\n2, and to bound Mε, we simply reuse (15). For any j with |j| ≤ nε,∣∣〈h, ej〉H∣∣ ≤ ω0 |je−j(x0)| ≤ 2πv √2 bnεc = √ 2π v (Nε − 1)." }, { "heading": "A.5 CASE 2: CHECKING ASSUMPTIONS 1 AND 2 ON GAUSSIAN KERNELS", "text": "Gaussian kernels k(x, y) = exp(−‖x− y‖2 /(2σ2)) are obviously product kernels with k0(x1, y1) = κ(x1 − y1) = exp(−(x1 − y1)2/(2σ2)). It is also translation invariant. The spectrum of Gaussian kernel k0 on R is known; see, e.g., Chapter 4.3.1 of Rasmussen & Williams (2006) and Section 4 of Zhu et al. (1998). Let µ be a Gaussian distributionN (0, σ2). Setting ε2 = α2 = (2σ2)−1\nin Eq 12 and 13 of E Fasshauer (2011), the eigenvalue and eigenfunctions are (for j ≥ 0):\nλj = c −j−1/2 0 , where c0 =\n1 2 (3 +\n√ 5)\nej(x) = 51/8\n2j/2 exp\n( − √\n5− 1 4 x2 σ2 ) 1√ j! 
Hj ( 4 √ 1.25 x σ ) ,\nwhere Hj is the Hermite polynomial of order j.\nAlthough the eigenvalues decay exponentially fast, the eigenfunctions are not uniformly bounded in the L∞ sense. Although the latter can be patched if we restrict x to a bounded set, the above closed-form of eigen-pairs will no longer hold, and the analysis will become rather challenging.\nTo resolve this issue, we resort to the period-ization technique proposed by Williamson et al. (2001). Consider κ(x) = exp(−x2/(2σ2)) when x ∈ [−v/2, v/2], and then extend κ to R as a periodic function with period v. Again let µ be the uniform distribution on [−v/2, v/2]. As can be seen from the discriminant function f = 1l ∑l i=1 γik(x\ni, ·), as along as our training and test data both lie in [−v/4, v/4], the modification of κ outside [−v/2, v/2] does not effectively make any difference. Although the term ∂0,1k0(xa1 , w 1 1) in (13) may possibly evaluate κ outside [−v/2, v/2], it is only used for testing the gradient norm bound of κ. With this periodized Gaussian kernel, it is easy to see that Qε = √\n2. If we standardize by σ = 1 and set v = 5π as an example, it is not hard to see that (13) holds with c4 = 1.25 and c6 = 50. The expressions of Nε and Mε then follow from Theorem 1 directly." }, { "heading": "A.6 CASE 3: CHECKING ASSUMPTIONS 1 AND 2 ON NON-PRODUCT KERNELS", "text": "The above analysis has been restricted to product kernels. But in practice, there are many useful kernels that are not decomposable. A prominent example is the inverse kernel: k(x, y) = (2−x>y)−1. In general, it is extremely challenging to analyze eigenfunctions, which are commonly not bounded (Lafferty & Lebanon, 2005; Zhou, 2002), i.e., supi→∞ supx |ei(x)| = ∞. The opposite was (incorrectly) claimed in Theorem 4 of Williamson et al. (2001) by citing an incorrect result in König (1986, p. 145), which was later corrected by Zhou (2002) and Steve Smale. Indeed, uniform boundedness is not known even for Gaussian kernels with uniform distribution on [0, 1]d Lin et al. (2017), and (Minh et al., 2006, Theorem 5) showed the unboundedness for Gaussian kernels with uniform distribution on the unit sphere when d ≥ 3. Here we only present the limited results that we have obtained on the eigenvalues of the integral operator of inverse kernels with a uniform distribution on the unit ball. The analysis of eigenfunctions is left for future work. Specifically, in order to drive the eigenvalue λi below ε, i must be at least ddlog2 1 εe+1. This is a quasi-quadratic bound if we view d and 1/ε as two large variables.\nIt is quite straightforward to give an explicit characterization of the functions in H. The Taylor expansion of z−1 at z = 2 is 12 ∑∞ i=0(− 1 2 ) ixi. Using the standard multi-index notation with\nα = (α1, . . . , αd) ∈ (N ∪ {0})d, |α| = ∑d i=1 αi, and x α = xα11 . . . x αd d , we derive\nk(x,y) = 1\n2− x>y =\n1\n2 ∞∑ k=0 ( −1 2 )k (−x>y)k = ∞∑ k=0 2−k−1 ∑ α:|α|=k Ckαx αyα\n= ∑ α 2−|α|−1C |α|α x αyα,\nwhere Ckα = k!∏d\ni=1 αi! . So we can read off the feature mapping for x as\nϕ(x) = {wαxα : α}, where wα = 2− 1 2 (|α|+1)C |α|α ,\nand the functions inH are\nH = { f =\n∑ α ϑαwαx α : ‖ϑ‖`2 <∞\n} . (16)\nNote this is just an intuitive “derivation” while a rigorous proof for (16) can be constructed in analogy to that of Theorem 1 in Minh (2010)." 
}, { "heading": "A.7 BACKGROUND OF EIGENVALUES OF A KERNEL", "text": "We now use (16) to find the eigenvalues of inverse kernel.\nNow specializing to our inverse kernel case, let us endow a uniform distribution over the unit ball B: p(x) = V −1d where Vd = π d/2Γ (d2 + 1) −1 is the volume of B, with Γ being the Gamma function. Then λ is an eigenvalue of the kernel if there exists f = ∑ α ϑαwαx\nα such that∫ y∈B k(x,y)p(y)f(y) dy = λf(x). This translates to\nV −1d ∫ y∈B ∑ α w2αx αyα ∑ β ϑβwβy β dy = λ ∑ α ϑαwαx α, ∀ x ∈ B.\nSince B is an open set, that means wα ∑ β wβqα+βϑβ = λϑα, ∀ α,\nwhere\nqα = V −1 d ∫ y∈B yα dy = 2 ∏d i=1 Γ ( 1 2αi+ 1 2 ) Vd·(|α|+d)·Γ ( 1 2 |α|+ d 2 ) if all αi are even 0 otherwise .\nIn other words, λ is the eigenvalue of the infinite dimensional matrix Q = [wαwβqα+β]α,β," }, { "heading": "A.8 BOUNDING THE EIGENVALUES", "text": "To bound the eigenvalues of Q, we resort to the majorization results in matrix analysis. Since k is a PSD kernel, all its eigenvalues are nonnegative, and suppose they are sorted decreasingly as λ1 ≥ λ2 ≥ . . .. Let the row corresponding to α have `2 norm rα, and let them be sorted as r[1] ≥ r[2] ≥ . . .. Then by Schneider (1953); Shi & Wang (1965), we have\nn∏ i=1 λi ≤ n∏ i=1 r[i], ∀ n ≥ 1.\nSo our strategy is to bound rα first. To start with, we decompose qα+β into qα and qβ via CauchySchwartz:\nq2α+β = V −2 d (∫ y∈B yα+β dy )2 ≤ V −2d ∫ y∈B y2α dy · ∫ y∈B y2β dy = q2αq2β.\nTo simplify notation, we consider without loss of generality that d is an even number, and denote the integer b := d/2. Now Vd = πb/b!. Noting that there are ( k + d− 1\nk\n) values of β such that\n|β| = k, we can proceed by (fix below by changing ( k + d k ) into ( k + d− 1 k ) , or no need\nbecause the former upper bounds the latter)\nr2α = w 2 α ∑ β w2βq 2 α+β ≤ w2αq2α ∑ β w2βq2β = w 2 αq2α ∞∑ k=0 2−k−1 ∑ β:|β|=k Ckβq2β\n≤ w2αq2α ∞∑ k=0 2−k−1 ( k + d d ) max |β|=k Ckβq2β\n= w2αq2α ∞∑ k=0 2−k−1 ( k + d d ) max |β|=k k!∏d i=1 βi! · 2 ∏d i=1 Γ (βi + 1 2 ) Vd · (2k + d) · Γ (k + d2 )\n= w2αq2αV −1 d ∞∑ k=0 2−k ( k + d d ) k! (2k + d)Γ (k + d2 ) · max |β|=k d∏ i=1 Γ (βi + 1 2 ) βi!\n< w2αq2α · b! πbd! · ∞∑ k=0 2−k−1 (k + d)! (k + b)! (since Γ (βi + 12 ) < Γ (βi + 1) = βi!).\nThe summation over k can be bounded by\n∞∑ k=0 2−k−1 (k + d)! (k + b)! = 1 2 b! ( 2d + ( d b )) ≤ 1 2 ( b!2d + 2b ) ≤ b!2d,\nwhere the first equality used the identity ∑∞ k=1 2 −k ( d+ k b ) = 2d. Letting l := |α|, we can\ncontinue by\nr2α < w 2 αq2α ·\nb!\nπbd! b!2d = 2−l−1 l!∏d i=1 αi!\n2 ∏d i=1 Γ ( αi + 1 2 ) Vd · (2l + d) · Γ (l + b) (b!)22d πbd!\n≤ 2−l+dπ−2b l!(b!) 3\nd!(l + b− 1)!(2l + d) (since Γ (αi + 12 ) < Γ (αi + 1) = αi!)\n≤ 2−l+b−1π−2b ( l + b l )−1 (since (b!)2 d! ≤ 2−b).\nThis bound depends on α, not directly on α. Letting nl = ( l + d− 1\nl\n) and NL = ∑L l=0 nl =(\nd+ L L\n) , it follows that\nL∑ l=0 lnl = L∑ l=1 l(l + d)! d! · l! = (d+ 1) L∑ l=1\n(l + d)!\n(d+ 1)!(l − 1)!\n=(d+ 1) L∑ l=1 ( l + d d+ 1 ) = (d+ 1) ( L+ d+ 1 d+ 2 ) .\nNow we can bound λNL by\nλNLNL ≤ NL∏ i=1 λi ≤ L∏ l=0\n( 2−l+b−1π−2b ( l + b l )−1)nl\n⇒ log λNL ≤ N−1L L∑ l=0 nl ( −(l − b+ 1) log 2− 2b log π − log ( l + b l ))\n≤ −N−1L · log 2 · L∑ l=0 lnl (since log 2 < 2 log π as the coefficients of b)\n= − ( d+ L+ 1 d+ 1 )−1 · log 2 · (d+ 1) ( d+ L+ 1 d+ 2 ) = −d+ 1\nd+ 2 L log 2\n≈ −L log 2 ⇒ λNL ≤ 2−L.\nThis means that the eigenvalue λi ≤ ε provided that i ≥ NL where L = ⌈ log2 1 ε ⌉ . SinceNL ≤ dL+1, that means it suffices to choose i such that\ni ≥ ddlog2 1 εe+1.\nThis is a quasi-polynomial bound. 
It seems tight because even in Gaussian RBF kernel, the eigenvalues follow the order of λα = O(c−|α|) for some c > 1 (Fasshauer & McCourt, 2012, p.A742)." }, { "heading": "EXPERIMENTS", "text": "" }, { "heading": "A.9 EFFICIENCY OF ENFORCING LIPSCHITZ CONSTANT BY DIFFERENT METHODS", "text": "The six different ways to train SVMs with Lipschitz regularisation are summarized in Algorithm 1. Figure 7 plots how fast the regularisation on gradient norm becomes effective when more and more points w are added to the constraint set. We call them “samples” although it is not so random in the greedy method, modulo the random initialization of BFGS within the greedy method. The horizontal axis is the loop index i in Algorithm 1, and the vertical axis is L(i) therein, which is the estimation of the Lipschitz constant of the current solution f (i). We used 400 random examples (200 images of digit 1 and 200 images of digit 0) in the MNIST dataset and set L = 3 and RKHS norm ‖f‖H ≤ ∞ for all algorithms. Inverse kernel is used, hence no results are shown for coordinate-wise Nyström.\nClearly the Nyström algorithm is more efficient than the Brute-force algorithm, and the greedy method significantly reduces the number of samples for both algorithms. In fact, Nyström with greedy selection eventually fell below the prespecified L, because of the gap in (9)." }, { "heading": "A.10 EXTENSION TO `∞-NORM ATTACKS FOR OUR KERNEL BASED METHOD", "text": "We now extend our kernel based approach to `∞ norm ball attacks. Since most multiclass losses are 1-Lipschitz continuous with respect to `∞ norm on (f1(x), . . . , f10(x)), we will seek\nsup x∈X sup u:‖u‖∞≤1\n∥∥∥[g1(x), . . . , g10(x)]>u∥∥∥ ∞ ≤ L, where gc(x) := ∇f c(x).\nThe left-hand side (LHS) can be bounded by\nLHS = sup x∈X max 1≤c≤10 ‖gc(x)‖1 ≤ max1≤c≤10 sup‖ϕ‖H≤1 ∥∥G>c ϕ∥∥1 . Given the Nyström approximation G̃c of Gc, we can enforce the convex constraint of\nmax 1≤c≤10 sup ‖v‖2≤1\n∥∥∥G̃>c v∥∥∥ 1 ≤ L." }, { "heading": "A.11 MORE RESULTS ON CROSS-ENTROPY ATTACKS", "text": "A.12 VISUALIZATION OF GRADIENT\nA gradient-based attacker tries to decrease the targeted loss by following the negative gradient in Figure 10b, i.e., reduce the pixel value in red area and increase pixel value in blue area.\nIn order to verify that the robustness of Gauss-Lip is not due to obfuscated gradient, we visualised “large perturbation” adversarial examples, with the `2 norm upper bounded by 6. Figure 11 shows how the PGD attacker uses the gradients to perturb the images step by step. At the end of PGD, there are 46 cases where the original image was successfully attacked, i.e., turned into the target class. This is over 50% of the total of 90 cases, and the resulting images look realistic.\niteration 1 iteration 2 iteration 3 iteration 4 iteration 5\niteration 6 iteration 7 iteration 8 iteration 9 iteration 10\nprediction of Gauss-Lip\nFigure 11: Gradients and perturbed images at each iteration in a 10-step PGD attack using (targeted) cross-entropy approximation, with the `2 norm upper bounded by 6. Here the classifier is Gauss-Lip (σ = 2). The table in the bottom right presents the final predictions of our trained Gauss-Lip on the perturbed images.\nTo further look into the attack result, we increased PGD to 100 iterations. As shown in Figure 12, now the number of misclassified cases (i.e., unsuccessful attacks that failed to turn an image into the targeted class) drops from 46 to 22, out of 90 cases. The final images are quite realistic. 
We will further study these remaining cases in the future.\nIn the above experiments for Figures 11, 12, and 14, PGD was run on the cross-entropy objective. For example, the row corresponding to class 4 tries to promote the likelihood of the target class 4. Naturally the diagonal is not meaningful, hence left empty.\nWe further ran PGD for 100 iterations on C&W approximation (an untargeted attack used in Figure 3), and the resulting images after every 10 iterations are shown in Figure 13. Here 9 out of 10 images were eventually turned into a different but untargeted class, and the final images are very realistic.\nAnother random set of images. To test if the above result is due to the particularly hard images selected, we randomly selected another set of images and its results for 100-step PGD on crossentropy objective and C&W objective are shown in Figures 14 and 15, respectively. Interestingly, C&W attack succeeds on all these images, and cross-entropy attack was only unsuccessful in turning 0 into 1.\nPlease note that despite the commonality in using the cross-entropy objective, the setting of targeted attack in Figures 11, 12, and 14 is not comparable to that in Figure 8a, where to enable a batch test mode, an untargeted attacker was employed by increasing the cross-entropy loss of the correct class, i.e., decreasing the likelihood of the correct class. This is a common practice." } ]
2019
null
SP:9ad896111e20da136d179dcd72aad658eba76d93
[ "The paper introduces an invertible deep generative model architecture for modeling molecular graphs. The model is based on graph residual flows (GRF), which is a graph variant of normalizing flows. The GRF model is a refinement of the GraphNVP generative model, which is also invertible, but which does not seem to work very well for sparse low-degree graphs (such as molecular graphs). ", "GraphNVP is the first paper to introduce the concept of \"invertible flow\", that is to construct the invertible mapping from latent vector z to the graph G. By constructing the mapping from G to z, GraphNVP first changes the discrete feature vector into continuous variables, then update this matrix representation by scaling and transforming functions (Eq. (2)-(5) in this GRF paper). In each iteration the matrix is only updated by one row (one slice for the tensor), while keep other rows intact. Then for constructing the inverse mapping, we can first sample a random vector and then apply the “inverse” of the update rule to recover the edge matrix and node matrix respectively." ]
Statistical generative models for molecular graphs attract attention from many researchers in the fields of bio- and chemo-informatics. Among these models, invertible flow-based approaches are not yet fully explored. In this paper, we propose a powerful invertible flow for molecular graphs, called the graph residual flow (GRF). The GRF is based on residual flows, which are known for more flexible and complex non-linear mappings than traditional coupling flows. We theoretically derive non-trivial conditions under which the GRF is invertible, and present a way of keeping the entire flow invertible throughout training and sampling. Experimental results show that a generative model based on the proposed GRF achieves comparable generation performance, with a much smaller number of trainable parameters, compared to the existing flow-based model.
[]
[ { "authors": [ "Takuya Akiba", "Shotaro Sano", "Toshihiko Yanase", "Takeru Ohta", "Masanori Koyama" ], "title": "Optuna: A nextgeneration hyperparameter optimization framework", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD", "year": 2019 }, { "authors": [ "Jens Behrmann", "Will Grathwohl", "Ricky T.Q. Chen", "David Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Invertible residual networks", "venue": "Proceedings of Incerntional Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Ricky T.Q. Chen", "Jens Behrmann", "David Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Residual flows for invertible generative modeling", "venue": "arXiv preprint arXiv:1906.02735,", "year": 2019 }, { "authors": [ "Nicola De Cao", "Thomas Kipf" ], "title": "Molgan: An implicit generative model for small molecular graphs", "venue": "arXiv preprint arXiv:1805.11973,", "year": 2018 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "Nice: Non-linear independent components estimation", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "In Proceedings of International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Rafael Gómez-Bombarelli", "Jennifer N Wei", "David Duvenaud", "José Miguel Hernández-Lobato", "Benjamín Sánchez-Lengeling", "Dennis Sheberla", "Jorge Aguilera-Iparraguirre", "Timothy D Hirzel", "Ryan P Adams", "Alán Aspuru-Guzik" ], "title": "Automatic chemical design using a data-driven continuous representation of molecules", "venue": "ACS central science,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Brian Hall" ], "title": "Lie groups, Lie algebras, and representations: an elementary introduction, volume 222", "venue": null, "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep Residual Learning for Image Recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Michael F Hutchinson" ], "title": "A stochastic estimator of the trace of the influence matrix for laplacian smoothing splines", "venue": "Communications in Statistics-Simulation and Computation,", "year": 1990 }, { "authors": [ "John J Irwin", "Teague Sterling", "Michael M Mysinger", "Erin S Bolstad", "Ryan G Coleman" ], "title": "Zinc: a free tool to discover chemistry for biology", "venue": "Journal of chemical information and modeling,", "year": 2012 }, { "authors": [ "Wengong Jin", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Junction tree variational autoencoder for molecular graph generation", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Diederik P. 
Kingma", "Jimmy Lei Ba" ], "title": "Adam: a Method for Stochastic Optimization", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In Proceedings of the 2nd International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised Classification with Graph Convolutional Networks", "venue": "In Proceedings of the 5th International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Ivan Kobyzev", "Simon Prince", "Marcus A Brubaker" ], "title": "Normalizing Flows: Introduction and Ideas", "venue": "arXIv, pp. 1908.09257", "year": 1908 }, { "authors": [ "Matt J Kusner", "Brooks Paige", "José Miguel Hernández-Lobato" ], "title": "Grammar variational autoencoder", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 1954 }, { "authors": [ "Jenny Liu", "Aviral Kumar", "Jimmy Ba", "Jamle Kiros", "Kevin Swersky" ], "title": "Graph normalizing flows", "venue": "arXiv preprint arXiv:1905.13177,", "year": 2019 }, { "authors": [ "Qi Liu", "Miltiadis Allamanis", "Marc Brockschmidt", "Alexander Gaunt" ], "title": "Constrained graph variational autoencoders for molecule design", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tengfei Ma", "Jie Chen", "Cao Xiao" ], "title": "Constrained generation of semantically valid graphs via regularizing variational autoencoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Kaushalya Madhawa", "Katushiko Ishiguro", "Kosuke Nakago", "Motoki Abe" ], "title": "Graphnvp: An invertible flow model for generating molecular graphs", "venue": null, "year": 1905 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Kenta Oono", "Taiji Suzuki" ], "title": "On asymptotic behaviors of graph cnns from dynamical systems perspective, 2019", "venue": null, "year": 2019 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "In Proceedings of International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Raghunathan Ramakrishnan", "Pavlo O Dral", "Matthias Rupp", "O Anatole Von Lilienfeld" ], "title": "Quantum chemistry structures and properties of 134 kilo molecules", "venue": "Scientific data,", "year": 2014 }, { "authors": [ "Yang Song", "Chenlin Meng", "Stefano Ermon" ], "title": "Mintnet: Building invertible neural networks with masked convolutions", "venue": "arXiv preprint arXiv:1907.07945,", "year": 2019 }, { "authors": [ "Felix Wu", "Tianyi Zhang", "Amauri Jr. Holanda de Souza", "Christopher Fifty", "Tao Yu", "Kilian Q. 
Weinberger" ], "title": "Simplifying Graph Convolutional Networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Jiaxuan You", "Bowen Liu", "Zhitao Ying", "Vijay Pande", "Jure Leskovec" ], "title": "Graph convolutional policy network for goal-directed molecular graph generation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "We propose a deep generative model for molecular graphs based on invertible functions. We especially focus on introducing an invertible function that is tuned for the use in graph structured data, which allows for flexible mappings with less number of parameters than previous invertible models for graphs.\nMolecular graph generation is one of the hot trends in the graph analysis with a potential for important applications such as in silico new material discovery and drug candidate screening. Previous generative models for molecules deal with string representations called SMILES (e.g. Kusner et al. (2017); Gómez-Bombarelli et al. (2018)), which does not consider graph topology. Recent models such as (Jin et al., 2018; You et al., 2018; De Cao & Kipf, 2018; Madhawa et al., 2019) are able to directly handle graphs. Several researchers are investigating this topic using sophisticated statistical models such as variational autoencoders (VAEs) (Kingma & Welling, 2014), adversarial loss-based models such as generative adversarial networks (GANs) (Goodfellow et al., 2014; Radford et al., 2015), and invertible flows (Kobyzev et al., 2019) and have achieved desirable performances.\nThe decoders of these graph generation models generate a discrete graph-structured data from a (typically continuous) representation of a data sample, which is modeled by aforementioned statistical models. In general, it is difficult to design a decoder that balances the efficacy of the graph generation and the simplicity of the implementation and training. For example, MolGAN (De Cao & Kipf, 2018) has a relatively simple decoder but suffers from generating numerous duplicated graph samples. The state-of-the-art VAE-based models such as (Jin et al., 2018; Liu et al., 2018) have good generation performance but their decoding scheme is highly complicated and requires careful training. On the contrary, invertible flow-based statistical models (Dinh et al., 2015; Kobyzev et al., 2019) do not require training for their decoders because the decoders are simply the inverse mapping of the encoders and are known for good generation performances in image generation (Dinh et al., 2017; Kingma & Dhariwal, 2018). Liu et al. (2019) proposes an invertible-flow based graph generation model. However, their generative model is not invertible because its decoder for graph structure is not built upon invertible flows. The GraphNVP by Madhawa et al. (2019) is the seminal fully invertible-flow approach for graph generation, which successfully combines the invertible maps with the generic graph convolutional networks (GCNs, e.g Kipf & Welling (2017); Schlichtkrull et al. (2017)).\nHowever, the coupling flow (Kobyzev et al., 2019) used in the GraphNVP has a serious drawback when applied to sparse graphs such as molecular graphs we are interested in. The coupling flow\nrequires a disjoint partitioning of the latent representation of the data (graph) in each layer. We need to design this partitioning carefully so that all the attributes of a latent representation are well mixed through stacks of mapping layers. However, molecular graphs are highly sparse in general: degree of each node atom is at most four (valency), and only few kind of atoms comprise the majority of the molecules (less diversity). Madhawa et al. 
(2019) argued that only a specific form of partitioning can lead to a desirable performance owing to sparsity: for each mapping layer, the representation of only one node is subject to update and all the other nodes are kept intact. In other words, a graph with 100 nodes requires at least 100 layers. But with the 100 layers, only one affine mapping is executed for each attribute of the latent representation. Therefore, the complexity of the mappings of GraphNVP is extremely low in contrast to the number of layer stacks. We assume that this is why the generation performance of GraphNVP is less impressive than other state-of-the-art models (Jin et al., 2018; Liu et al., 2018) in the paper.\nIn this paper we propose a new graph flow, called graph residual flow (GRF): a novel combination of a generic GCN and recently proposed residual flows (Behrmann et al., 2019; Song et al., 2019; Chen et al., 2019). The GRF does not require partitioning of a latent vector and can update all the node attributes in each layer. Thus, a 100 layer-stacked flow model can apply the (non-linear) mappings 100 times for each attribute of the latent vector of the 100-node graph. We derive a theoretical guarantee of the invertibility of the GRF and introduce constraints on the GRF parameters, based on rigorous mathematical calculations. Through experiments with most popular graph generation datasets, we observe that a generative model based on the proposed GRF can achieve a generation performance comparable to the GraphNVP Madhawa et al. (2019), but with much fewer trainable parameters.\nTo summarize, our contributions in this paper are as follows:\n• propose the graph residual flow (GRF): a novel residual flow model for graph generation that is compatible with a generic GCNs.\n• prove conditions such that the GRFs are invertible and present how to keep the entire network invertible throughout the training and sampling.\n• demonstrate the efficacy of the GRF-based models in generating molecular graphs; in other words, show that a generative model based on the GRF has much fewer trainable parameters compared to the GraphNVP, while still maintaining a comparable generation performance." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 GRAPHNVP", "text": "We first describe the GraphNVP (Madhawa et al., 2019), the first fully invertible model for chemical graph generation, as a baseline. We simultaneously introduce the necessary notations for graph generative models.\nWe use the notation G = (A,X) to represent a graph G comprising an adjacency tensor A and a feature matrix X . Let N be the number of nodes in the graph, M be the number of the types of nodes, and R be the number of the types of edges. Then, A ∈ {0, 1}N×N×R and X ∈ {0, 1}N×M . In the case of molecular graphs, G = (A,X) represents a molecule with R types of bonds (single, double, etc.) and M the types of atoms (e.g., oxygen, carbon, etc.). Our objective is to train an invertible model fθ with parameters θ that maps G into a latent point z = fθ(G) ∈ RD=(N×N×R)+(N×M). We describe fθ as a normalizing flow composed of multiple invertible functions.\nLet z be a latent vector drawn from a known prior distribution pz(z) (e.g., Gaussian): z ∼ pz(z). 
After applying a variable transformation, the log probability of a given graph G can be calculated as:\nlog p_G(G) = log p_z(z) + log | det( ∂z/∂G ) |, (1)\nwhere ∂z/∂G is the Jacobian of f_θ at G.\nIn (Madhawa et al., 2019), f_θ is modeled by two types of invertible non-volume preserving (NVP) mappings (Dinh et al., 2017). The first type of mapping transforms the adjacency tensor, and the second type transforms the node attribute matrix X.\nLet us divide the hidden variable z into two parts z = [z_X, z_A]; the former z_X is derived from invertible mappings of X and the latter z_A from invertible mappings of A. For the mapping of the feature matrix X, the GraphNVP provides a node feature coupling:\nz_X^(ℓ)[ℓ, :] ← z_X^(ℓ−1)[ℓ, :] ◦ exp( s(z_X^(ℓ−1)[ℓ⁻, :], A) ) + t(z_X^(ℓ−1)[ℓ⁻, :], A), (2)\nwhere ℓ indicates the layer of the coupling, the functions s and t stand for scale and translation operations, respectively, and ◦ denotes element-wise multiplication. We use z_X[ℓ⁻, :] to denote the latent representation matrix of X′ excluding the ℓ-th row (node). The rest of the rows of the feature matrix remain the same:\nz_X^(ℓ)[ℓ⁻, :] ← z_X^(ℓ−1)[ℓ⁻, :]. (3)\ns and t are modeled by a generic GCN, which requires the adjacency information A for better interactions between the nodes.\nFor the mapping of the adjacency tensor, the GraphNVP provides an adjacency coupling:\nz_A^(ℓ)[ℓ, :, :] ← z_A^(ℓ−1)[ℓ, :, :] ◦ exp( s(z_A^(ℓ−1)[ℓ⁻, :, :]) ) + t(z_A^(ℓ−1)[ℓ⁻, :, :]). (4)\nThe rest of the rows remain as they are:\nz_A^(ℓ)[ℓ⁻, :, :] ← z_A^(ℓ−1)[ℓ⁻, :, :]. (5)\nFor the adjacency coupling, simple multi-layer perceptrons (MLPs) are employed for s and t.\nThe abovementioned formulations map only those variables that are related to a node ℓ in each ℓ-th layer (Eqs. (2, 4)), and the remaining nodes ℓ⁻ are kept intact (Eqs. (3, 5)); i.e., the partitioning of the variables always occurs along the first (node) axis of the tensors. This limits the parameterization of the scaling and translation operations, resulting in reduced representation power of the model.\nIn the original paper, the authors mention: “masking (switching) ... w.r.t the node axis performs the best. ... We can easily formulate ... the slice indexing based on the non-node axis ... results in dramatically worse performance due to the sparsity of molecular graph.” Here, sparsity can be described in two ways: one is the sparsity of non-carbon atoms in organic chemicals, and the other is the low degree of atom nodes (because of valency)." }, { "heading": "2.2 INVERTIBLE RESIDUAL BLOCKS", "text": "One of the major drawbacks of the partition-based coupling flow is that it covers a fairly limited family of mappings. In exchange, the coupling flow offers computationally cheap and analytic inversions. A series of recent invertible models (Behrmann et al., 2019; Song et al., 2019; Chen et al., 2019) propose a different approach to invertible mappings, called residual flow (Kobyzev et al., 2019). They formulate ResNets (He et al., 2016), which have been successful in image recognition, as invertible mappings. The general idea is described as follows.\nOur objective is to develop an invertible residual layer for a vector z:\nz^(ℓ+1) = z^(ℓ) + R( z^(ℓ) ), (6)\nwhere z^(ℓ) is the representation vector at the ℓ-th layer, and R is a residual block.
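To make the contrast concrete, here is a minimal NumPy sketch of the two update styles. The scale, translation, and residual maps below are toy stand-ins rather than the paper's networks (for brevity the stand-ins see the full z rather than z[ℓ⁻, :]), and a single-relation adjacency matrix is used:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 3                                         # nodes, feature dimension
A = rng.integers(0, 2, size=(N, N)).astype(float)   # toy single-relation adjacency
W = 0.5 * rng.standard_normal((M, M))
z = rng.standard_normal((N, M))                     # latent node features z_X

s = lambda z, A: 0.1 * np.tanh(A @ z)               # stand-in "scale" net conditioned on A
t = lambda z, A: 0.1 * np.tanh(A @ z)               # stand-in "translation" net
R = lambda z: 0.1 * np.tanh(z @ W)                  # stand-in residual block

# Coupling update (Eqs. 2-3): only row l changes; every other row is kept intact.
l = 0
z_coupling = z.copy()
z_coupling[l] = z[l] * np.exp(s(z, A)[l]) + t(z, A)[l]

# Residual update (Eq. 6): every entry of z is updated in a single layer.
z_residual = z + R(z)
```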
If we correctly constrain R, then we can assure the invertibility of the above-mentioned residual layer.\ni-ResNet (Behrmann et al., 2019) presents a constraint on the Lipschitz constant of R. MintNet (Song et al., 2019) limits the shape of the residual block R and derives non-singularity requirements on the Jacobian of the (limited) residual block.\nNotably, the (invertible) residual connection (Eq. (6)) does not assume a partition of the variables into “intact” and “affine-map” parts. This means that each layer of the invertible residual connection updates all the variables at once.\nIn both the aforementioned papers, a local convolutional network architecture (He et al., 2016) for the residual block R is proposed for image tensor inputs, which can be applied to image generation/reconstruction for experimental validation. For example, in i-ResNet, the residual block is defined as:\nR(x) = W_3 ◦ φ ◦ W_2 ◦ φ ◦ W_1(x), (7)\nwhere φ denotes a contractive nonlinear function such as ReLU or ELU, and the W· are (spatially) local convolutional layers (i.e., aggregating the neighboring pixels). In this case, we constrain the spectral norms of all the Ws to be less than unity for the Lipschitz condition." }, { "heading": "3 INVERTIBLE GRAPH GENERATION MODEL WITH GRAPH RESIDUAL FLOW (GRF)", "text": "We observe that the limitations of the GraphNVP cannot be avoided as long as we use partition-based coupling flows for sparse molecular graphs. Therefore we aim to realize a different type of invertible coupling layer that does not depend on variable partitioning (for easier inversion and likelihood computation). For this, we propose a new molecular graph generation model based on a more powerful and efficient Graph Residual Flow (GRF), which is our proposed invertible flow for graphs." }, { "heading": "3.1 SETUP", "text": "The overall setup is similar to that of the original GraphNVP. We use the notation G = (A, X) to represent a graph G comprising an adjacency tensor A ∈ {0, 1}^{N×N×R} and a feature matrix X ∈ {0, 1}^{N×M}. Each tensor is mapped to a latent representation through invertible functions. Let z_A ∈ R^{N×N×R} be the latent representation of the adjacency tensor, and p(z_A) be its prior. Similarly, let z_X ∈ R^{N×M} be the latent representation of the feature matrix, and p(z_X) be its prior. We assume that both priors are multivariate normal distributions.\nAs A and X are originally binary, we cannot directly apply the change-of-variables formula. The widely used (Dinh et al., 2017; Kingma & Dhariwal, 2018; Madhawa et al., 2019) workaround is dequantization: adding noise drawn from a continuous distribution and regarding the tensors as continuous. The dequantized graph, denoted G′ = (A′, X′), is used as the input in Eq. (1):\nA′ = A + cu; u ∼ U[0, 1)^{N×N×R}, (8)\nX′ = X + cu; u ∼ U[0, 1)^{N×M}, (9)\nwhere 0 < c < 1 is a scaling hyperparameter. We adopted c = 0.9 for our experiments.\nNote that the original discrete inputs A and X can be recovered by simply applying a floor operation to each continuous value in A′ and X′. Hereafter, all the transformations are performed on the dequantized inputs A′ and X′.
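As a small self-contained illustration of the dequantization in Eqs. (8)–(9), the sketch below (assuming NumPy; the tensor sizes are arbitrary) adds scaled uniform noise and recovers the binary inputs exactly with a floor operation, which works precisely because 0 < c < 1:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, R = 5, 4, 4
c = 0.9  # scaling hyperparameter, the value adopted in the paper

A = rng.integers(0, 2, size=(N, N, R)).astype(float)   # binary adjacency tensor
X = np.eye(M)[rng.integers(0, M, size=N)]              # one-hot feature matrix

# Dequantize: A' = A + c*u and X' = X + c*u with u ~ U[0, 1) (Eqs. 8-9).
A_deq = A + c * rng.uniform(size=A.shape)
X_deq = X + c * rng.uniform(size=X.shape)

# The original discrete inputs are recovered exactly by flooring.
assert np.array_equal(np.floor(A_deq), A)
assert np.array_equal(np.floor(X_deq), X)
```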
}, { "heading": "3.2 FORWARD MODEL", "text": "We can instantly formulate a naive model, and for doing so, we do not take into consideration the graph structure behind G′ and regard A′ and X ′ as simple tensors (multi-dimensional arrays).\nNamely, an tensor entry X ′[i,m] is a neighbor of X[i′,m′], where |i′ − i| ≤ 1, and |m′ −m| ≤ 1, regardless of the true adjacency of node i and i′, and the feature m and m′. Similar discussion holds for A′.\nIn such case, we simply apply the invertible residual flow for the tensors A′, X ′. Let z(0)A = A ′ and z (0) X = X ′.\nWe formulate the invertible graph generation model based on GRFs. The fundamental idea is to replace the two coupling flows in GraphNVP with the new GRFs. A GRF conmprises two sub-flows: node feature residual flow and adjacency residual flow.\nFor the feature matrix, we formulate a node feature residual flow for layer ` as:\nz (`) X ← z (`−1) X + R (`) X ( z (`−1) X ;A ) , (10)\nwhere R(`)X is a residual block for feature matrix at layer `. Similar to Eq.(2), we assume the condition of the adjacency tensor A for the coupling.\nFor the mapping of the adjacency tensor, we have a similar adjacency residual flow:\nz (`) A ← z (`−1) A + R (`) A ( z (`−1) A ) , (11)\nwhere R(`)A is a residual block for adjacency tensor at layer `.\nNote that there are no slice indices of tensors ZA and ZX in Eqs.(10, 11). Therefore every entry of the tensors is subject to update in every layer, making a notable contrast with Eqs.(2,4)." }, { "heading": "3.3 RESIDUAL BLOCK CHOICES FOR GRFS", "text": "One of the technical contributions of this paper is the development of residual blocks for GRFs. The convolution architecture of ResNet reminds us of GCNs (e.g. (Kipf & Welling, 2017)), inspiring possible application to graph input data. Therefore, we extend the invertible residual blocks of (Behrmann et al., 2019; Song et al., 2019) to the feature matrix and the adjacency tensor conditioned by the graph structure G.\nThe simplest approach to constructing a residual flow model is by using linear layer as layer R. In such cases, we transform the adjacency matrix and feature matrix to single vectors. However, we must construct a large weight matrix so as not to reduce its dimension. Additionally, naive transformation into vector destroys the local feature of the graphs. To address the aforementioned issues, we propose two types of residual blocks RA and RX for each of the adjacency matrix and feature matrices.\nIn this paper, we propose a residual block based on GCNs (e.g. (Kipf & Welling, 2017; Wu et al., 2019) for graph-structured data. We focus on modeling the residual block for the node feature matrix.\nThe original GCN (Kipf & Welling, 2017) perform the convolution on graphs using the adjacency information of the graph (plus a weight matrix). One layer of the GCN performs the following update of graph node representation zX :\nz`X,r ← φ ( D̃r − 12 ÃrD̃r − 12 z`−1X Wr ) , whereD̃r = Dr + I , Ãr = Ar + I , (12)\nwhere Ar ∈ RN×N is an adjacency matrix of the graph defined by the relation type r, D ∈ RN×N is a degree matrix: D = diag( ∑ j Arii)i\n1, Wr is the learnable weight matrix for the relation r, and φ is a nonlinear function. 
In a nutshell, Eq. (12) updates each node representation in z_X by a weighted sum over the neighbor nodes defined by the adjacency tensor A_r.\nFor our residual blocks on the graph, we replace the convolution filter W in Eq. (7) with Eq. (12) to define the neighbors on a graph:\nR_X(z_X; r) = φ( vec( D̃_r^{−1/2} Ã_r D̃_r^{−1/2} X W_r ) ), (13)\nR_X(z_X) = Σ_r R_X(z_X; r), (14)\nwhere vec is a vectorization operator, and mat is a matricization operator that reshapes z_X appropriately. For R_X defined in this way, the following theorem holds.\n¹If an entry of D̃_r is 0, then we assume the corresponding entry of D̃_r^{−1/2} is also 0.\nTheorem 1. Lip(φ) ≤ L, ‖W‖_op < 1/L ⇒ Lip(R_X) < 1.\nHere, Lip(·) is the Lipschitz constant of a function. The proof of this theorem is provided in the appendix.\nThe Lipschitz constraint not only enables the inverse operation (see Section 3.4) but also facilitates the computation of the log-determinant of the Jacobian matrix in Eq. (1), as performed in (Behrmann et al., 2019). In other words, the log-determinant of the Jacobian matrix can be approximated by the matrix trace (Withers & Nadarajah, 2010), and the trace can be computed through power series iterations and stochastic approximation (Hutchinson's trick) (Hall, 2015; Hutchinson, 1990)." }, { "heading": "3.4 BACKWARD MODEL OR GRAPH GENERATION", "text": "As our model is invertible, the graph generation process is as depicted in Fig. 1. The adjacency tensors and the feature tensors can be calculated simultaneously during training, because their calculations are independent of each other. However, we must note that during generation, a valid adjacency tensor is required for the inverse computation of ResGCN. For this reason, we execute the following 2-step generation: first, we generate the adjacency tensor, and subsequently we generate the atomic feature tensor. This generation process is shown in the right half of Fig. 1. The experiment section shows that this two-step generation process can efficiently generate chemically valid molecular graphs.\n1st step: We sample z = concat(z_A, z_X) from the prior p_z and split the sampled z into two parts, one for z_A and the other for z_X. Next, we compute the inverse of z_A w.r.t. the residual block by fixed-point iteration. Consequently, we obtain a probabilistic adjacency tensor Â′. Finally, we construct a discrete adjacency tensor Â ∈ {0, 1}^{N×N×R} from Â′ by taking node-wise and edge-wise argmax operations.\n2nd step: We consider the discrete tensor Â obtained above as a fixed parameter and calculate the inverse image of z_X for ResGCN using fixed-point iteration. In this way, we obtain a probabilistic feature matrix X̂′. Next, we construct a discrete feature matrix X̂ ∈ {0, 1}^{N×M} by taking a node-wise argmax operation. Finally, we construct the molecule from the obtained adjacency tensor and feature matrix." }, { "heading": "3.4.1 INVERSION ALGORITHM: FIXED POINT ITERATION", "text": "For the residual layer f(x) (= x + R(x)), it is generally not feasible to compute the inverse image analytically. However, we have configured the layer to satisfy Lip(R) < 1 as described above. As was done in i-ResNet (Behrmann et al., 2019), the inverse image of f(x) can be computed using the fixed-point iteration of Algorithm 1 in the appendix. By the Banach fixed-point theorem, this iterative method converges exponentially." }, { "heading": "3.4.2 CONDITION FOR GUARANTEED INVERSION", "text": "From Theorem 1, the upper bound of Lip(R_X) is determined by Lip(φ) and ‖W‖_op.
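Since Lip(R) < 1 makes each residual block a contraction, the fixed-point inversion of Section 3.4.1 (Algorithm 1 in the appendix) takes only a few lines. Here is a sketch with a toy contractive map standing in for the trained residual block; the scaling of W is an assumption chosen so that the block contracts:

```python
import numpy as np

def invert_residual_layer(y, R, n_iter=100):
    """Invert f(x) = x + R(x) by fixed-point iteration (cf. Algorithm 1).

    Converges exponentially by the Banach fixed-point theorem when Lip(R) < 1.
    """
    x = y
    for _ in range(n_iter):
        x = y - R(x)
    return x

rng = np.random.default_rng(0)
W = 0.3 * rng.standard_normal((3, 3))     # small weights keep the toy block contractive
R = lambda x: 0.5 * np.tanh(x @ W)        # stand-in residual block with Lip(R) < 1
x_true = rng.standard_normal((4, 3))
y = x_true + R(x_true)                    # forward pass of the residual layer
x_rec = invert_residual_layer(y, R)
assert np.allclose(x_rec, x_true, atol=1e-8)
```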
In this work, we selected the exponential linear unit (ELU) as the function φ. ELU is a nonlinear function which satisfies the differentiability condition. By definition, Lip(ELU) = 1. For W, the constraint can be satisfied by using spectral normalization (Miyato et al., 2018). The layer R_X configured in this manner satisfies Lip(R_X) < 1; in other words, this layer is a contraction map, and hence the input can be obtained by fixed-point iteration." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 PROCEDURE", "text": "For our experiments, we use two datasets of molecules, QM9 (Ramakrishnan et al., 2014) and ZINC-250k (Irwin et al., 2012). The QM9 dataset contains 134,000 molecules with four atom types, and ZINC-250k is a subset of the ZINC database that contains 250,000 drug-like molecules with nine atom types. The maximum number of heavy atoms in a molecule is nine for QM9 and 38 for ZINC-250k. As standard preprocessing, the molecules are first kekulized and the hydrogen atoms are subsequently removed. The resulting molecules contain only single, double, or triple bonds.\nWe represent each molecule as an adjacency tensor A ∈ {0, 1}^{N×N×R} and a one-hot feature matrix X ∈ {0, 1}^{N×M}. N denotes the maximum number of atoms a molecule in each dataset can have. If a molecule has fewer than N atoms, it is padded with virtual nodes to keep the dimensions of A and X identical. As the adjacency tensors of molecular graphs are sparse, we add virtual bonds, referred to as \"no bond,\" between the atoms that do not have a bond.\nThus, an adjacency tensor comprises R = 4 adjacency matrices stacked together. Each adjacency matrix corresponds to the existence of a certain type of bond (single, double, triple, or virtual) between the atoms. The feature matrix represents the type of each atom (e.g., oxygen, fluorine, etc.). As described in Section 3.1, X and A are dequantized to X′ and A′.\nWe use a standard Gaussian distribution N(0, I) as the prior distribution p_z(z). The objective function (1) is maximized by the Adam optimizer (Kingma & Ba, 2015). The hyperparameters are chosen by optuna (Akiba et al., 2019) for QM9 and ZINC-250k; please find the selected hyperparameter values in the appendix. To reduce the model size, we adopt node-wise weight sharing for QM9, and low-rank approximation and the multi-scale architecture proposed in (Dinh et al., 2017) for ZINC-250k." }, { "heading": "4.2 INVERTIBILITY CHECK", "text": "We first examine the reconstruction performance of the GRF against the number of fixed-point iterations by encoding and decoding 1,000 molecules sampled from QM9 and ZINC-250k. According to Figure 2b, the L2 reconstruction error converges to around 10^−4 after 30 fixed-point iterations. The reconstructed molecules are identical to the original molecules after convergence." }, { "heading": "4.3 NUMERICAL EVALUATION", "text": "Following (Kingma & Dhariwal, 2018; Madhawa et al., 2019), we sample 1,000 latent vectors from a temperature-truncated normal distribution p_z(z; T_X, T_A) and transform them into molecular graphs by inverse operations. Different temperatures are selected for X and A because they are handled separately in our model. We compare the performance of the proposed model with those of the baseline models using the following metrics. Validity (V) is the ratio of chemically valid molecules to the generated graphs. Novelty (N) is the ratio of molecules that are not included in the training set to the generated valid molecules.
Uniqueness (U) is the ratio of unique molecules to the generated valid molecules. Reconstruction accuracy (R) is the ratio of molecules that are reconstructed perfectly by the model. This metric is not defined for GANs as they do not have encoders.\nWe choose GraphNVP (Madhawa et al., 2019), Junction Tree VAE (JT-VAE) (Jin et al., 2018), and Regularizing-VAE (RVAE) (Ma et al., 2018) as state-of-the-art baseline models. We also choose two additional VAE baselines: grammar VAE (GVAE) (Kusner et al., 2017) and character VAE (CVAE) (Gómez-Bombarelli et al., 2018), which learn SMILES (string) representations of molecules.\nWe present the numerical evaluation results on the QM9 and ZINC-250k datasets in Table 1 (QM9) and Table 2 (ZINC-250k), respectively. As expected, GRF achieves a 100% reconstruction rate, which is enabled by the ResNet architecture with spectral normalization and fixed-point iterations. This has never been achieved by any of the VAE-based baselines, which impose stochastic behavior in the bottleneck layers. Moreover, this is achieved without incorporating chemical knowledge, which is done in some baselines (e.g., valency checks for chemical graphs in RVAE and GVAE, and the subgraph vocabulary in JT-VAE). This is preferable because additional validity checks are computationally demanding, and a prepared subgraph vocabulary limits the extrapolation capacity of the generative model. As our model does not incorporate domain-specific procedures, it can be easily extended to general graph structures.\nIt is remarkable that our GRF-based generative model achieves generation performance scores comparable to GraphNVP with an order of magnitude fewer trainable parameters. These results indicate the efficient construction of our GRF in terms of parametrization, as well as the power and flexibility of the residual connections compared to coupling flows based on simple affine transformations. Therefore, our goal of proposing a novel and strong invertible flow for molecular graph generation is successfully achieved by the development of the GRF. We discuss the number of parameters of the GRF using Big-O notation in Section 4.4.\nThe experiments also reveal a limitation of the current formulation of the GRF. One notable limitation is the lower uniqueness compared to GraphNVP. By examining the generated molecules manually, we found that they contain many more straight-chain molecules than those of GraphNVP. We attribute this phenomenon to the difficulty of generating realistic molecules without explicit chemical knowledge or autoregressive constraints. We plan to tackle this issue in future work." }, { "heading": "4.4 EFFICIENCY IN TERMS OF MODEL SIZE", "text": "As we observed in the previous section, our GRF-based generative models are compact and memory-efficient in terms of the number of trainable parameters compared to the existing GraphNVP flow model. In this section, we discuss this issue in a more formal manner.\nLet L be the number of layers, R be the number of bond types, and M be the number of atom types. For GraphNVP, we need O(LN^4R^2) and O(LN^2M^2R^2) parameters to construct the adjacency coupling layers and the atom coupling layers, respectively. From the above, we need O(LN^2R^2(N^2 + M^2)) parameters to construct the whole GraphNVP. By contrast, our model only requires O(LR^2N^2) and O(LR^2M^2) parameters for res-GraphLinear and res-GCN, respectively.
Therefore, whole GRF\nmodel requires O ( LR2(N2 +M2) ) parameters. In most cases of molecular graph generation settings, R ≤ 5 and N is dominant.\nOur GRF for ZINC-250k uses linear layers to handle adjacency matrices, but the number of the parameters is substantially reduced by low-rank approximation (introduced in Sec. 4.1). Let r be the approximated rank of each linear layer, and the whole GRF requires only O ( LR2(N2r +M2) ) parameters. Notably, GraphLinear is equal to low-rank approximation when r = 1.\nOur model’s efficiency in model size is much more important when generating large molecules. Suppose we want to generate molecule with N = 100 heavy atoms with batch size of 64. Estimating from the memory usage of GRF for ZINC-250k (N = 40), GRF will consume 21 GB if r = 100 and GraphNVP will consume as large as 2100 GB. Since one example of the GPUs currently used (e.g., NVIDIA Tesla V100) is equipped with 16 – 32 GB memory, GraphNVP cannot process a batch on a single GPU or batch normalization becomes unstable with small batch. On the other hand, our model will scale to larger graphs due to the reduced parameters." }, { "heading": "4.5 SMOOTHNESS OF THE LEARNED LATENT SPACE", "text": "As a final experiment, we present the visualization of the learned latent space of Z. First we randomly choose 100 molecules from the training set, and subsequently encode them into the latent representation using the trained model. We compute the first and the second principal components of the latent space by principal component analysis (PCA), and project the encoded molecules onto the plane spanned by these two principal component vectors. Then we choose another random molecule, xo, encode it and project it onto the aforementioned principal plane. Finally we decode the latent points on the principal plane, distributed in a grid-mesh pattern centered at the projection of xo, and visualize them in Fig. 3. Figure 3 indicates that the learnt latent spaces from both QM9 (panel (a)) and ZINC-250k datasets (panel (b)) are smooth where the molecules gradually change along the two axes.\nThe visualized smoothness appears to be similar to that of the VAE-based models but differs in that our GRF is a bijective function: the data points and the latent points correspond to each other in a one-to-one manner. In contrast, to generate the data points with VAE-based methods, it is required to decode the same latent point several times and select the most common molecule. Our model is\nefficient because it can generate the data point in one-shot. Additionally, smooth latent space and bijectivity are crucial to the actual use case. Our model enables molecular graph generation through querying: encode a molecule with the desired attributes and decode the perturbed latents to obtain the drug candidates with similar attributes." }, { "heading": "5 CONCLUSION", "text": "In this paper, we proposed a Graph Residual Flow, which is an invertible residual flow for molecular graph generations. Our model exploits the expressive power of ResNet architecture. The invertibility of our model is guaranteed only by a slight modification, i.e. by the addition of spectral normalization to each layer. Owing to the aforementioned feature, our model can generate valid molecules both in QM9 and ZINC-250k datasets. The reconstruction accuracy is inherently 100%, and our model is more efficient in terms of model size as compared to GraphNVP, a previous flow model for graphs. 
In addition, the learned latent space of GRF is sufficiently smooth to enable the generation of molecules similar to a query molecule with known chemical properties.\nFuture works may include the creation of adjacency residual layers invariant for node permutation, and property optimization with GRF." }, { "heading": "A PROOF OF THEOREM", "text": "Lemma 1. ∀A ∈ RN×N , ∀X(= [x1, . . . , xN ]) ∈ RN×d, s.t. ‖AX‖F ≤ ‖A‖op ‖X‖F .\nProof.\n‖AX‖2F = N∑ i=1 ‖Axi‖22\n≤ ‖A‖2op N∑ i=1 ‖xi‖22\n= ‖A‖2op ‖X‖ 2 F .\n∴ ‖AX‖F ≤ ‖A‖op ‖X‖F\nLemma 2. ‖P‖op = ∥∥∥D̃−1/2ÃD̃−1/2∥∥∥\nop ≤ 1.\nProof. Augmented Normalized Laplacian L̃ is defined as L̃ = I − D̃−1/2ÃD̃−1/2 = I −P . Like the normal graph Laplacian, an i-th eigenvalue µ̃i of L̃ holds 0 ≤ µ̃i ≤ 2 (Oono & Suzuki, 2019). Here, for the eigenvector vi corresponding to λi, which is the i-th eigenvalue of P :\nPvi = λivi\nvi − L̃vi = λivi L̃vi = (1− λi)vi\n∴ λi = 1− µ̃i. As 0 ≤ µ̃i ≤ 2, −1 ≤ λi ≤ 1 i.e. |λi| ≤ 1 follows. Here, operation norm ‖P‖op is bounded maximum singular value σ (P ) . As P is a symmetric matrix from its construction, the maximum singular value σ (P ) is equal to the absolute eigenvalue |λmax| with the largest absolute value. From these conditions, ‖P‖op ≤ σ (P ) = |λmax| ≤ 1.\nTheorem 1. Lip(φ) ≤ L, ‖W‖op < 1\nL ⇒ Lip (RX) < 1.\nProof.\n‖RX(x)−RX(y)‖2 = ‖φ (vec (PXW ))− φ (vec (PYW ))‖2 ≤ L ‖vec (PXW )− vec (PYW )‖2 (∵ Lip(φ) ≤ L) = L ‖PXW − PYW‖F = L ‖P (X − Y )W‖F ≤ L‖P‖op ‖(X − Y )W‖F (∵ Lemma2.) ≤ L ‖(X − Y )W‖F (∵ Lemma1.) ≤ L‖W‖op ‖X − Y ‖F < L · 1\nL ‖X − Y ‖F\n≤ ‖X − Y ‖F = ‖vec (X)− vec (Y )‖2 = ‖x− y‖2 .\n∴ Lip(RX) < 1.\nAlgorithm 1 Inverse of Residual-layer via fixed-point iteration. Input: output from residual layer y, contractive residual block R, number of iterations n Output: inverse of y w.r.t R x0 ← y for i = 0, . . . , n do xi+1 ← y −R(xi)\nend for return xn" }, { "heading": "B MODEL HYPERPARAMETERS", "text": "We use a single-scale architecture for QM9 dataset, while we use multi-scale architecture (Dinh et al., 2017) for ZINC-250k dataset to scale to 38 heavy atoms. Other hyperparameters are shown in Table 3. We find the factor of spectral normalization 0.9 is enough for numerical invertibility." }, { "heading": "C ALGORITHMS", "text": "We show a procedure to calculate inverse of y with reference to R in algorithm 1. In experiments, we chose 100 as number of iteration n." } ]
2019
null
SP:61c1ba5a02194732b56c6491b40e80d2d0846851
[ "The paper proposes a method of combining value functions for a certain class of tasks, including shortest path problems, to solve composed tasks. By expressing tasks as a Boolean algebra, they can be combined using the negation, conjunction and disjunction operations. Analogous operations are available for the optimal value functions of the tasks, which allows the agent to have immediate access to the optimal policy of these composed tasks after solving the base tasks. The theoretical composition properties are confirmed empirically on the four rooms environment and with function approximation on a more complex domain. ", "This paper introduces a framework for composing tasks by treating tasks as a Boolean algebra. The paper assumes an undiscounted MDP with a 0-1 reward and a fixed absorbing set G, and considers a family of tasks defined by different reward functions. Each task defers only by the value of the reward function at the absorbing set G. These restrictions are quite severe but basically describes goal-state reaching sparse reward tasks, which are quite general and valuable to study. The paper then defines a mapping onto a Boolean algebra for these tasks and shows how the mapping also allows re-using optimal Q functions for each task to solve a Boolean composition of these tasks. This is demonstrated on the tabular four-rooms environment and using deep Q learning for a 2D navigation task." ]
We propose a framework for defining a Boolean algebra over the space of tasks. This allows us to formulate new tasks in terms of the negation, disjunction and conjunction of a set of base tasks. We then show that by learning goal-oriented value functions and restricting the transition dynamics of the tasks, an agent can solve these new tasks with no further learning. We prove that by composing these value functions in specific ways, we immediately recover the optimal policies for all tasks expressible under the Boolean algebra. We verify our approach in two domains—including a high-dimensional video game environment requiring function approximation—where an agent first learns a set of base skills, and then composes them to solve a super-exponential number of new tasks.
[]
[ { "authors": [ "M. Andrychowicz", "F. Wolski", "A. Ray", "J. Schneider", "R. Fong", "P. Welinder", "B. McGrew", "J. Tobin", "P. Abbeel", "W. Zaremba" ], "title": "Hindsight experience replay", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "P. Bacon", "J. Harb", "D. Precup" ], "title": "The option-critic architecture", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "A. Barreto", "W. Dabney", "R. Munos", "J. Hunt", "T. Schaul", "H. van Hasselt", "D. Silver" ], "title": "Successor features for transfer in reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "D.P. Bertsekas", "J.N. Tsitsiklis" ], "title": "An analysis of stochastic shortest path problems", "venue": "Mathematics of Operations Research,", "year": 1991 }, { "authors": [ "R. Fox", "A. Pakman", "N. Tishby" ], "title": "Taming the noise in reinforcement learning via soft updates", "venue": "In 32nd Conference on Uncertainty in Artificial Intelligence,", "year": 2016 }, { "authors": [ "T. Haarnoja", "V. Pong", "A. Zhou", "M. Dalal", "P. Abbeel", "S. Levine" ], "title": "Composable deep reinforcement learning for robotic manipulation", "venue": "arXiv preprint arXiv:1803.06773,", "year": 2018 }, { "authors": [ "J. Hunt", "A. Barreto", "T. Lillicrap", "N. Heess" ], "title": "Composing entropic policies using divergence correction", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "T. Jaksch", "R. Ortner", "P. Auer" ], "title": "Near-optimal regret bounds for reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "H.W. James", "E.J. Collins" ], "title": "An analysis of transient Markov decision processes", "venue": "Journal of applied probability,", "year": 2006 }, { "authors": [ "Leslie Pack Kaelbling" ], "title": "Learning to achieve goals", "venue": "In International Joint Conferences on Artificial Intelligence,", "year": 1993 }, { "authors": [ "S. Levine", "C. Finn", "T. Darrell", "P. Abbeel" ], "title": "End-to-end training of deep visuomotor policies", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "T.. Lillicrap", "J. Hunt", "A. Pritzel", "N. Heess", "T. Erez", "Y. Tassa", "D. Silver", "D. Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "P. Mirowski", "R. Pascanu", "F. Viola", "H. Soyer", "A. Ballard", "A. Banino", "M. Denil", "R. Goroshin", "L. Sifre", "K. Kavukcuoglu" ], "title": "Learning to navigate in complex environments", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "V. Mnih", "K. Kavukcuoglu", "D. Silver", "A. Rusu", "J. Veness", "M. Bellemare", "A. Graves", "M. Riedmiller", "A. Fidjeland", "G. Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "X. Peng", "M. Chang", "G. Zhang", "P. Abbeel", "S. Levine" ], "title": "MCP: Learning composable hierarchical control with multiplicative compositional policies", "venue": null, "year": 1905 }, { "authors": [ "A.M. Saxe", "A.C. Earle", "B.S. 
Rosman" ], "title": "Hierarchy through composition with multitask LMDPs", "venue": "Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "T. Schaul", "D. Horgan", "K. Gregor", "D. Silver" ], "title": "Universal value function approximators", "venue": "In Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "R. Sutton", "D. Precup", "S. Singh" ], "title": "Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning", "venue": "Artificial intelligence,", "year": 1999 }, { "authors": [ "E. Todorov" ], "title": "Linearly-solvable Markov decision problems", "venue": "In Advances in Neural Information Processing Systems,", "year": 2007 }, { "authors": [ "E. Todorov" ], "title": "Compositionality of optimal control laws", "venue": "In Advances in Neural Information Processing Systems,", "year": 2009 }, { "authors": [ "B. Van Niekerk", "S. James", "A. Earle", "B. Rosman" ], "title": "Composing value functions in reinforcement learning", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "C. Watkins" ], "title": "Learning from delayed rewards", "venue": "PhD thesis, King’s College,", "year": 1989 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) has achieved recent success in a number of difficult, high-dimensional environments (Mnih et al., 2015; Levine et al., 2016; Lillicrap et al., 2016; Silver et al., 2017). However, these methods generally require millions of samples from the environment to learn optimal behaviours, limiting their real-world applicability. A major challenge is thus in designing sample-efficient agents that can transfer their existing knowledge to solve new tasks quickly. This is particularly important for agents in a multitask or lifelong setting, since learning to solve complex tasks from scratch is typically infeasible.\nOne approach to transfer is composition (Todorov, 2009), which allows an agent to leverage existing skills to build complex, novel behaviours. These newly-formed skills can then be used to solve or speed up learning in a new task. In this work, we focus on concurrent composition, where existing base skills are combined to produce new skills (Todorov, 2009; Saxe et al., 2017; Haarnoja et al., 2018; Van Niekerk et al., 2019; Hunt et al., 2019; Peng et al., 2019). This differs from other forms of composition, such as options (Sutton et al., 1999) and hierarchical RL (Bacon et al., 2017), where actions and skills are chained in a temporal sequence.\nIn this work, we define a Boolean algebra over the space of tasks and optimal value functions. This extends previous composition results to encompass all Boolean operators: conjunction, disjunction, and negation. We then prove that there exists a homomorphism between the task and value function algebras. Given a set of base tasks that have been previously solved by the agent, any new task written as a Boolean expression can immediately be solved without further learning, resulting in a zero-shot super-exponential explosion in the agent’s abilities.\nWe illustrate our approach in a simple domain, where an agent first learns to reach a number of rooms, after which it can then optimally solve any task expressible in the Boolean algebra. We then demonstrate composition in high-dimensional video game environments, where an agent first learns to collect different objects, and then compose these abilities to solve complex tasks immediately. Our results show that, even when function approximation is required, an agent can leverage its existing skills to solve new tasks without further learning." }, { "heading": "2 PRELIMINARIES", "text": "We consider tasks modelled by Markov Decision Processes (MDPs). An MDP is defined by the tuple (S,A, ρ, r), where (i) S is the state space, (ii) A is the action space, (iii) ρ is a Markov transition\nkernel (s, a) 7→ ρ(s,a) from S × A to S, and (iv) r is the real-valued reward function bounded by [rMIN, rMAX]. In this work, we focus on stochastic shortest path problems (Bertsekas & Tsitsiklis, 1991), which model tasks in which an agent must reach some goal. We therefore consider the class of undiscounted MDPs with an absorbing set G ⊆ S. The goal of the agent is to compute a Markov policy π from S to A that optimally solves a given task. A given policy π is characterised by a value function V π(s) = Eπ [ ∑∞ t=0 r(st, at)], specifying the expected return obtained under π starting from state s.1 The optimal policy π∗ is the policy that obtains the greatest expected return at each state: V π ∗ (s) = V ∗(s) = maxπ V\nπ(s) for all s in S. 
A related quantity is the Q-value function, Qπ(s, a), which defines the expected return obtained by executing a from s, and thereafter following π. Similarly, the optimal Q-value function is given by Q∗(s, a) = maxπ Qπ(s, a) for all s in S and a in A. Finally, we denote a proper policy to be a policy that is guaranteed to eventually reach the absorbing set G (James & Collins, 2006; Van Niekerk et al., 2019). We assume the value functions for improper policies—those that never reach absorbing states—are unbounded below." }, { "heading": "3 BOOLEAN ALGEBRAS FOR TASKS AND VALUE FUNCTIONS", "text": "In this section, we develop the notion of a Boolean task algebra, allowing us to perform logical operations—conjunction (∧), disjunction (∨) and negation (¬)—over the space of tasks. We then show that, having solved a series of base tasks, an agent can use its knowledge to solve tasks expressible as a Boolean expression over those tasks, without any further learning.\nWe consider a family of related MDPs M restricted by the following assumptions:\nAssumption 1. For all tasks in a set of tasks M, (i) the tasks share the same state space, action space and transition dynamics, (ii) the transition dynamics are deterministic, and (iii) reward functions between tasks differ only on the absorbing set G.\nAssumption 2. For all tasks in a set of tasks M which adhere to Assumption 1, the set of possible terminal rewards consists of only two values. That is, for all (g, a) in G × A, we have that r(g, a) ∈ {r∅, rU} ⊂ R with r∅ ≤ rU. For all non-terminal states, we denote the reward rs,a to emphasise that it is constant across tasks.\nAssumption 1 is similar to that of Todorov (2007) and identical to Van Niekerk et al. (2019), and implies that each task can be uniquely specified by its reward function. Furthermore, we note that Assumption 2 is only necessary to formally define the Boolean algebra. Although we have placed restrictions on the reward functions, the above formulation still allows a large number of tasks to be represented. Importantly, sparse rewards can be formulated under these restrictions." }, { "heading": "3.1 A BOOLEAN ALGEBRA FOR TASKS", "text": "An abstract Boolean algebra is a set B equipped with operators ¬, ∨, ∧ that satisfy the Boolean axioms of (i) idempotence, (ii) commutativity, (iii) associativity, (iv) absorption, (v) distributivity, (vi) identity, and (vii) complements.2 Given the above definitions and the restrictions placed on the set of tasks we consider, we can now define a Boolean algebra over a set of tasks.\nTheorem 1. Let M be a set of tasks.
DefineMU ,M∅ ∈M to be tasks with the respective reward functions\nrMU : S ×A → R (s, a) 7→ { rU , if s ∈ G rs,a, otherwise.\nrM∅ : S ×A → R (s, a) 7→ { r∅, if s ∈ G rs,a, otherwise.\n1Since we consider undiscounted MDPs, we can ensure the value function is bounded by augmenting the state space with a virtual state ω such that ρ(s,a)(ω) = 1 for all (s, a) in G ×A, and r = 0 after reaching ω.\n2We provide a description of these axioms in the Appendix.\nThen M forms a Boolean algebra with universal bounds M∅ and MU when equipped with the following operators: ¬ : M→M\nM 7→ (S,A, ρ, r¬M ), where r¬M : S ×A → R (s, a) 7→ ( rMU (s, a) + rM∅(s, a) ) − rM (s, a)\n∨ : M×M→M (M1,M2) 7→ (S,A, ρ, rM1∨M2), where rM1∨M2 : S ×A → R (s, a) 7→ max{rM1(s, a), rM2(s, a)} ∧ : M×M→M\n(M1,M2) 7→ (S,A, ρ, rM1∧M2), where rM1∧M2 : S ×A → R (s, a) 7→ min{rM1(s, a), rM2(s, a)}\nProof. See Appendix.\nTheorem 1 allows us to compose existing tasks together to create new tasks in a principled way. Figure 1 illustrates the semantics for each of the Boolean operators in a simple environment." }, { "heading": "3.2 EXTENDED VALUE FUNCTIONS", "text": "The reward and value functions described in Section 2 are insufficient to solve tasks specified by the Boolean algebra above. We therefore extend these to define goal-oriented versions of the reward and value function, given by the following two definitions: Definition 1. The extended reward function r̄ : S × G ×A → R is given by the mapping\n(s, g, a) 7→ { N if g 6= s ∈ G r(s, a) otherwise,\n(1)\nwhere N ≤ min{rMIN, (rMIN − rMAX)D}, and D is the diameter of the MDP (Jaksch et al., 2010).3\nTo understand why standard value functions are insufficient, consider two tasks that have multiple different goals, but at least one common goal. Clearly, there is a meaningful conjunction between them—namely, achieving the common goal. Now consider an agent that learns standard value functions for both tasks, and which is then required to solve their conjunction without further learning. Note that this is impossible in general, since the regular value function for each task only represents the value of each state with respect to the nearest goal. That is, for all states where the nearest goal for each task is not the common goal, the agent has no information about that common goal. Conversely, by learning extended value functions, the agent is able to learn the value of achieving all goals, and not simply the nearest one.\nBecause we require that tasks share the same transition dynamics, we also require that the absorbing set of states is shared. Thus the extended reward function adds the extra constraint that, if the agent enters a terminal state for a different task, it should receive the largest penalty possible. In practice, we can simply set N to be the lowest finite value representable by the data type used for rewards. Definition 2. The extended Q-value function Q̄ : S × G ×A → R is given by the mapping\n(s, g, a) 7→ r̄(s, g, a) + ∫ S V̄ π̄(s′, g)ρ(s,a)(ds ′), (2)\nwhere V̄ π̄(s, g) = Eπ̄ [ ∑∞ t=0 r̄(st, g, at)]. The extended Q-value function is similar to universal value function approximators (UVFAs) (Schaul et al., 2015), but differs in that it uses the extended reward function definition. 
It is also similar to DG functions (Kaelbling, 1993), except here we use task-dependent reward functions, as opposed to measuring distance between states.\nThe standard reward functions and value functions can be recovered from their extended versions through the following lemma.\n3The diameter is defined as D = maxs 6=s′∈S minπ E [T (s′|π, s)], where T is the number of timesteps required to first reach s′ from s under π.\nLemma 1. Let rM , r̄M , Q∗M , Q̄∗M be the reward function, extended reward function, optimal Qvalue function, and optimal extended Q-value function for a task M in M. Then for all (s, a) in S ×A, we have (i) rM (s, a) = max\ng∈G r̄M (s, g, a), and (ii) Q∗M (s, a) = max g∈G Q̄∗M (s, g, a)." }, { "heading": "Proof.", "text": "(i):\nmax g∈G r̄M (s, g, a) = { max{N, rM (s, a)}, if s ∈ G max g∈G rM (s, a), otherwise.\n= rM (s, a) (N ≤ rMIN ≤ rM (s, a) by definition).\n(ii): Each g in G can be thought of as defining an MDP Mg := (S,A, ρ, rMg ) with reward function rMg (s, a) := r̄M (s, g, a) and optimal Q-value function Q ∗ Mg\n(s, a) = Q̄∗M (s, g, a). Then using (i) we have rM (s, a) = max\ng∈G rMg (s, a) and from Van Niekerk et al. (2019, Corollary 1), we\nhave that Q∗M (s, a) = max g∈G Q∗Mg (s, a) = maxg∈G Q̄∗M (s, g, a).\nIn the same way, we can also recover the optimal policy from these extended value functions by first applying Lemma 1, and acting greedily with respect to the resulting value function. Lemma 2. Denote S− = S \\ G as the non-terminal states ofM. Let M1,M2 ∈M, and let each g in G define MDPs M1,g and M2,g with reward functions rM1,g := r̄M1(s, g, a) and rM2,g := r̄M2(s, g, a) for all (s, a) in S ×A." }, { "heading": "Then for all g in G and s in S−,", "text": "π∗g(s) ∈ arg max a∈A Q∗M1,g (s, a) iff π ∗ g(s) ∈ arg max a∈A Q∗M2,g (s, a).\nProof. See Appendix.\nCombining Lemmas 1 and 2, we can extract the greedy action from the extended value function by first maximising over goals, and then selecting the maximising action: π∗(s) ∈ arg maxa∈Amaxg∈G Q̄\n∗(s, g, a). If we consider the extended value function to be a set of standard value functions (one for each goal), then this is equivalent to first performing generalised policy improvement (Barreto et al., 2017), and then selecting the greedy action.\nFinally, much like the regular definition of value functions, the extended Q-value function can be written as the sum of rewards received by the agent until first encountering a terminal state.\nCorollary 1. Denote G∗s:g,a as the sum of rewards starting from s and taking action a up until, but not including, g. Then let M ∈ M and Q̄∗M be the extended Q-value function. Then for all s ∈ S, g ∈ G, a ∈ A, there exists a G∗s:g,a ∈ R such that\nQ̄∗M (s, g, a) = G ∗ s:g,a + r̄M (s ′, g, a′), where s′ ∈ G and a′ = arg max b∈A r̄M (s ′, g, b).\nProof. This follows directly from Lemma 2. Since all tasks M ∈M share the same optimal policy π∗g up to (but not including) the goal state g ∈ G, their return G π∗g T−1 = ∑T−1 t=0 rM (st, π ∗ g(st)) is the same up to (but not including) g." }, { "heading": "3.3 A BOOLEAN ALGEBRA FOR VALUE FUNCTIONS", "text": "In the same manner we constructed a Boolean algebra over a set of tasks, we can also do so for a set of optimal extended Q-value functions for the corresponding tasks. Theorem 2. Let Q̄∗ be the set of optimal extended Q̄-value functions for tasks in M. Define Q̄∗∅, Q̄ ∗ U ∈ Q̄∗ to be the optimal Q̄-functions for the tasks M∅,MU ∈ M. 
Then Q̄∗ forms a Boolean algebra when equipped with the following operators:\n¬ : Q̄∗ → Q̄∗, Q̄∗ 7→ ¬Q̄∗, where ¬Q̄∗ : S × G × A → R, (s, g, a) 7→ ( Q̄∗U(s, g, a) + Q̄∗∅(s, g, a) ) − Q̄∗(s, g, a)\n∨ : Q̄∗ × Q̄∗ → Q̄∗, (Q̄∗1, Q̄∗2) 7→ Q̄∗1 ∨ Q̄∗2, where Q̄∗1 ∨ Q̄∗2 : S × G × A → R, (s, g, a) 7→ max{Q̄∗1(s, g, a), Q̄∗2(s, g, a)}\n∧ : Q̄∗ × Q̄∗ → Q̄∗, (Q̄∗1, Q̄∗2) 7→ Q̄∗1 ∧ Q̄∗2, where Q̄∗1 ∧ Q̄∗2 : S × G × A → R, (s, g, a) 7→ min{Q̄∗1(s, g, a), Q̄∗2(s, g, a)}\nProof. See Appendix." }, { "heading": "3.4 BETWEEN TASK AND VALUE FUNCTION ALGEBRAS", "text": "Having established a Boolean algebra over tasks and extended value functions, we finally show that there exists an equivalence between the two. As a result, if we can write down a task under the Boolean algebra, we can immediately write down the optimal value function for the task.\nTheorem 3. Let F : M → Q̄∗ be any map from M to Q̄∗ such that F(M) = Q̄∗M for all M in M. Then F is a homomorphism.\nProof. See Appendix." }, { "heading": "4 ZERO-SHOT TRANSFER THROUGH COMPOSITION", "text": "We can use the theory developed in the previous sections to perform zero-shot transfer by first learning extended value functions for a set of base tasks, and then composing them to solve new tasks expressible under the Boolean algebra. To demonstrate this, we conduct a series of experiments in a Four Rooms domain (Sutton et al., 1999), where an agent must navigate in a grid world to a particular location. The agent can move in any of the four cardinal directions at each timestep, but colliding with a wall leaves the agent in the same location. The transition dynamics are deterministic, and rewards are −0.1 for all non-terminal states, and 1 at the goal." }, { "heading": "4.1 LEARNING BASE TASKS", "text": "We use a modified version of Q-learning (Watkins, 1989) to learn the extended Q-value functions described previously. Our algorithm differs in a number of ways from standard Q-learning: we keep track of the set of terminating states seen so far, and at each timestep we update the extended Q-value function with respect to both the current state and action, as well as all goals encountered so far. We also use the definition of the extended reward function, so if the agent encounters a terminal state of a different task, it receives reward N. The full pseudocode is listed in the Appendix.\nIf we know the set of goals (and hence potential base tasks) upfront, then it is easy to select a minimal set of base tasks that can be composed to produce the largest number of composite tasks. We first assign a Boolean label to each goal in a table, and then use the columns of the table as base tasks. The goals for each base task are then those goals with value 1 according to the table. In this domain, the two base tasks we select are MT, which requires that the agent visit either of the top two rooms, and ML, which requires visiting the two left rooms. We illustrate this selection procedure in the Appendix." }, { "heading": "4.2 BOOLEAN COMPOSITION", "text": "Having learned the optimal extended value functions for our base tasks, we can now leverage Theorems 1–3 to solve new tasks with no further learning. Figure 2 illustrates this composition, where an agent is able to immediately solve complex tasks such as exclusive-or. We illustrate a few composite tasks here, but note that in general, if we have K base tasks, then the Boolean algebra allows for 2^(2^K) new tasks to be constructed.
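As an illustration of how Theorem 2 is used in practice, the sketch below composes two extended Q-tables with min/max/negation and extracts the greedy action by maximising over goals, then actions (Lemmas 1–2). All arrays here are random toy stand-ins for learned values, and Q_U / Q_nil are constructed stand-ins for the bound tasks' value functions Q̄∗U and Q̄∗∅:

```python
import numpy as np

rng = np.random.default_rng(0)
S, G, A = 6, 3, 4                      # states, goals, actions (toy sizes)
Q1 = rng.standard_normal((S, G, A))    # stand-in for a learned extended Q-function
Q2 = rng.standard_normal((S, G, A))
Q_U   = np.maximum(Q1, Q2) + 1.0       # toy stand-in for Q*_U (upper-bound task)
Q_nil = np.minimum(Q1, Q2) - 1.0       # toy stand-in for Q*_0 (lower-bound task)

neg = lambda Q: (Q_U + Q_nil) - Q      # negation operator from Theorem 2

q_or  = np.maximum(Q1, Q2)             # disjunction
q_and = np.minimum(Q1, Q2)             # conjunction
q_xor = np.minimum(q_or, neg(q_and))   # (Q1 or Q2) and not (Q1 and Q2)

def greedy_action(q_bar, s):
    """Act greedily: maximise over goals, then over actions (Lemmas 1-2)."""
    return int(np.argmax(q_bar[s].max(axis=0)))

a = greedy_action(q_xor, s=0)
```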
Thus having trained on only two tasks, our agent has enough information to solve a total of 16 composite tasks.\nBy learning extended value functions, an agent can subsequently solve a massive number of tasks; however, the upfront cost of learning is likely to be higher. We investigate the trade-off between the two approaches by investigating how the sample complexity scales with the number of tasks. We compare to Van Niekerk et al. (2019), who used regular value functions to demonstrate optimal disjunctive composition. We note that while the upfront learning cost is therefore lower, the number of tasks expressible using only disjunction is 2^K − 1, which is significantly less than the full Boolean algebra. We also test using an extended version of the Four Rooms domain, where additional goals are placed along the sides of all walls, resulting in a total of 40 goals. Empirical results are illustrated by Figure 3.\nOur results show that while additional samples are needed to learn an extended value function, the agent is able to expand the tasks it can solve super-exponentially. Furthermore, the number of base tasks we need to solve is only logarithmic in the number of goal states. For an environment with K goals, we need to learn only ⌊log_2 K⌋ + 1 base tasks, as opposed to the disjunctive approach which requires K base tasks. Thus by sacrificing sample efficiency initially, we achieve an exponential increase in abilities compared to previous work (Van Niekerk et al., 2019)." }, { "heading": "5 COMPOSITION WITH FUNCTION APPROXIMATION", "text": "Finally, we demonstrate that our compositional approach can also be used to tackle high-dimensional domains where function approximation is required. We use the same video game environment as Van Niekerk et al. (2019), where an agent must navigate a 2D world and collect objects of different shapes and colours. The state space is an 84×84 RGB image, and the agent is able to move in any of the four cardinal directions. The agent also possesses a pick-up action, which allows it to collect an object when standing on top of it. There are two shapes (squares and circles) and three colours (blue, beige and purple) for a total of six unique objects. The position of the agent is randomised at the start of each episode.\nWe modify deep Q-learning (Mnih et al., 2015) to learn extended action-value functions.4 Our approach differs in that the network takes a goal state as additional input (again specified as an RGB image). Additionally, when a terminal state is encountered, it is added to the collection of goals seen so far, and when learning updates occur, these goals are sampled randomly from a replay buffer. We first learn to solve two base tasks: collecting blue objects, and collecting squares, which can then be composed to solve new tasks immediately.\nWe demonstrate composition characterised by (i) disjunction, (ii) conjunction and (iii) exclusive-or. This corresponds to tasks where the target items are: (i) blue or square, (ii) blue squares, and (iii) blue or squares, but not blue squares. Figure 4 illustrates sample trajectories, as well as the subsequent composed value functions, for the respective tasks." }, { "heading": "6 RELATED WORK", "text": "The ability to compose value functions was first demonstrated using the linearly-solvable MDP framework (Todorov, 2007), where value functions could be composed to solve tasks similar to the disjunctive case (Todorov, 2009). Van Niekerk et al.
(2019) show that the same kind of composition can be achieved using entropy-regularised RL (Fox et al., 2016), and extend the results to the standard RL setting, where agents can optimally solve the disjunctive case. Using entropy-regularised RL, Haarnoja et al. (2018) approximate the conjunction of tasks by averaging their reward functions, and demonstrate that by averaging the optimal value functions of the respective tasks, the agent can achieve performance close to optimal. Hunt et al. (2019) extend this result by composing value functions to solve the average-reward task exactly, which approximates the true conjunctive case. More recently, Peng et al. (2019) introduce a few-shot learning approach to compose policies multiplicatively. Although lacking theoretical foundations, results show that an agent can learn a weighted composition of existing base skills to solve a new complex task. By contrast, we show that zero-shot optimal composition can be achieved for all Boolean operators.\n4The hyperparameters and network architecture are listed in the Appendix." }, { "heading": "7 CONCLUSION", "text": "We have shown how to compose tasks using the standard Boolean algebra operators. These composite tasks can be immediately solved by first learning goal-oriented value functions, and then composing them in a similar manner. Finally, we note that there is much room for improvement in learning the extended value functions for the base tasks. In our experiments, we learned each extended value function from scratch, but it is likely that having learned one for the first task, we could use it to initialise the extended value function for the second task to improve convergence times. One area for improvement lies in efficiently learning the extended value functions, as well as developing better algorithms for solving tasks with sparse rewards. For example, it is likely that approaches such as hindsight experience replay (Andrychowicz et al., 2017) could reduce the number of samples required to learn extended value functions, while Mirowski et al. (2017) provide a method for learning complex tasks with sparse rewards using auxiliary tasks. We leave incorporating these approaches to future work. Our proposed approach is a step towards both interpretable RL—since both the tasks and optimal value functions can be specified using Boolean operators—and the ultimate goal of lifelong learning agents, which are able to solve combinatorially many tasks in a sample-efficient manner." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 BOOLEAN ALGEBRA DEFINITION", "text": "Definition 3. A Boolean algebra is a set B equipped with the binary operators ∨ (disjunction) and ∧ (conjunction), and the unary operator ¬ (negation), which satisfies the following Boolean algebra axioms for a, b, c in B:\n(i) Idempotence: a ∧ a = a ∨ a = a.\n(ii) Commutativity: a ∧ b = b ∧ a and a ∨ b = b ∨ a.\n(iii) Associativity: a ∧ (b ∧ c) = (a ∧ b) ∧ c and a ∨ (b ∨ c) = (a ∨ b) ∨ c.\n(iv) Absorption: a ∧ (a ∨ b) = a ∨ (a ∧ b) = a.\n(v) Distributivity: a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c) and a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c).\n(vi) Identity: there exist 0, 1 in B such that 0 ∧ a = 0, 0 ∨ a = a, 1 ∧ a = a, 1 ∨ a = 1.\n(vii) Complements: for every a in B, there exists an element a′ in B such that a ∧ a′ = 0 and a ∨ a′ = 1." }, { "heading": "A.2 PROOF FOR THEOREM 1", "text": "Theorem 1. Let M be a set of tasks.
DefineMU ,M∅ ∈M to be tasks with the respective reward functions\nrMU : S ×A → R (s, a) 7→ { rU , if s ∈ G rs,a, otherwise.\nrM∅ : S ×A → R (s, a) 7→ { r∅, if s ∈ G rs,a, otherwise.\nThen M forms a Boolean algebra with universal bounds M∅ and MU when equipped with the following operators:\n¬ : M→M M 7→ (S,A, ρ, r¬M ), where r¬M : S ×A → R\n(s, a) 7→ ( rMU (s, a) + rM∅(s, a) ) − rM (s, a)\n∨ : M×M→M (M1,M2) 7→ (S,A, ρ, rM1∨M2), where rM1∨M2 : S ×A → R\n(s, a) 7→ max{rM1(s, a), rM2(s, a)}\n∧ : M×M→M (M1,M2) 7→ (S,A, ρ, rM1∧M2), where rM1∧M2 : S ×A → R\n(s, a) 7→ min{rM1(s, a), rM2(s, a)}\nProof. Let M1,M2 ∈M. We show that ¬,∨,∧ satisfy the Boolean properties (i) – (vii).\n(i)–(v): These easily follow from the fact that the min and max functions satisfy the idempotent, commutative, associative, absorption and distributive laws.\n(vi): Let rMU∧M1 and rM1 be the reward functions forMU ∧M1 and M1 respectively. Then for all (s, a) in S ×A,\nrMU∧M1(s, a) = { min{rU , rM1(s, a)}, if s ∈ G min{rs,a, rs,a}, otherwise.\n= { rM1(s, a), if s ∈ G rs,a, otherwise.\n(rM1(s, a) ∈ {r∅, rU} for s ∈ G)\n= rM1(s, a).\nThusMU ∧M1 = M1. SimilarlyMU ∨M1 =MU ,M∅∧M1 =M∅, andM∅∨M1 = M1 . HenceM∅ andMU are the universal bounds ofM.\n(vii): Let rM1∧¬M1 be the reward function for M1 ∧ ¬M1. Then for all (s, a) in S ×A,\nrM1∧¬M1(s, a) = { min{rM1(s, a), (rU + r∅)− rM1(s, a)}, if s ∈ G min{rs,a, (rs,a + rs,a)− rs,a}, otherwise.\n= r∅, if s ∈ G and rM1(s, a) = rU r∅, if s ∈ G and rM1(s, a) = r∅ rs,a, otherwise.\n= rM∅(s, a).\nThus M1 ∧ ¬M1 =M∅, and similarly M1 ∨ ¬M1 =MU ." }, { "heading": "A.3 PROOF FOR LEMMA 2", "text": "Lemma 2. Denote S− = S \\ G as the non-terminal states ofM. Let M1,M2 ∈M, and let each g in G define MDPs M1,g and M2,g with reward functions\nrM1,g := r̄M1(s, g, a) and rM2,g := r̄M2(s, g, a) for all (s, a) in S ×A." }, { "heading": "Then for all g in G and s in S−,", "text": "π∗g(s) ∈ arg max a∈A Q∗M1,g (s, a) iff π ∗ g(s) ∈ arg max a∈A Q∗M2,g (s, a).\nProof. Let g ∈ G, s ∈ S− and let π∗g be defined by\nπ∗g(s ′) ∈ arg max a∈A Q∗M1,g(s, a) for all s ′ ∈ S.\nIf g is unreachable from s, then we are done since for all (s′, a) in S ×A we have\ng 6= s′ =⇒ rM1,g (s′, a) = { N, if s′ ∈ G rs′,a, otherwise = rM2,g (s ′, a)\n=⇒ M1,g = M2,g.\nIf g is reachable from s, then we show that following π∗g must reach g. Since π ∗ g is proper, it must reach a terminal state g′ ∈ G. Assume g′ 6= g. Let πg be a policy that produces the shortest trajectory\nto g. Let Gπ ∗ g and Gπg be the returns for the respective policies. Then,\nGπ ∗ g ≥ Gπg\n=⇒ Gπ ∗ g\nT−1 + rM1,g (g ′, π∗g(g ′)) ≥ Gπg ,\nwhere G π∗g T−1 = T−1∑ t=0 rM1,g (st, π ∗ g(st)) and T is the time at which g ′ is reached.\n=⇒ Gπ ∗ g\nT−1 +N ≥ Gπg , since g 6= g′ ∈ G\n=⇒ N ≥ Gπg −Gπ ∗ g\nT−1\n=⇒ (rMIN − rMAX)D ≥ Gπg −G π∗g T−1, by definition of N =⇒ Gπ ∗ g\nT−1 − rMAXD ≥ Gπg − rMIND, since Gπg ≥ rMIND\n=⇒ Gπ ∗ g\nT−1 − rMAXD ≥ 0\n=⇒ Gπ ∗ g\nT−1 ≥ rMAXD.\nBut this is a contradiction since the result obtained by following an optimal trajectory up to a terminal state without the reward for entering the terminal state must be strictly less that receiving rMAX for every step of the longest possible optimal trajectory. Hence we must have g′ = g. Similarly, all optimal policies of M2,g must reach g. Hence π∗g(s) ∈ arg max\na∈A Q∗M2,g (s, a). Since M1 and M2 are\narbitrary elements ofM, the reverse implication holds too." }, { "heading": "A.4 PROOF FOR THEOREM 2", "text": "Theorem 2. Let Q̄∗ be the set of optimal extended Q̄-value functions for tasks in M. 
Define Q̄∗∅, Q̄ ∗ U ∈ Q̄∗ to be the optimal Q̄-functions for the tasks M∅,MU ∈ M. Then Q̄∗ forms a Boolean algebra when equipped with the following operators:\n¬ : Q̄∗ → Q̄∗\nQ̄∗ 7→ ¬Q̄∗, where ¬Q̄∗ : S × G ×A → R (s, g, a) 7→ ( Q̄∗U (s, g, a) + Q̄ ∗ ∅(s, g, a) ) − Q̄∗(s, g, a)\n∨ : Q̄∗ × Q̄∗ → Q̄∗\n(Q̄∗1, Q̄ ∗ 2) 7→ Q̄∗1 ∨ Q̄∗2, where Q̄∗1 ∨ Q̄∗2 : S × G ×A → R\n(s, g, a) 7→ max{Q̄∗1(s, g, a), Q̄∗2(s, a)}\n∧ : Q̄∗ × Q̄∗ → Q̄∗\n(Q̄∗1, Q̄ ∗ 2) 7→ Q̄∗1 ∧ Q̄∗2, where Q̄∗1 ∧ Q̄∗2 : S × G ×A → R\n(s, g, a) 7→ min{Q̄∗1(s, g, a), Q̄∗2(s, a)}\nProof. Let Q̄∗M1 , Q̄ ∗ M2 ∈ Q̄∗ be the optimal Q̄-value functions for tasks M1,M2 ∈ M with reward functions rM1 and rM2 . We show that ¬,∨,∧ satisfy the Boolean properties (i) – (vii).\n(i)–(v): These follow directly from the properties of the min and max functions.\n(vi): For all (s, g, a) in S × G ×A, (Q̄∗U ∧ Q̄∗M1)(s, g, a) = min{(Q̄∗U (s, g, a), Q̄∗M1(s, g, a)}\n= min{G∗s:g,a + r̄MU (s′, g, a′), G∗s:g,a + r̄M1(s′, g, a′)} (Corollary 1) = G∗s:g,a + min{r̄MU (s′, g, a′), r̄M1(s′, g, a′)} = G∗s:g,a + r̄M1(s ′, g, a′) (since r̄M1(s ′, g, a′) ∈ {r∅, rU , N}) = Q̄∗M1(s, g, a).\nSimilarly, Q̄∗U ∨ Q̄∗M1 = Q̄∗U , Q̄∗∅ ∧ Q̄∗M1 = Q̄∗∅, and Q̄∗∅ ∨ Q̄∗M1 = Q̄∗M1 .\n(vii): For all (s, g, a) in S × G ×A, (Q̄∗M1 ∧ ¬Q̄∗M1)(s, g, a) = min{Q̄∗M1(s, g, a), (Q̄∗U (s, g, a)− Q̄∗∅(s, g, a))− Q̄∗M1(s, g, a)}\n= G∗s:g,a + min{r̄M1(s′, g, a′), (r̄MU (s′, g, a′) + r̄M∅(s′, g, a′)) − r̄M1(s′, g, a′)} = G∗s:g,a + r̄M∅(s ′, g, a′) = Q̄∗∅(s, g, a).\nSimilarly, Q̄∗M1 ∨ ¬Q̄∗M1 = Q̄∗U ." }, { "heading": "A.5 PROOF FOR THEOREM 3", "text": "Theorem 3. Let F : M → Q̄∗ be any map fromM to Q̄∗ such that F(M) = Q̄∗M for all M in M. Then F is a homomorphism.\nProof. Let M1,M2 ∈M. Then for all (s, g, a) in S × G ×A, Q̄∗¬M1(s, g, a) = G ∗ s:g,a + r̄¬M1(s ′, g, a′) (from Corollary 1)\n= G∗s:g,a + (r̄MU (s ′, g, a′) + r̄M∅(s ′, g, a′))− r̄M1(s′, g, a′) = [ (G∗s:g,a + r̄MU (s ′, g, a′)) + (G∗s:g,a + r̄M∅(s ′, g, a′)) ] − (G∗s:g,a + r̄M1(s′, g, a′))\n= [ Q̄∗U (s, g, a) + Q̄ ∗ ∅(s, g, a) ] − Q̄∗M1(s, g, a)\n= ¬Q̄∗M1(s, g, a) =⇒ F(¬M1) = ¬F(M1)\nQ̄∗M1∨M2(s, g, a) = G ∗ s:g,a + r̄M1∨M2(s ′, g, a′)\n= G∗s:g,a + max{r̄M1(s′, g, a′), r̄M2(s′, g, a′′)} = max{G∗s:g,a + r̄M1(s′, g, a′), G∗s:g,a + r̄M2(s′, g, a′′)} = max{Q̄∗M1(s, g, a), Q̄∗M2(s, g, a)} = (Q̄∗M1 ∨ Q̄∗M2)(s, g, a)\n=⇒ F(M1 ∨M2) = F(M1) ∨ F(M2). Similarly F(M1 ∧M2) = F(M1) ∧ F(M2)." }, { "heading": "A.6 GOAL-ORIENTED Q-LEARNING", "text": "Below we list the pseudocode for the modified Q-learning algorithm used in the four-rooms domain.\nAlgorithm 1: Goal-oriented Q-learning Input: Learning rate α, discount factor γ, exploration constant ε, lower-bound return N Initialise Q : S × S ×A → R arbitrarily G ← ∅ while Q is not converged do\nInitialise state s while s is not terminal do\nif G = ∅ then Select random action a else\na← arg maxb∈A ( max t∈G Q(s, t, b) ) with probability 1− ε\na random action with probability ε end Choose a from s according to policy derived from Q Take action a, observe r and s′ foreach g ∈ G do\nif s′ is terminal then if s′ 6= g then\nδ ← N else\nδ ← r −Q(s, g, a) end\nelse δ ← r + γmaxbQ(s′, g, b)−Q(s, g, a) end Q(s, g, a)← Q(s, g, a) + αδ\nend s← s′\nend G ← G ∪ {s}\nend return Q\nFigure 5: A Q-learning algorithm for learning extended value functions. 
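To make Figure 5 concrete, the following is a minimal tabular Python sketch of the same update rule. This is a sketch rather than the authors' code: the Gym-style env.reset()/env.step() interface, the fixed episode budget, and the default constants are assumptions, and the convergence test of the pseudocode is replaced by a fixed number of episodes.

import random
from collections import defaultdict

def goal_oriented_q_learning(env, actions, alpha=0.1, gamma=1.0,
                             eps=0.1, N=-1000.0, episodes=10000):
    # Q[(s, g, a)] is the extended value of taking a in s with respect to goal g.
    Q = defaultdict(float)
    goals = set()                       # terminal states encountered so far
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if goals and random.random() > eps:
                # greedy step: maximise over known goals, then over actions
                a = max(actions, key=lambda b: max(Q[(s, g, b)] for g in goals))
            else:
                a = random.choice(actions)
            s2, r, done = env.step(a)   # assumed Gym-style interface
            for g in goals:
                if done:
                    # reaching the "wrong" terminal state is treated as
                    # receiving the lower-bound return N, as in Figure 5
                    delta = (r - Q[(s, g, a)]) if s2 == g else N
                else:
                    delta = r + gamma * max(Q[(s2, g, b)] for b in actions) - Q[(s, g, a)]
                Q[(s, g, a)] += alpha * delta
            s = s2
        goals.add(s)                    # record the terminal state as a new goal
    return Q, goals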
Note that the greedy action selection step is equivalent to generalised policy improvement (Barreto et al., 2017) over the set of extended value functions.\nA.7 INVESTIGATING PRACTICAL CONSIDERATIONS\nThe theoretical results presented in this work rely on Assumptions 1 and 2, which restrict the tasks’ transition dynamics and reward functions in potentially problematic ways. Although this is necessary to prove that Boolean algebraic composition results in optimal value functions, in this section we investigate whether these can be practically ignored. In particular, we investigate two restrictions: the requirement that tasks share the same terminal states, and the impact of using dense rewards." }, { "heading": "A.7.1 FOUR ROOMS EXPERIMENTS", "text": "We use the same setup as the experiment outlined in Section 4, but modify it in two ways. We first investigate the difference between using sparse and dense rewards. Our sparse reward function is defined as\nrsparse(s, a, s ′) = { 20 if s′ ∈ G −1 otherwise,\nand we use a dense reward function similar to Peng et al. (2019):\nrdense(s, a, s ′) =\n1 |G| ∑ g∈G exp( |s′ − g|2 4 ) + rsparse(s, a, s ′)\nUsing this dense reward function, we again learn to solve the two base task MT (reaching the centre of the top two rooms) and ML (reaching the centre of the left two rooms). We then compose them to solve a variety of tasks, with the resulting value functions illustrated by Figure 6.\nWe also modify the domain so that tasks need not share the same terminating states (that is, if the agent enters a terminating state for a different task, the episode does not terminate and the agent can continue as if it were a normal state). This results in four versions of the experiment:\n(i) sparse reward, same absorbing set (ii) sparse reward, different absorbing set\n(iii) dense reward, same absorbing set (iv) dense reward, different absorbing set\nWe learn extended value functions for each of the above setups, and then compose them to solve each of the 24 tasks representable in the Boolean algebra. We measure each composed value functions by evaluating its policy in the sparse reward setting, averaging results over 100000 episodes. The results are given by Figure 7.\nOur results indicate that extended value functions learned in the theoretically optimal manner (sparse reward, same absorbing set) are indeed optimal. However, for the majority of the tasks, relaxing the restrictions on terminal states and reward functions results in policies that are either identical or very close to optimal." }, { "heading": "A.7.2 FUNCTION APPROXIMATION EXPERIMENTS", "text": "In this section we investigate whether we can again loosen some of the restrictive assumptions when tackling high-dimensional environments. In particular, we run the same experiments as those presented in Section 5, but modify the domain so that (i) tasks need not share the same absorbing set, (ii) the pickup-up action is removed (the agent immediately collects an object when reaching it), and (iii) the position of every object is randomised at the start of each episode.\nWe first learn to solve three base tasks: collecting blue objects, collecting purple objects, and collecting squares , which can then be composed to solve new tasks immediately. 
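In code, that zero-shot composition reduces to a few array operations on the learned extended Q-tables. The sketch below follows the Theorem 2 operators; q, q1 and q2 are assumed to be arrays indexed by (state, goal, action), and q_max, q_min denote the learned Q-functions of the maximum and minimum tasks. Expressing exclusive-or as (q1 OR q2) AND NOT (q1 AND q2) is one standard choice, not necessarily the authors' exact formulation.

import numpy as np

def q_not(q, q_max, q_min):
    # negation: (Q_U + Q_0) - Q, with Q_U and Q_0 the extended Q-functions
    # of the maximum and minimum tasks
    return (q_max + q_min) - q

def q_or(q1, q2):
    return np.maximum(q1, q2)   # disjunction

def q_and(q1, q2):
    return np.minimum(q1, q2)   # conjunction

def q_xor(q1, q2, q_max, q_min):
    # exclusive-or built from the primitives: (q1 OR q2) AND NOT (q1 AND q2)
    return q_and(q_or(q1, q2), q_not(q_and(q1, q2), q_max, q_min))

def greedy_action(q, s, goal_ids):
    # act greedily on a composed table: maximise over known goals, then actions
    return int(np.argmax(q[s, goal_ids, :].max(axis=0)))

The composed table is itself an extended Q-function, so the same greedy policy used during training can consume it directly.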
We then demonstrate composition characterised by disjunction, conjunction and exclusive-or, with the resulting trajectories and value functions illustrated by Figure 8.\nIn summary, we have shown that our compositional approach offers strong empirical performance, even when the theoretical assumptions are violated. Finally, we expect that, in general, the errors due to these violations will be far outweighed by the errors due to non-linear function approximation." }, { "heading": "A.8 SELECTING BASE TASKS", "text": "The Four Rooms domain requires the agent to navigate to one of the centres of the rooms in the environment. Figure 9 illustrates the layout of the environment and the goals the agent must reach.\nSince we know the goals upfront, we can select a minimal set of base tasks by assigning each goal a Boolean number, and then using the columns of the table to select the tasks. To illustrate, we assign Boolean numbers to the goals as follows:\nAs there are four goals, we can represent each uniquely with just two Boolean variables. Each column in Table 1 represents a base task, where the set of goals for each task are those goals assigned a value rU . We thus have two base tasks corresponding to x1 = {top-right,top-left} and x2 = {bottom-left,top-left}." }, { "heading": "A.9 DQN ARCHITECTURE AND HYPERPARAMETERS", "text": "In our experiments, we used a DQN with the following architecture:\n1. Three convolutional layers: (a) Layer 1 has 6 input channels, 32 output channels, a kernel size of 8 and a stride of 4. (b) Layer 2 has 32 input channels, 64 output channels, a kernel size of 4 and a stride of 2. (c) Layer 3 has 64 input channels, 64 output channels, a kernel size of 3 and a stride of 1.\n2. Two fully-connected linear layers: (a) Layer 1 has input size 3136 and output size 512 and uses a ReLU activation function. (b) Layer 2 has input size 512 and output size 4 with no activation function.\nWe used the ADAM optimiser with batch size 32 and a learning rate of 10−4. We trained every 4 timesteps and update the target Q-network every 1000 steps. Finally, we used -greedy exploration, annealing to 0.01 over 100000 timesteps." } ]
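Returning to the base-task selection scheme of A.8, the binary-encoding step is short enough to sketch directly. The particular assignment of codes to goals is a free choice, so the resulting sets differ from Table 1's assignment but are equivalent; this uses ⌈log2 K⌉ codes for K goals.

from math import ceil, log2

def base_task_goal_sets(goals):
    # give every goal a distinct binary code; base task i collects the goals
    # whose i-th bit is 1 (the code-to-goal assignment is arbitrary)
    k = max(1, ceil(log2(len(goals))))
    tasks = [set() for _ in range(k)]
    for idx, g in enumerate(goals):
        for i in range(k):
            if (idx >> i) & 1:
                tasks[i].add(g)
    return tasks

# For the four goals of Table 1 this yields two base tasks, e.g.
# [{'top-right', 'bottom-right'}, {'bottom-left', 'bottom-right'}]
print(base_task_goal_sets(["top-left", "top-right", "bottom-left", "bottom-right"]))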
2019
null
SP:a3911fe147060f3b790ea85cfaf18034add4368c
[ "In this paper, the authors extend the UCB Q-learning algorithm by Jin et al. (2018) to infinite horizon discounted MDPs, and prove a PAC bound of \\tilde{O}(SA/\\epsilon^2 (1-\\gamma)^7) for the resulting algorithm. This bound improves the one for delayed Q-learning by Strehl et al. (2006) and matches the lower-bound in terms of \\epsilon, S, and A. ", "This paper extends Jin et al. (2018)'s idea to infinite horizon and improves the best known sample complexity to $\\tilde{O}(\\frac{SA}{\\epsilon^2 (1-\\gamma)^7})$. The derivation is similar to Jin's paper except a very careful selection on the pseudo-horizon length $H$, where $H$ is given in finite horizon and work as the decaying rate for $\\alpha_k$, but for infinite horizon when we need to decide how to pick $H$." ]
A fundamental question in reinforcement learning is whether model-free algorithms are sample efficient. Recently, Jin et al. (2018) proposed a Q-learning algorithm with UCB exploration policy, and proved it has a nearly optimal regret bound for finite-horizon episodic MDP. In this paper, we adapt Q-learning with UCB-exploration bonus to infinite-horizon MDP with discounted rewards without accessing a generative model. We show that the sample complexity of exploration of our algorithm is bounded by Õ(SA/(ε^2(1−γ)^7)). This improves the previously best known result of Õ(SA/(ε^4(1−γ)^8)) in this setting, achieved by delayed Q-learning (Strehl et al., 2006), and matches the lower bound in terms of ε as well as S and A up to logarithmic factors.
[ { "affiliations": [], "name": "Kefan Dong" }, { "affiliations": [], "name": "Yuanhao Wang" }, { "affiliations": [], "name": "Xiaoyu Chen" }, { "affiliations": [], "name": "Liwei Wang" } ]
[ { "authors": [ "Mohammad Gheshlaghi Azar", "Remi Munos", "Mohammad Ghavamzadeh", "Hilbert Kappen" ], "title": "Speedy q-learning", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Mohammad Gheshlaghi Azar", "Ian Osband", "Rémi Munos" ], "title": "Minimax regret bounds for reinforcement learning", "venue": "arXiv preprint arXiv:1703.05449,", "year": 2017 }, { "authors": [ "Ronen I. Brafman", "Moshe Tennenholtz" ], "title": "R-max - a general polynomial time algorithm for near-optimal reinforcement learning", "venue": "J. Mach. Learn. Res.,", "year": 2003 }, { "authors": [ "Christoph Dann", "Tor Lattimore", "Emma Brunskill" ], "title": "Unifying pac and regret: Uniform pac bounds for episodic reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Eyal Even-Dar", "Yishay Mansour" ], "title": "Learning rates for q-learning", "venue": "Journal of Machine Learning Research,", "year": 2003 }, { "authors": [ "Thomas Jaksch", "Ronald Ortner", "Peter Auer" ], "title": "Near-optimal regret bounds for reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Chi Jin", "Zeyuan Allen-Zhu", "Sebastien Bubeck", "Michael I Jordan" ], "title": "Is q-learning provably efficient", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Sham Machandranath Kakade" ], "title": "On the sample complexity of reinforcement learning", "venue": "PhD thesis,", "year": 2003 }, { "authors": [ "Tor Lattimore", "Marcus Hutter" ], "title": "Pac bounds for discounted mdps", "venue": "In International Conference on Algorithmic Learning Theory,", "year": 2012 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Aaron Sidford", "Mengdi Wang", "Xian Wu", "Lin Yang", "Yinyu Ye" ], "title": "Near-optimal time and sample complexities for solving markov decision processes with a generative model", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Aaron Sidford", "Mengdi Wang", "Xian Wu", "Yinyu Ye" ], "title": "Variance reduced value iteration and faster algorithms for solving markov decision processes", "venue": "In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms,", "year": 2018 }, { "authors": [ "Alexander L Strehl", "Michael L Littman" ], "title": "An analysis of model-based interval estimation for markov decision processes", "venue": "Journal of Computer and System Sciences,", "year": 2008 }, { "authors": [ "Alexander L Strehl", "Lihong Li", "Eric Wiewiora", "John Langford", "Michael L Littman" ], "title": "Pac modelfree reinforcement learning", "venue": "In Proceedings of the 23rd 
international conference on Machine learning,", "year": 2006 }, { "authors": [ "István Szita", "Csaba Szepesvári" ], "title": "Model-based reinforcement learning with nearly tight exploration complexity bounds", "venue": "In Proceedings of the 27th International Conference on Machine Learning", "year": 2010 }, { "authors": [ "S S" ], "title": "×H, Ā = A; • γ = (1− 1/H); • for a state s at step h, let s̄s,h be the corresponding state. For any action a and next state s′, define r̄(s̄s,h", "venue": null, "year": 2020 } ]
[ { "heading": null, "text": "A fundamental question in reinforcement learning is whether model-free algorithms are sample efficient. Recently, Jin et al. (2018) proposed a Q-learning algorithm with UCB exploration policy, and proved it has nearly optimal regret bound for finite-horizon episodic MDP. In this paper, we adapt Q-learning with UCB-exploration bonus to infinite-horizon MDP with discounted rewards without accessing a generative model. We show that the sample complexity of exploration of our algorithm is bounded by Õ( SA 2(1−γ)7 ). This improves the previously best known result of Õ( SA 4(1−γ)8 ) in this setting achieved by delayed Q-learning (Strehl et al., 2006), and matches the lower bound in terms of as well as S and A up to logarithmic factors." }, { "heading": "1 INTRODUCTION", "text": "The goal of reinforcement learning (RL) is to construct efficient algorithms that learn and plan in sequential decision making tasks when the underlying system dynamics are unknown. A typical model in RL is Markov Decision Process (MDP). At each time step, the environment is in a state s. The agent takes an action a, obtain a reward r, and then the environment transits to another state. In reinforcement learning, the transition probability distribution is unknown. The algorithm needs to learn the transition dynamics of MDP, while aiming to maximize the cumulative reward. This poses the exploration-exploitation dilemma: whether to act to gain new information (explore) or to act consistently with past experience to maximize reward (exploit).\nTheoretical analyses of reinforcement learning fall into two broad categories: those assuming a simulator (a.k.a. generative model), and those without a simulator. In the first category, the algorithm is allowed to query the outcome of any state action pair from an oracle. The emphasis is on the number of calls needed to estimate the Q value or to output a near-optimal policy. There has been extensive research in literature following this line of research, the majority of which focuses on discounted infinite horizon MDPs (Azar et al., 2011; Even-Dar & Mansour, 2003; Sidford et al., 2018b). The current results have achieved near-optimal time and sample complexities (Sidford et al., 2018b;a).\n∗These two authors contributed equally.\nWithout a simulator, there is a dichotomy between finite-horizon and infinite-horizon settings. In finite-horizon settings, there are straightforward definitions for both regret and sample complexity; the latter is defined as the number of samples needed before the policy becomes near optimal. In this setting, extensive research in the past decade (Jin et al., 2018; Azar et al., 2017; Jaksch et al., 2010; Dann et al., 2017) has achieved great progress, and established nearly-tight bounds for both regret and sample complexity.\nThe infinite-horizon setting is a very different matter. First of all, the performance measure cannot be a straightforward extension of the sample complexity defined above (See Strehl & Littman (2008) for detailed discussion). Instead, the measure of sample efficiency we adopt is the so-called sample complexity of exploration (Kakade et al., 2003), which is also a widely-accepted definition. This measure counts the number of times that the algorithm “makes mistakes” along the whole trajectory. 
See also (Strehl & Littman, 2008) for further discussions regarding this issue.\nSeveral model based algorithms have been proposed for infinite horizon MDP, for example Rmax (Brafman & Tennenholtz, 2003), MoRmax (Szita & Szepesvári, 2010) and UCRL-γ (Lattimore & Hutter, 2012). It is noteworthy that there still exists a considerable gap between the state-of-the-art algorithm and the theoretical lower bound (Lattimore & Hutter, 2012) regarding 1/(1− γ) factor. Though model-based algorithms have been proved to be sample efficient in various MDP settings, most state-of-the-art RL algorithms are developed in the model-free paradigm (Schulman et al., 2015; Mnih et al., 2013; 2016). Model-free algorithms are more flexible and require less space, which have achieved remarkable performance on benchmarks such as Atari games and simulated robot control problems.\nFor infinite horizon MDPs without access to simulator, the best model-free algorithm has a sample complexity of exploration Õ( SA 4(1−γ)8 ), achieved by delayed Q-learning (Strehl et al., 2006). The authors provide a novel strategy of argument when proving the upper bound for the sample complexity of exploration, namely identifying a sufficient condition for optimality, and then bound the number of times that this condition is violated.\nHowever, the results of Delayed Q-learning still leave a quadratic gap in 1/ from the best-known lower bound. This is partly because the updates in Q-value are made in an over-conservative way. In fact, the loose sample complexity bound is a result of delayed Q-learning algorithm itself, as well as the mathematical artifact in their analysis. To illustrate this, we construct a hard instance showing that Delayed Q-learning incurs Ω(1/ 3) sample complexity. This observation, as well as the success of the Q-learning with UCB algorithm (Jin et al., 2018) in proving a regret bound in finite-horizon settings, motivates us to incorporate a UCB-like exploration term into our algorithm.\nIn this work, we propose a Q-learning algorithm with UCB exploration policy. We show the sample complexity of exploration bound of our algorithm is Õ( SA 2(1−γ)7 ). This strictly improves the previous best known result due to Delayed Q-learning. It also matches the lower bound in the dependence on , S and A up to logarithmic factors.\nWe point out here that the infinite-horizon setting cannot be solved by reducing to finite-horizon setting. There are key technical differences between these two settings: the definition of sample complexity of exploration, time-invariant policies and the error propagation structure in Q-learning. In particular, the analysis techniques developed in (Jin et al., 2018) do not directly apply here. We refer the readers to Section 3.2 for detailed explanations and a concrete example.\nThe rest of the paper is organized as follows. After introducing the notation used in the paper in Section 2, we describe our infinite Q-learning with UCB algorithm in Section 3. We then state our main theoretical results, which are in the form of PAC sample complexity bounds. In Section 4 we present some interesting properties beyond sample complexity bound. Finally, we conclude the paper in Section 5." }, { "heading": "2 PRELIMINARY", "text": "We consider a Markov Decision Process defined by a five tuple 〈S,A, p, r, γ〉, where S is the state space,A is the action space, p(s′|s, a) is the transition function, r : S×A → [0, 1] is the deterministic\nreward function, and 0 ≤ γ < 1 is the discount factor for rewards. 
Let S = |S| and A = |A| denote the number of states and the number of actions respectively.\nStarting from a state s1, the agent interacts with the environment for infinite number of time steps. At each time step, the agent observes state st ∈ S, picks action at ∈ A, and receives reward rt; the system then transits to next state st+1.\nUsing the notations in Strehl et al. (2006), a policy πt refers to the non-stationary control policy of the algorithm since step t. We use V πt(s) to denote the value function under policy πt, which is defined as V πt(s) = E[ ∑∞ i=1 γ\ni−1r(si, πt+i−1(si))|s1 = s]. We also use V ∗(s) = supπ V π(s) to denote the value function of the optimal policy. Accordingly, we define Qπt(s, a) = r(s, a) + E[ ∑∞ i=2 γ\ni−1r(si, πt+i−1(si))|s1 = s, a1 = a] as the Q function under policy πt; Q∗(s, a) is the Q function under optimal policy π∗.\nWe use the sample complexity of exploration defined in Kakade et al. (2003) to measure the learning efficiency of our algorithm. This sample complexity definition has been widely used in previous works Strehl et al. (2006); Lattimore & Hutter (2012); Strehl & Littman (2008). Definition 1. Sample complexity of Exploration of an algorithm ALG is defined as the number of time steps t such that the non-stationary policy πt at time t is not -optimal for current state st, i.e. V πt (st) < V ∗ (st)− .\nRoughly speaking, this measure counts the number of mistakes along the whole trajectory. We use the following definition of PAC-MDP Strehl et al. (2006). Definition 2. An algorithm ALG is said to be PAC-MDP (Probably Approximately Correct in Markov Decision Processes) if, for any and δ, the sample complexity of ALG is less than some polynomial in the relevant quantities (S,A, 1/ , 1/δ, 1/(1− γ)), with probability at least 1− δ.\nFinally, recall that Bellman equation is defined as the following:{ V πt(s) = Qπt (s, πt(s)) Qπt(s, a) := (rt + γPV πt+1) (s, a), { V ∗(s) = Q∗ (s, π∗(s)) Q∗(s, a) := (rt + γPV ∗) (s, a),\nwhich is frequently used in our analysis. Here we denote [PV πt ] (s, a) := Es′∼p(·|s,a)V πt+1 (s′)." }, { "heading": "3 MAIN RESULTS", "text": "In this section, we present the UCB Q-learning algorithm and the sample complexity bound." }, { "heading": "3.1 ALGORITHM", "text": "Algorithm 1 Infinite Q-learning with UCB Parameters: , γ, δ Initialize Q(s, a), Q̂(s, a)← 11−γ , N(s, a)← 0, 1 ←\n24RM ln 11−γ\n, H ← ln 1/((1−γ) 1)ln 1/γ .\nDefine ι(k) = ln(SA(k + 1)(k + 2)/δ), αk = H+1H+k . for t = 1, 2, ... do\n5: Take action at ← arg maxa′ Q̂(st, a′) Receive reward rt and transit to st+1 N(st, at)← N(st, at) + 1 k ← N(st, at), bk ← c21−γ √ Hι(k) k . c2 is a constant and can be set to 4 √ 2\nV̂ (st+1)← maxa∈A Q̂(st+1, a) 10: Q(st, at)← (1− αk)Q(st, at) + αk [ r(st, at) + bk + γV̂ (st+1) ] Q̂(st, at)← min(Q̂(st, at), Q(st, at))\nend for\nHere c2 = 4 √\n2 is a constant. R = dln 3 (1−γ)/(1 − γ)e, while the choice of M can be found in Section. 3.3. (M = O (ln 1/((1− γ) ))). The learning rate is defined as αk = (H + 1)/(H + k). H is chosen as ln 1/((1−γ) 1)ln 1/γ , which satisfies H ≤ ln 1/((1−γ) 1) 1−γ .\nOur UCB Q-learning algorithm (Algorithm 1) maintains an optimistic estimation of action value function Q(s, a) and its historical minimum value Q̂(s, a). Nt(s, a) denotes the number of times that (s, a) is experienced before time step t; τ(s, a, k) denotes the time step t at which (st, at) = (s, a) for the k-th time; if this state-action pair is not visited that many times, τ(s, a, k) = ∞. 
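To make the update rule of Algorithm 1 concrete, here is a minimal Python sketch of a single interaction step. The constants follow the listed parameter choices (H, ι(k), α_k and b_k with c2 = 4√2); the env.step interface returning a (next state, reward) pair is an assumption, and states and actions are taken to be integer-indexed.

import math
import numpy as np

def make_ucb_agent(S, A, gamma, eps1, delta, c2=4 * math.sqrt(2)):
    # constants as in the parameter list of Algorithm 1
    H = math.log(1.0 / ((1.0 - gamma) * eps1)) / math.log(1.0 / gamma)
    Q = np.full((S, A), 1.0 / (1.0 - gamma))    # optimistic initialisation
    Qhat = Q.copy()                             # running minimum of Q
    N = np.zeros((S, A), dtype=int)

    def step(env, s):
        a = int(np.argmax(Qhat[s]))             # greedy w.r.t. Qhat
        s2, r = env.step(a)                     # assumed environment interface
        N[s, a] += 1
        k = N[s, a]
        alpha = (H + 1.0) / (H + k)
        iota = math.log(S * A * (k + 1) * (k + 2) / delta)
        b = c2 / (1.0 - gamma) * math.sqrt(H * iota / k)   # UCB bonus b_k
        V2 = Qhat[s2].max()                     # V_hat(s_{t+1})
        Q[s, a] = (1.0 - alpha) * Q[s, a] + alpha * (r + b + gamma * V2)
        Qhat[s, a] = min(Qhat[s, a], Q[s, a])   # keep the historical minimum
        return s2

    return step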
Qt(s, a) and Q̂t(s, a) denotes the Q and Q̂ value of (s, a) that the algorithm maintains when arriving at st respectively." }, { "heading": "3.2 SAMPLE COMPLEXITY OF EXPLORATION", "text": "Our main result is the following sample complexity of exploration bound.\nTheorem 1. For any > 0, δ > 0, 1/2 < γ < 1, with probability 1− δ, the sample complexity of exploration (i.e., the number of time steps t such that πt is not -optimal at st) of Algorithm 1 is at most\nÕ\n( SA ln 1/δ\n2 (1− γ)7\n) ,\nwhere Õ suppresses logarithmic factors of 1/ , 1/(1− γ) and SA.\nWe first point out the obstacles for proving the theorem and reasons why the techniques in Jin et al. (2018) do not directly apply here. We then give a high level description of the ideas of our approach.\nOne important issue is caused by the difference in the definition of sample complexity for finite and infinite horizon MDP. In finite horizon settings, sample complexity (and regret) is determined in the first T timesteps, and only measures the performance at the initial state s1 (i.e. (V ∗ − V π)(s1)). However, in the infinite horizon setting, the agent may enter under-explored regions at any time period, and sample complexity of exploration characterizes the performance at all states the agent enters.\nThe following example clearly illustrates the key difference between infinite-horizon and finitehorizon. Consider an MDP with a starting state s1 where the probability of leaving s1 is o(T−1). In this case, with high probability, it would take more than T timesteps to leave s1. Hence, guarantees about the learning in the first T timesteps or about the performance at s1 imply almost nothing about the number of mistakes the algorithm would make in the rest of the MDP (i.e. the sample complexity of exploration of the algorithm). As a result, the analysis for finite horizon MDPs cannot be directly applied to infinite horizon setting.\nThis calls for techniques for counting mistakes along the entire trajectory, such as those employed by Strehl et al. (2006). In particular, we need to establish convenient sufficient conditions for being -optimal at timestep t and state st, i.e. V ∗(st) − V πt(st) ≤ . Then, bounding the number of violations of such conditions gives a bound on sample complexity.\nAnother technical reason why the proof in Jin et al. (2018) cannot be directly applied to our problem is the following: In finite horizon settings, Jin et al. (2018) decomposed the learning error at episode k and time h as errors from a set of consecutive episodes before k at time h+ 1 using a clever design of learning rate. However, in the infinite horizon setting, this property does not hold. Suppose at time t the agent is at state st and takes action at. Then the learning error at t only depends on those previous time steps such that the agent encountered the same state as st and took the same action as at. Thus the learning error at time t cannot be decomposed as errors from a set of consecutive time steps before t, but errors from a set of non-consecutive time steps without any structure. Therefore, we have to control the sum of learning errors over an unstructured set of time steps. This makes the analysis more challenging.\nNow we give a brief road map of the proof of Theorem 1. Our first goal is to establish a sufficient condition so that πt learned at step t is -optimal for state st. 
As an intermediate step we show that a sufficient condition for V ∗(st)− V πt(st) ≤ is that V ∗(st′)−Q∗(st′ , at′) is small for a few time steps t′ within an interval [t, t+R] for a carefully chosen R (Condition 1). Then we show the desired sufficient condition (Condition 2) implies Condition 1. We then bound the total number of bad time steps on which V ∗(st)−Q∗(st, at) is large for the whole MDP; this implies a bound on the number of violations of Condition 2. This in turn relies on a key technical lemma (Lemma 2).\nThe remaining part of this section is organized as follows. We establish the sufficient condition for -optimality in Section 3.3. The key lemma is presented in Section 3.4. Finally we prove Theorem 1 in Section 3.5." }, { "heading": "3.3 SUFFICIENT CONDITION FOR -OPTIMALITY", "text": "In this section, we establish a sufficient condition (Condition 2) for -optimality at time step t.\nFor a fixed st, let TRAJ(R) be the set of length-R trajectories starting from st. Our goal is to give a sufficient condition so that πt, the policy learned at step t, is -optimal. For any 2 > 0, define R := dln 1 2(1−γ)/(1− γ)e. Denote V ∗(st)−Q∗(st, at) by ∆t. We have\nV ∗(st)− V πt(st) =V ∗(st)−Q∗(st, at) +Q∗(st, at)− V πt(st) =V ∗(st)−Q∗(st, at) + γP (V ∗ − V πt) (st, πt(st))\n=V ∗(st)−Q∗(st, at) + γ ∑ st+1 p (st+1|st, πt(st)) · [V ∗(st+1)−Q∗(st+1, at+1)] +\nγ ∑\nst+1,st+2\np (st+2|st+1, πt+1(st+1)) · p (st+1|st, πt(st)) [V ∗(st+2)−Q∗(st+2, at+2)]\n. . .\n≤ 2 + ∑ traj∈\nTRAJ(R)\np(traj) · R−1∑ j=0 γj∆t+j , (1) where the last inequality holds because γ R\n1−γ ≤ 2, which follows from the definition of R.\nFor any fixed trajectory of length R starting from st, consider the sequence (∆t′)t≤t′<t+R. Let X (i) t be the i-th largest item of (∆t′)t≤t′<t+R. Rearranging Eq. (1), we obtain\nV ∗(st)− V πt(st) ≤ 2 + Etraj [ R∑ i=1 γi−1X (i) t ] . (2)\nWe first prove that Condition 1 implies -optimality at time step t when 2 = /3. Condition 1. Let ξi := 12i+2 2 ( ln 11−γ )−1 . For all 0 ≤ i ≤ blog2Rc,\nE[X (2i) t ] ≤ ξi. (3)\nClaim 1. If Condition 1 is satisfied at time step t, the policy πt is -optimal at state st, i.e. V ∗(st)− V πt(st) ≤ .\nProof. Note that X(i)t is monotonically decreasing with respect to i. Therefore, E[X (i) t ] ≤ E[X (2blog2 ic) t ]. Eq. (3) implies that for 1/2 < γ < 1,\nE [ R∑ i=1 γi−1X (i) t ] = R∑ i=1 γi−1E[X (i) t ] ≤ R∑ i=1 γi−1E[X (2blog2 ic) t ]\n≤ R∑ i=1 γi−12−blog2 ic−2 2 ( ln 1 1− γ )−1 ≤ R∑ i=1 γi−1 i 2 ( ln 1 1− γ )−1 ≤ 2 2,\nwhere the last inequality follows from the fact that ∑∞ i=1 γi−1 i = 1 γ ln 1 1−γ and γ > 1/2.\nCombining with Eq. 2, we have, V ∗(st)− V πt(st) ≤ 2 + E [∑R i=1 γ i−1X (i) t ] ≤ 3 2 = .\nNext we show that given i, t, Condition 2 implies Eq. (3). Condition 2. Define L = blog2Rc. Let M = max { d2 log2 1ξL(1−γ)e, 10 } , and ηj = ξiM · 2 j−1. For all 2 ≤ j ≤M , ηj Pr[X(2 i) t > ηj−1] ≤ ξi M .\nClaim 2. Given i, t, Eq. (3) holds if Condition 2 is satisfied.\nProof. The reason behind the choice of M is to ensure that ηM > 1/(1 − γ) 1. It follows that, assuming Condition 2 holds, for 1 ≤ j ≤M ,\nE [ X (2i) t ] = ∫ 1/(1−γ) 0 Pr [ X (2i) t > x ] dx ≤ η1 + M∑ j=2 ηj Pr[X (2i) t > ηj−1] ≤ ξi.\nTherefore, if a time step t is not 2-optimal, there exists 0 ≤ i < blog2Rc and 2 ≤ j ≤M such that\nηj Pr[X (2i) t > ηj−1] > ξi M . (4)\nNow, the sample complexity can be bounded by the number of (t, i, j) pairs that Eq. (4) is violated. Following the approach of Strehl et al. 
(2006), for a fixed (i, j)-pair, instead of directly counting the number of time steps t such that Pr[X(2 i) t > ηj−1] > ξi Mηj , we count the number of time steps that X (2i) t > ηj−1. Lemma 1 provides an upper bound of the number of such t." }, { "heading": "3.4 KEY LEMMAS", "text": "In this section, we present two key lemmas. Lemma 1 bounds the number of sub-optimal actions, which in turn, bounds the sample complexity of our algorithm. Lemma 2 bounds the weighted sum of learning error, i.e. (Q̂t −Q∗)(s, a), with the sum and maximum of weights. Then, we show that Lemma 1 follows from Lemma 2.\nLemma 1. For fixed t and η > 0, let B(t)η be the event that V ∗(st)−Q∗(st, at) > η1−γ in step t. If η > 2 1, then with probability at least 1− δ/2,\nt=∞∑ t=1 I [ B(t)η ] ≤ SA lnSA ln 1/δ η2(1− γ)3 · polylog ( 1 1 , 1 1− γ ) , (5)\nwhere I[·] is the indicator function.\nBefore presenting Lemma 2, we define a class of sequence that occurs in the proof.\nDefinition 3. A sequence (wt)t≥1 is said to be a (C,w)-sequence for C,w > 0, if 0 ≤ wt ≤ w for all t ≥ 1, and ∑ t≥1 wt ≤ C.\nLemma 2. For every (C,w)-sequence (wt)t≥1, with probability 1− δ/2, the following holds:\n∑ t≥1 wt(Q̂t −Q∗)(st, at) ≤ C 1 1− γ +O\n(√ wSAC`(C)\n(1− γ)2.5 + wSA lnC (1− γ)3 ln\n1\n(1− γ) 1\n) .\nwhere `(C) = ι(C) ln 1(1−γ) 1 is a log-factor.\nProof of Lemma 2 is quite technical, and is therefore deferred to supplementary materials.\n1 ηM > 1/(1−γ) can be verified by combining inequalities ξi ·2M/2 ≥ 1/(1−γ) and 2M/2−1 > (M +1) for large enough M .\nNow, we briefly explain how to prove Lemma 1 with Lemma 2. (Full proof can be found in supplementary materials.) Note that since Q̂t ≥ Q∗ and at = arg maxa Q̂t(st, a),\nV ∗(st)−Q∗(st, at) ≤ Q̂t(st, at)−Q∗(st, at). We now consider a set J = {t : V ∗(st) − Q∗(st, at) > η(1 − γ)−1}, and consider the (|J |, 1)- weight sequence defined by wt = I [t ∈ J ]. We can now apply Lemma 2 to weighted sum∑\nt≥1 wt [V ∗(st)−Q∗(st, at)] . On the one hand, this quantity is obviously at least |J |η(1− γ)−1. On the other hand, by lemma 2, it is upper bounded by the weighted sum of (Q̂−Q∗)(st, at). Thus we get\n|J |η(1− γ)−1 ≤ C 1 1− γ +O (√ SA|J |`(|J |) (1− γ)2.5 + wSA ln |J | (1− γ)3 ln 1 (1− γ) 1 ) .\nNow focus on the dependence on |J |. The left-hand-side has linear dependence on |J |, whereas the left-hand-side has a Õ (√ |J | )\ndependence. This allows us to solve out an upper bound on |J | with quadratic dependence on 1/η." }, { "heading": "3.5 PROOF FOR THEOREM 1", "text": "We prove the theorem by stitching Lemma 1 and Condition 2.\nProof. (Proof for Theorem 1) By lemma 1, for any 2 ≤ j ≤M , ∑∞ t=1 I [V ∗(st)−Q∗(st, at) > ηj−1] ≤ C, where\nC = SA lnSA ln 1/δ\nη2j−1(1− γ)5 · P̃ . (6)\nHere P̃ is a shorthand for polylog (\n1 1 , 11−γ\n) .\nLet At = I[X (2i) t ≥ ηj−1] be a Bernoulli random variable, and {Ft}t≥1 be the filtration generated by random variables {(sτ , aτ ) : 1 ≤ τ ≤ t}. Since At is Ft+R−measurable, for any 0 ≤ k < R, {Ak+tR − E[Ak+tR | Fk+tR]}t≥0 is a martingale difference sequence. For now, consider a fixed 0 ≤ k < R. By Azuma-Hoeffiding inequality, after T = O ( C 2i · Mηj ξi ln(RML) )\ntime steps (if it happens that many times) with\nPr [ X\n(2i) k+tR ≥ ηj−1 ] = E[Ak+tR] >\nξi Mηj , (7)\nwe have ∑ tAk+tR ≥ C/2i with probability at least 1− δ/(2MRL).\nOn the other hand, if Ak+tR happens, within [k + tR, k + tR + R − 1], there must be at least 2i time steps at which V ∗(st) − Q∗(st, at) > ηj−1. The latter event happens at most C times, and [k + tR, k + tR + R − 1] are disjoint. 
Therefore, ∑∞ t=0Ak+tR ≤ C/2i. This suggests that the event described by (7) happens at most T times for fixed i and j. Via a union bound on 0 ≤ k < R, we can show that with probability 1− δ/(2ML), there are at most RT time steps where Pr [ X (2i) t ≥ ηj−1 ] > ξi/(Mηj). Thus, the number of sub-optimal steps is bounded by,\n∞∑ t=1 I[V ∗(st)− V πt(st) > ]\n≤ ∞∑ t=1 L∑ i=0 M∑ j=2 I [ ηj Pr[X (2i) t > ηj−1] > ξi M ] = L∑ i=0 M∑ j=2 ∞∑ t=1 I [ Pr[X (2i) t > ηj−1] > ξi ηjM ] ≤ L∑ i=0 M∑ j=2 SAMR ln 1/δ lnSA ηjξi · 2i(1− γ)5 P̃ ≤ L∑ i=0 SA · 2i+4 lnSA ln 1/δ 22(1− γ)6 P̃ (By definition of ξi and ηj)\n≤ SAR lnSA ln 1/δ 22(1− γ)6 P̃ ≤ SA lnSA ln 1/δ 22(1− γ)7 P̃ . (By definition of R)\nIt should be stressed that throughout the lines, P̃ is a shorthand for an asymptotic expression, instead of an exact value. Our final choice of 2 and 1 are 2 = 3 , and 1 =\n24RM ln 11−γ\n. It is not hard\nto see that ln 1/ 1 = poly(ln 1 , ln 1 1−γ ). This immediately implies that with probability 1− δ, the number of time steps such that (V ∗ − V π) (st) > is\nÕ ( SA ln 1/δ\n2(1− γ)7\n) ,\nwhere hidden factors are poly(ln 1 , ln 1 1−γ , lnSA)." }, { "heading": "4 DISCUSSION", "text": "In this section, we discuss the implication of our results, and present some interesting properties of our algorithm beyond its sample complexity bound." }, { "heading": "4.1 COMPARISON WITH PREVIOUS RESULTS", "text": "Lower bound To the best of our knowledge, the current best lower bound for worst-case sample complexity is Ω ( SA 2(1−γ)3 ln 1/δ )\ndue to Lattimore & Hutter (2012). The gap between our results and this lower bound lies only in the dependence on 1/(1−γ) and logarithmic terms of SA, 1/(1−γ) and 1/ .\nModel-free algorithms Previously, the best sample complexity bound for a model-free algorithm is Õ ( SA\n4(1−γ)8\n) (suppressing all logarithmic terms), achieved by Delayed Q-learning Strehl et al.\n(2006). Our results improve this upper bound by a factor of 1 2(1−γ) , and closes the quadratic gap in 1/ between Delayed Q-learning’s result and the lower bound. In fact, the following theorem shows that UCB Q-learning can indeed outperform Delayed Q-learning. Theorem 2. There exists a family of MDPs with constant S and A, in which with probability 1− δ, Delayed Q-learning incurs sample complexity of exploration of Ω ( −3\nln(1/δ)\n) , assuming that\nln(1/δ) < −2.\nThe construction of this hard MDP family is given in the supplementary material.\nModel-based algorithms For model-based algorithms, better sample complexity results in infinite horizon settings have been claimed Szita & Szepesvári (2010). To the best of our knowledge, the best published result without further restrictions on MDPs is Õ ( SA\n2(1−γ)6\n) claimed by Szita & Szepesvári\n(2010), which is (1− γ) smaller than our upper bound. From the space complexity point of view, our algorithm is much more memory-efficient. Our algorithm stores O(SA) values, whereas the algorithm in Szita & Szepesvári (2010) needs Ω(S2A) memory to store the transition model." }, { "heading": "4.2 EXTENSION TO OTHER SETTINGS", "text": "Due to length limits, detailed discussion in this section is deferred to supplementary materials. Finite horizon MDP The sample complexity of exploration bounds of UCB Q-learning implies Õ ( −2 ) PAC sample complexity and a Õ ( T 1/2 ) regret bound in finite horizon MDPs. That is, our algorithm implies a PAC algorithm for finite horizon MDPs. 
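The reduction behind this claim, spelled out in Appendix C, can be sketched as a thin wrapper around the finite-horizon MDP. Here mdp.sample_next and mdp.reward are assumed interfaces for the finite-horizon dynamics p_h and rewards r_h, with steps indexed h = 1, ..., H.

class WrappedMDP:
    # Wrap a finite-horizon MDP (horizon H, fixed start state s1) into an
    # infinite-horizon discounted MDP over states (s, h), as in Appendix C.
    def __init__(self, mdp, H, s1):
        self.mdp, self.H, self.s1 = mdp, H, s1
        self.gamma = 1.0 - 1.0 / H

    def step(self, state, action):
        s, h = state
        if h == self.H:
            # episode boundary: zero reward, deterministic reset to (s1, 1)
            return (self.s1, 1), 0.0
        s2 = self.mdp.sample_next(s, h, action)          # assumed interface
        r = self.gamma ** (self.H - h + 1) * self.mdp.reward(s, h, action)
        return (s2, h + 1), r

By the argument in Appendix C.1, an ε-optimal policy in the wrapped MDP corresponds to an (ε/γ^H)-optimal policy in the original finite-horizon MDP, and γ^H = (1 − 1/H)^H is bounded below by a constant.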
We are not aware of reductions of the opposite direction (from finite horizon sample complexity to infinite horizon sample complexity of exploration). Regret The reason why our results can imply an Õ( √ T ) regret is that, after choosing 1, it follows from the argument of Theorem 1 that with probability 1− δ, for all 2 > Õ( 1/(1− γ)), the number of 2-suboptimal steps is bounded by\nO ( SA lnSA ln 1/δ\n22(1− γ)7 polylog\n( 1\n1 ,\n1\n1− γ\n)) .\nIn contrast, Delayed Q-learning Strehl et al. (2006) can only give an upper bound on 1-suboptimal steps after setting parameter 1." }, { "heading": "5 CONCLUSION", "text": "Infinite-horizon MDP with discounted reward is a setting that is arguably more difficult than other popular settings, such as finite-horizon MDP. Previously, the best sample complexity bound achieved by model-free reinforcement learning algorithms in this setting is Õ( SA 4(1−γ)8 ), due to Delayed Q-learning Strehl et al. (2006). In this paper, we propose a variant of Q-learning that incorporates upper confidence bound, and show that it has a sample complexity of Õ( SA 2(1−γ)7 ). This matches the best lower bound except in dependence on 1/(1− γ) and logarithmic factors." }, { "heading": "6 ACKNOWLEDGEMENTS", "text": "The authors thank Chi Jin and Chongjie Zhang for helpful discussions. This work is supported by National Basic Research Program of China (973 Program) (grant no. 2015CB352502), NSFC (61573026), BJNSF (L172037) and Beijing Acedemy of Artificial Intelligence." }, { "heading": "A PROOF OF LEMMA 1", "text": "Lemma 1. For fixed t and η > 0, let B(t)η be the event that V ∗(st)−Q∗(st, at) > η1−γ in step t. If η > 2 1, then with probability at least 1− δ/2,\nt=∞∑ t=1 I [ B(t)η ] ≤ SA lnSA ln 1/δ η2(1− γ)3 · polylog ( 1 1 , 1 1− γ ) , (8)\nwhere I[·] is the indicator function.\nProof. When η > 1 the lemma holds trivially. Now consider the case that η ≤ 1. Let I = {t : V ∗(st)−Q∗(st, at) > η1−γ }. By lemma 2, with probability 1− δ,\nη|I| 1− γ ≤ ∑ t∈I (V ∗(st)−Q∗(st, at)) ≤ ∑ t∈I [( Q̂t −Q∗ ) (st, at) ] ≤ |I| 1\n1− γ +O\n( 1 (1− γ)5/2 √ SA|I|`(|I|) + SA (1− γ)3 ln |I| ln 1 1(1− γ) )\n≤ |I| 1 1− γ +O ln 1 1(1− γ) · √ SA|I| ln SA|I|δ (1− γ)5/2 + SA ln |I| (1− γ)3 ≤ |I| 1\n1− γ +O\n(√ ln 1\nδ ln\n1\n1(1− γ) · (√ SA|I| lnSA|I| (1− γ)5/2 + SA ln |I| (1− γ)3 ))\nSuppose that |I| = SAk 2\nη2(1−γ)3 lnSA, for some k > 1. Then it follows that for some constant C1,\nη|I| 1− γ = k2SA lnSA (1− γ)4η ≤ 2(η − 1)|I| 1− γ\n≤ C1 √ ln 1\nδ ln\n1\n1(1− γ)\n(√ SA|I| ln (SA|I|)\n(1− γ)5/2 + SA ln |I| (1− γ)3\n)\n≤ C1 √ ln 1\nδ ln\n1\n1(1− γ)\n( SAk η(1− γ)4 √ lnSA · (lnSA+ ln |I|) + SA ln |I| (1− γ)3 ) .\nTherefore\nk2 ln(SA) ≤ C1 √ ln 1\nδ ln\n1\n1(1− γ) (k (lnSA+ ln |I|) + η(1− γ) ln |I|)\n≤ kC1 √ ln 1\nδ ln\n1\n1(1− γ) · (lnSA+ 2 ln |I|)\n≤ kC1 √ ln 1\nδ ln\n1 1(1− γ) · ( 3 lnSA+ 4 ln k + 6 ln 1 η(1− γ) ) ≤ 6kC1 √ ln 1\nδ ln2\n1\n1(1− γ) (lnSA+ ln ek) .\nLet C ′ = max{2, 6C1 √\nln 1δ ln 2 1 1(1−γ)}. Then\nk ≤ C ′(2 + ln k). (9)\nIf k ≥ 10C ′ lnC ′, then\nk − C ′ (2 + ln k) ≥ 8C ′ lnC ′ − (2 + ln 10)C ′\n≥ 4C ′ (2 lnC ′ − 4) ≥ 0,\nwhich means violation of (9). Therefore, since C ′ ≥ 2\nk ≤ 10C ′ lnC ′ ≤ 360C21 max{ln 4 1\n1(1− γ) , 20 ln 2}. (10)\nIt immediately follows that\n|I| = SAk 2\nη2(1− γ)3 lnSA (11)\n≤ SA lnSA η2(1− γ)5 · ln 1 δ · O ( ln8 1 1(1− γ) ) . (12)" }, { "heading": "B PROOF OF LEMMA 2", "text": "Lemma 2. 
For every (C,w)-sequence (wt)t≥1, with probability 1− δ/2, the following holds:\n∑ t≥1 wt(Q̂t −Q∗)(st, at) ≤ C 1 1− γ +O\n(√ wSAC`(C)\n(1− γ)2.5 + wSA lnC (1− γ)3 ln\n1\n(1− γ) 1\n) .\nwhere `(C) = ι(C) ln 1(1−γ) 1 is a log-factor.\nFact 1. (1) The following statement holds throughout the algorithm,\nQ̂p+1(s, a) ≤ Qp+1(s, a).\n(2) For any p, there exists p′ ≤ p such that\nQ̂p+1(s, a) ≥ Qp′+1(s, a).\nProof. Both properties are results of the update rule at line 11 of Algorithm 1.\nBefore proving lemma 2, we will prove two auxiliary lemmas.\nLemma 3. The following properties hold for αit :\n1. √\n1 t ≤ ∑t i=1 α i t √ 1 i ≤ 2 √ 1 t for every t ≥ 1, c > 0.\n2. maxi∈[t] αit ≤ 2Ht and ∑t i=1(α i t) 2 ≤ 2Ht for every t ≥ 1.\n3. ∑∞ t=i α i t = 1 + 1/H, for every i ≥ 1.\n4. √\nι(t) t ≤ ∑t i=1 α i t √ ι(i) i ≤ 2 √ ι(t) t where ι(t) = ln(c(t+1)(t+2)), for every t ≥ 1, c ≥ 1.\nProof. Recall that\nαt = H + 1\nH + t , α0t = t∏ j=1 (1− αj), αit = αi t∏ j=i+1 (1− αj).\nProperties 1-3 are proven by Jin et al. (2018). Now we prove the last property.\nOn the one hand, t∑ i=1 αit\n√ ι(i)\ni ≤ t∑ i=1 αit\n√ ι(t) i ≤ 2 √ ι(t) t ,\nwhere the last inequality follows from property 1.\nThe left-hand side is proven by induction on t. For the base case, when t = 1, αtt = 1. For t ≥ 2, we have αit = (1− αt)αit−1 for 1 ≤ i ≤ t− 1. It follows that\nt∑ i=1 αit\n√ ι(i)\ni = αt\n√ ι(t)\nt + (1− αt) t−1∑ i=1 αit−1\n√ ι(i)\ni ≥ αt\n√ ι(t)\nt + (1− αt) √ ι(t− 1) t− 1 .\nSince function f(t) = ι(t)/t is monotonically decreasing for t ≥ 1, c ≥ 1, we have\nαt\n√ ι(t)\nt + (1− αt) √ ι(t− 1) t− 1 ≥ αt √ ι(t) t + (1− αt) √ ι(t) t ≥ √ ι(t) t .\nLemma 4. With probability at least 1− δ/2, for all p ≥ 0 and (s, a)-pair,\n0 ≤ (Qp −Q∗)(s, a) ≤ α0t\n1− γ + t∑ i=1 γαit(V̂ti − V ∗)(sti+1) + βt, (13)\n0 ≤ (Q̂p −Q∗)(s, a), (14) where t = Np(s, a), ti = τ(s, a, i) and βt = c3 √ Hι(t)/((1− γ)2t).\nProof. Recall that\nα0t = t∏ j=1 (1− αj), αit = αi t∏ j=i+1 (1− αj).\nFrom the update rule, it can be seen that our algorithm maintains the following Q(s, a):\nQp(s, a) = α 0 t\n1\n1− γ + t∑ i=1 αit [ r(s, a) + bi + γV̂ti(sti+1) ] .\nBellman optimality equation gives:\nQ∗(s, a) = r(s, a) + γPV ∗(s, a) = α0tQ∗(s, a) + t∑ i=1 αit [r(s, a) + γPV ∗(s, a)] .\nSubtracting the two equations gives\n(Qp −Q∗)(s, a) = α0t ( 1\n1− γ −Q∗(s, a)) + t∑ i=1 αit [bi + γ (Vti − V ∗) (sti+1) + γ (V ∗(sti+1)− PV ∗(s, a))] .\nThe identity above holds for arbitrary p, s and a. Now fix s ∈ S, a ∈ A and p ∈ N. Let t = Np(s, a), ti = τ(s, a, i). The t = 0 case is trivial; we assume t ≥ 1 below. Now consider an arbitrary fixed k. Define\n∆i = ( αik · I[ti <∞] · ( PV ∗ − P̂tiV ∗ ) (s, a) ) Let Fi be the σ-Field generated by random variables (s1, a1, ..., sti , ati). It can be seen that E [∆i|Fi] = 0, while ∆i is measurable in Fi+1. Also, since 0 ≤ V ∗(s, a) ≤ 11−γ , |∆i| ≤ 2 1−γ . Therefore, ∆i is a martingale difference sequence; by the Azuma-Hoeffding inequality,\nPr [∣∣∣∣∣ k∑ i=1 ∆i ∣∣∣∣∣ > η ] ≤ 2 exp { − η 2 8 (1− γ)−2 ∑k i=1(α i k) 2 } . (15)\nBy choosing η, we can show that with probability 1− δ/ [SA(k + 1)(k + 2)],\n∣∣∣∣∣ k∑ i=1 ∆i ∣∣∣∣∣ ≤ 2 √ 2 1− γ · √√√√ k∑ i=1 (αik) 2 · ln 2(k + 1)(k + 2)SA δ ≤ c2 1− γ √ Hι(k) k . (16)\nHere c2 = 4 √\n2, ι(k) = ln (k+1)(k+2)SAδ . 
By a union bound for all k, this holds for arbitrary k > 0, arbitrary s ∈ S, a ∈ A simultaneously with probability\n1− ∑\ns′∈S,a′∈A ∞∑ k=1\nδ 2SA(k + 1)(k + 2) = 1− δ 2 .\nTherefore, we conclude that (16) holds for the random variable t = Np(s, a) and for all p, with probability 1− δ/2 as well.\nProof of the right hand side of (13): We also know that (bk = c21−γ √ Hι(k) k )\nc2 1− γ\n√ Hι(k)\nk ≤ k∑ i=1 αikbi ≤ 2c2 1− γ\n√ Hι(k)\nk .\nIt is implied by (16) that\n(Qp −Q∗)(s, a) ≤ α0t\n1− γ + γ ∣∣∣∣∣ t∑ i=1 ∆i ∣∣∣∣∣+ t∑ i=1 αit [ γ(V̂ti − V ∗)(xti+1) + bi ] ≤ α 0 t\n1− γ + 3c2 1− γ\n√ Hι(t)\nt + t∑ i=1 γαit(V̂ ti − V ∗)(xti+1)\n(Property 4 of lemma 3)\n≤ α 0 t\n1− γ + t∑ i=1 γαit(V̂ ti − V ∗)(xti+1) + βt.\nNote that βt = c3(1− γ)−1 √ Hι(t)/t; c3 = 3c2 = 12 √ 2.\nProof of the left hand side of (13): Now, we assume that event that (16) holds. We assert that Qp ≥ Q∗ for all (s, a) and p ≤ p′. This assertion is obviously true when p′ = 0. Then\n(Qp −Q∗)(s, a) ≥ −γ ∣∣∣∣∣ t∑ i=1 ∆i ∣∣∣∣∣+ t∑ i=1 αit [ γ(V̂ti − V ∗)(xti+1) + bi ] ≥\nt∑ i=1 αitbi − γ ∣∣∣∣∣ t∑ i=1 ∆i ∣∣∣∣∣ ≥ 0. Therefore the assertion holds for p′ + 1 as well. By induction, it holds for all p.\nWe now see that (13) holds for probability 1− δ/2 for all p, s, a. Since Q̂p(s, a) is always greater than Qp′(s, a) for some p′ ≤ p, we know that Q̂p(s, a) ≥ Qp′(s, a) ≥ Q∗(s, a), thus proving (14).\nWe now give a proof for lemma 2. Recall the definition for a (C,w)-sequence. A sequence (wt)t≥1 is said to be a (C,w)-sequence for C,w > 0, if 0 ≤ wt ≤ w for all t ≥ 1, and ∑ t≥1 wt ≤ C.\nProof. Let nt = Nt(st, at) for simplicity; we have∑ t≥1 wt(Q̂t −Q∗)(st, at)\n≤ ∑ t≥1 wt(Qt −Q∗)(st, at)\n≤ ∑ t≥1 wt\n[ α0nt\n1− γ + βnt + γ nt∑ i=1 αint ( V̂τ(st,at,i) − V ∗ ) (sτ(st,at,i)+1) ] (17)\nThe last inequality is due to lemma 4. Note that α0nt = I[nt = 0], the first term in the summation can be bounded by, ∑\nt≥1\nwt α0nt 1− γ ≤ SAw 1− γ . (18)\nFor the second term, define u(s, a) = suptNt(s, a). 2 It follows that,\n∑ t≥1 wtβnt = ∑ s,a u(s,a)∑ i=1 wτ(s,a,i)βi\n≤ ∑ s,a (1− γ)−1c3 Cs,a/w∑ i=1\n√ Hι(i)\ni w (19)\n≤ 2 ∑ s,a (1− γ)−1c3 √ ι(C)HCs,aw (20)\n≤ 2c3(1− γ)−1 √ wSAHCι(C). (21)\nWhere Cs,a = ∑ t≥1,(st,at)=(s,a) wt. Inequality (19) follows from rearrangement inequality, since ι(x)/x is monotonically decreasing. Inequality (21) follows from Jensen’s inequality.\nFor the third term of the summation, we have∑ t≥1 wt nt∑ i=1 αint ( V̂τ(st,at,i) − V ∗ ) (sτ(st,at,i)+1)\n≤ ∑ t′≥1 ( V̂t′ − V ∗ ) (st′+1) ∞∑ t=t′+1\n(st,at)=(s ′ t,a ′ t)\nαnt′nt wt . (22) (23)\nDefine\nw′t′+1 = ∞∑ t=t′+1\n(st,at)=(s ′ t,a ′ t)\nαnt′nt wt . We claim that w′t+1 is a (C, (1 + 1 H )w)-sequence. We now prove this claim. By lemma 3, for any t′ ≥ 0,\nw′t′+1 ≤ w ∞∑\nj=nt′\nα nt′ j = (1 + 1/H)w.\n2u(s, a) could be infinity when (s, a) is visited for infinite number of times.\nBy ∑i j=0 α j i = 1, we have ∑ t′≥1 w ′ t′+1 ≤ ∑ t≥1 wt ≤ C. This proves the assertion. It follows from (22) that ∑ t≥1 w′t+1 ( V̂t − V ∗ ) (st+1)\n= ∑ t≥1 w′t+1 ( V̂t+1 − V ∗ ) (st+1) + ∑ t≥1 w′t+1 ( V̂t − V̂t+1 ) (st+1) (24)\n≤ ∑ t≥1 w′t+1 ( V̂t+1 − V ∗ ) (st+1) + ∑ t≥1 w′t+1 ( 2αnt+1 1 1− γ ) (25)\n≤ ∑ t≥1 w′t+1 ( V̂t+1 − V ∗ ) (st+1) +O ( wSAH 1− γ lnC ) (26)\n≤ ∑ t≥1 w′t+1 ( Q̂t+1 −Q∗ ) (st+1, at+1) +O ( wSAH 1− γ lnC ) (27)\nInequality (25) comes from the update rule of our algorithm. Inequality (26) comes from the fact that αt = (H + 1)/(H + t) ≤ H/t and Jensen’s Inequality. More specifically, let C ′s,a =∑ t≥1,(st,at=s,a w ′ t+1, w ′ = w(1 + 1/H). 
Then\n∑ t≥1 w′t+1αnt+1 ≤ ∑ s,a C′s,a/w ′∑ n=1 w′ H n ≤ ∑ s,a Hw′ ln(C ′s,a/w) ≤ 2SAHw lnC.\nPutting (18), (21) and (27) together, we have,\n∑ t≥1 wt(Q̂t −Q∗)(st, at)\n≤ 2c3 √ wSAHCι(C)\n1− γ +O\n( wSAH\n1− γ lnC\n) + γ ∑ t≥1 w′t+1 ( Q̂t+1 −Q∗ ) (st+1, at+1). (28)\nObserve that the third term is another weighted sum with the same form as (17). Therefore, we can unroll this term repetitively with changing weight sequences.Suppose that our original weight sequence is also denoted by {w(0)t }t≥1, while {w (k) t }t≥1 denotes the weight sequence after unrolling for k times. Let w(k) be w · (1 + 1/H)k. Then we can see that {w(k)t }t≥1 is a (C,w(k))-sequence. Suppose that we unroll for H times. Then∑\nt≥1\nwt(Q̂t −Q∗)(st, at)\n≤ 2c3\n√ w(H)SAHCι(C)\n(1− γ)2 +O\n( w(H)SAH\n(1− γ)2 lnC\n) + γH ∑ t≥1 w (H) t ( Q̂t −Q∗ ) (st, at)\n≤ 2c3\n√ w(H)SAHCι(C)\n(1− γ)2 +O\n( w(H)SAH\n(1− γ)2 lnC\n) + γH C\n1− γ .\nWe set H = ln 1/((1−γ) 1)ln 1/γ ≤ ln 1/((1−γ) 1) 1−γ . It follows that w (H) = (1 + 1/H)Hw(0) ≤ ew(0), and that γH C1−γ ≤ C 1. Also, let `(C) = ι(C) ln((1− γ) −1 −11 ). Therefore,\n∑ t≥1 wt(Q̂t −Q∗)(st, at) ≤ C 1 1− γ +O\n(√ wSAC`(C)\n(1− γ)2.5 +\nwSA\n(1− γ)3 lnC ln\n1\n(1− γ) 1\n) . (29)" }, { "heading": "C EXTENSION TO OTHER SETTINGS", "text": "First we define a mapping from a finite horizon MDP to an infinite horizon MDP so that our algorithm can be applied. For an arbitrary finite horizon MDPM = (S,A,H, rh(s, a), ph(s′ | s, a)) where H is the length of episode, the corresponding infinite horizon MDP M̄ = (S̄, Ā, γ, r̄(s̄, ā), p̄(s̄′ | s̄, ā)) is defined as,\n• S̄ = S ×H, Ā = A; • γ = (1− 1/H); • for a state s at step h, let s̄s,h be the corresponding state. For any action a and next state s′, define r̄(s̄s,h, a) = γH−h+1rh(s, a) and p̄(s̄s′,h+1 | s̄s,h, a) = ph(s′ | s, h). And for h = H , set r̄(s̄s,h, a) = 0 and p̄(s̄s′,1 | s̄s,h, a) = I[s′ = s1] for a fixed starting state s1.\nLet V̄t be the value function in M̄ at time t and V kh the value function inM at episode k, step h. It follows that V̄ ∗(s̄s1,1) = γH 1−γH V ∗ 1 (s1). And the policy mapping is defined as πh(s) = π̄(s̄s,h) for policy π̄ in M̄. Value functions in MDPM and M̄ are closely related in a sense that, any -optimal policy π̄ of M̄ corresponding to an ( /γH)-optimal policy π inM (see section C.1 for proof). Note that here γH = (1− 1/H)H = O(1) is a constant.\nFor any > 0, by running our algorithm on M̄ for Õ( 3SAH 9\n2 ) time steps, the starting state s1 is visited at least Õ( 3SAH 8\n2 ) times, and at most 1/3 of them are not -optimal. If we select the policy uniformly randomly from the policy πtH+1 for 0 ≤ t < T/H , with probability at least 2/3 we can get an -optimal policy. Therefore the PAC sample complexity is Õ ( −2 ) after hiding S,A,H terms.\nOn the other hand, we want to show that for any K episodes,\nRegret(T ) = T/H∑ k=1 [ V ∗(s1)− V k1 (s1) ] ∝ T 1/2.\nThe reason why our algorithm can have a better reduction from regret to PAC is that, after choosing 1, it follows from the argument of theorem 1 that for all 2 > Õ( 1/(1 − γ)), the number of 2-suboptimal steps is bounded by\nO ( SA lnSA ln 1/δ\n22(1− γ)7 polylog\n( 1\n1 ,\n1\n1− γ )) with probability 1−δ. In contrast, delayed Q-learning can only give an upper bound on 1-suboptimal steps after setting parameter 1.\nFormally, let Xk = V ∗(s1)− V k1 (s1) be the regret of k-th episode. For any T , set = √ SA/T and\n2 = Õ( 1/(1− γ)). Let M = dlog2 1 2(1−γ)e. 
It follows that,\nRegret(T ) ≤ T 2 + M∑ i=1 (∣∣k : {Xk ≥ 2 · 2i−1}∣∣) 2 · 2i ≤ Õ ( T 2 +\nM∑ i=1 SA ln 1/δ 2 · 2i−2 ) ≤ Õ (√ SAT ln 1/δ\n) with probability 1− δ. Note that the Õ notation hides the poly (1/(1− γ), log 1/ 1) which is, by our reduction, poly (H, log T, logS, logA).\nC.1 CONNECTION BETWEEN VALUE FUNCTIONS\nRecall that our MDP mapping from M = (S,A,H, rh(s, a), ph(s′ | s, a)) to M̄ = (S̄, Ā, γ, r̄(s̄, ā), p̄(s̄′ | s̄, ā)) is defined as,\n• S̄ = S ×H, Ā = A; • γ = (1− 1/H); • for a state s at step h, let s̄s,h be the corresponding state. For any action a and next state s′,\ndefine r̄(s̄s,h, a) = γH−h+1rh(s, a) and p̄(s̄s′,h+1 | s̄s,h, a) = ph(s, h). And for h = H , set r̄(s̄s,h, a) = 0 and p̄(s̄s′,1 | s̄s,h, a) = I[s′ = s1] for a fixed starting state s1.\nFor a trajectory {(s̄s1,1, ā1), (s̄s2,2, ā2), · · · } in M̄, let {(s1, a1), (s2, a2), · · · } be the corresponding trajectory inM. Note thatM has a unique fixed starting state s1, which means that stH+1 = s1 for all t ≥ 0. Denote the corresponding policy of π̄t as πt (may be non-stationary), then we have\nV̄ π̄ t (s̄s1,1) = E [ r̄(s̄s1,1, ā1) + γr̄(s̄s2,2, ā2) + · · ·+ γH−1r̄(s̄sH−1,H−1, āH−1) + γH V̄ πt+H−1(s̄sH+1,1) ] = γHE [ r1(s1, a1) + r2(s2, a2) + · · ·+ rH−1(sH−1, aH−1) + V̄ πt+H (s̄sH+1,1)\n] = γHV π t\n(s1) + γ H V̄ πt+H (s̄s1,1).\nThen for a stationary policy π̄, we can conclude V̄ π̄(s̄s1,1) = γH 1−γH V π(s1). Since the optimal policy π̄∗ is stationary, we have V̄ ∗(s̄s1,1) = γH 1−γH V ∗(s1).\nBy definition, π̄ is -optimal at time step t means that\nV̄ π̄ t (s̄s1,1) ≥ V̄ ∗(s̄s1,1)− . It follows that\nγHV π t\n(s1) + γ H V̄ πt+H (s̄s1,1) = V̄ π̄(s̄s1,1) ≥ V̄ ∗(s̄s1,1)− , hence\nγHV π t (s1) ≥ (1− γH)V̄ ∗(s̄s1,1) + γH(V̄ ∗(s̄s1,1)− V̄ πt+H (s̄s1,1))− ≥ (1− γH)V̄ ∗(s̄s1,1)− . Therefore we have\nV π t (s1) ≥ 1− γH\nγH V̄ ∗(s̄s1,1)− /γH = V ∗(s1)− /γH ,\nwhich means that πt is an ( /γH)-optimal policy." }, { "heading": "D A HARD INSTANCE FOR DELAYED Q-LEARNING", "text": "In this section, we prove Theorem 2 regarding the performance of Delayed Q-learning. Theorem 2. There exists a family of MDPs with constant S and A, in which with probability 1− δ, Delayed Q-learning incurs sample complexity of exploration of Ω ( −3\nln(1/δ)\n) , assuming that\nln(1/δ) < −2.\nProof. For each 0 < < 110 , consider the following MDP (see also Fig. 1): state space is S = {a, b, c} while action set isA = {x, y}; transition probabilities are P (b|a, y) = 1−10 , P (c|a, y) = 10 , P (b|a, x) = 1, P (a|b, ·) = P (a|c, ·) = 1. Rewards are all 1, except R(c, ·) = 0.\nAssume that Delayed Q-learning is called for this MDP starting from state a, with discount γ > 12 and precision set as . Denote the Q value maintained by the algorithm by Q̂. Without loss of generality, assume that the initial tie-breaking favors action y when comparing Q̂(a, x) and Q̂(a, y). In that case, unless Q̂(a, y) is updated, the agent will always choose y in state a. Since Q(a, x)−Q(a, y) = 10 γ > for any policy, choosing y at state a implies that the timestep is not -optimal. In other words, sample complexity for exploration is at least the number of times the agent visits a before the first update of Q̂(a, y).\nIn the Delayed Q-learning algorithm, Q̂(·, ·) are initialized to 1/(1− γ). Therefore, Q̂(a, y) could only be updated if max Q̂(c, ·) is updated (and becomes smaller than 1/(1− γ)). According to the algorithm, this can only happen if c is visited m = Ω ( 1 2 ) times.\nHowever, each time the agent visits a, there is less than 10 probability of transiting to c. 
Let t_0 = m/(10εC), where C = 3 ln(1/δ) + 1. δ is chosen such that C ≤ m. In the first 2t_0 timesteps, a will be visited t_0 times. By Chernoff's bound, with probability 1 − δ, state c will be visited less than m times. In that case, Q̂(a, y) will not be updated in the first 2t_0 timesteps. Therefore, with probability 1 − δ, the sample complexity of exploration is at least\nt_0 = Ω(1 / (ε^3 ln(1/δ))).\nWhen ln(1/δ) < ε^{−2}, it can be seen that C = 3 ln(1/δ) + 1 < 4ε^{−2} < m." } ]
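As a sanity check on the hard instance of Appendix D, the following is a minimal Python sketch (our own illustration, not part of the proof; the function name and parameter defaults are assumptions) of the key transition: from state a, action y reaches c only with probability 10ε, so the number of visits to c among t_0 visits to a concentrates well below m for small ε.

```python
import numpy as np

def count_visits_to_c(eps=0.01, t0=10000, seed=0):
    """Simulate t0 visits to state a under action y in the hard MDP:
    each visit transits to c with probability 10*eps (to b otherwise).
    Returns the number of transitions into c."""
    rng = np.random.default_rng(seed)
    return int(np.sum(rng.random(t0) < 10 * eps))

# Expected count is 10*eps*t0; Chernoff's bound keeps the realized count
# below m = Omega(1/eps^2) w.h.p. when t0 = m / (10*eps*C).
print(count_visits_to_c())
```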
2,020
null
SP:bb2885554d98533633a54e0a84ec5c08ba87db2d
[ "The paper describes a way to efficiently enforce physical constraints expressed by linear PDEs on the output of a neural network. The idea is to have, as a last layer of the network, a projection onto the constrained solution space, and to back-propagate through it. That projection layer is made efficient for high-dimensional outputs via the fast Fourier transform (FFT), exploiting a well-known numerical trick. Importantly, the proposed strategy is very general, and can indeed be used with any PDE constraint that is a linear combination of differential operators.", "This work develops a differentiable spectral projection layer to enforce spatial PDE constraints using spectral methods, to achieve the introduction of the physical constraints in the end-to-end network without damaging the intrinsic property of the network. Analysis of computational cost shows the proposed layer is cheaper than the convolutional layer. The experimental comparison demonstrates the superiority of the proposed method. In my viewpoint, the novelty of this paper is somewhat novel. " ]
Recent studies at the intersection of physics and deep learning have illustrated successes in the application of deep neural networks to partially or fully replace costly physics simulations. Enforcing physical constraints on solutions generated by neural networks remains a challenge, yet it is essential to the accuracy and trustworthiness of such model predictions. Many systems in the physical sciences are governed by Partial Differential Equations (PDEs). Enforcing these as hard constraints, we show, is inefficient in conventional frameworks due to the high dimensionality of the generated fields. To this end, we propose the use of a novel spectral projection layer for neural networks that efficiently enforces spatial PDE constraints using spectral methods, yet is fully differentiable, allowing for its use as a layer in neural networks that supports end-to-end training. We show that its computational cost is lower than that of a regular convolution layer. We apply it to an important class of physical systems – incompressible turbulent flows, where the divergence-free PDE constraint is required. We train a 3D Conditional Generative Adversarial Network (CGAN) for turbulent flow superresolution efficiently, whilst guaranteeing the spatial PDE constraint of zero divergence. Furthermore, our empirical results show that the model produces realistic flow fields with more accurate flow statistics when trained with hard constraints imposed via the proposed novel differentiable spectral projection layer, as compared to soft constrained and unconstrained counterparts.
[]
[]
[ { "heading": "1 INTRODUCTION", "text": "Convolutional Neural Network (CNN) based deep learning architectures have achieved huge success in many tasks across computer vision, but their use in the physical sciences have only recently been explored. Many parallels exist between physical science problems and those in computer vision. For instance, grid-based simulations generate a physical scalar or vector field which can been compared to multidimensional arrays in computer vision. However, unlike computer vision problems, physical fields are often constrained by PDEs that arise from the governing equations of the physical system. For example, the Poisson equation of the form∇2φ = f is often encountered in heat diffusion problems, whereas the divergence-free (also known as solenoidal) conditions in the form of ∇ · φ = 0 is fundamental to magnetic fields, as well as incompressible fluid velocity fields to ensure conservation of mass. For meaningful application of deep learning to a range of important physical problems it is essential to enforce such spatial PDE constraints to guarantee physical consistency and reliability of the model output for scientific applications. Yet, general means of enforcing these constraints do not exist and the existing methods do not scale well with high dimensional, high resolution outputs.\nIn this paper, we address this issue by proposing a novel differentiable PDE layer (PDEL) that efficiently enforces spatial PDE constraints for neural networks, at costs on par with a single CNN layer. We use spectral methods, which leverages the highly efficient Fast Fourier Transform (FFT) algorithm for enforcing such constraints. Using this formulation, we are able to exploit the structures of the spectral matrices corresponding to these differential operators that renders the entire layer O(n log n) for processing a 3 dimensional field of size n. The method is general for enforcing arbitrary linear combinations of differential operators on these fields, which encompasses physical constraints from a broad range of important scientific and engineering systems. We apply this hard constraining layer to the problem of turbulence superresolution, where we show that training with the\nhard constraining layer in-the-loop not only guarantees that the imposed constraint is strictly satisfied, but also generates solutions that are more accurate measured via a variety of fluid flow metrics.\nIn summary, the main contributions of this paper are:\n• We propose the highly efficient differentiable spatial linear PDE layer (PDEL), which strictly enforces linear spatial PDEs constraints.\n• We apply the PDE layer towards the superresolution task for turbulent flows, showing that training with hard constraints in-the-loop results in solutions that not only strictly satisfy the imposed constraint but also produce flow fields with more accurate fluid flow statistics." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "" }, { "heading": "2.1 CONSTRAINTS IN NEURAL NETWORKS", "text": "Many studies in machine learning have considered imposing some form of constraints for their respective applications. ? proposed an approach for constraining the prediction of a discriminative predictor and showed that a Gaussian Process can be forced to satisfy linear and quadratic constraints. ? proposed training a kernalized latent variable model that imposes equality and inequality constraints. ? 
proposed a constrained CNN, which phrases the training objective as a biconvex optimization for linear models, which is then relaxed to nonlinear deep networks for any set of linear constraints on the output space. ? proposed an alternative approach by randomly subsampling a set of constraints at each optimization step and projecting the gradients onto the feasible solution space. OptNet (?) solves a generic quadratic programming problem differentiably within the neural network, but its cubic complexity does not handle high-dimensional output. ? proposed parameterizing the feasible solution space for imposing inequality constraints. However, methods to impose physical constraints in machine learning and deep learning models remain largely unexplored." }, { "heading": "2.2 APPLICATIONS OF MACHINE LEARNING FOR PDE AND TURBULENCE", "text": "More recently, there has been some work in the literature on applying machine learning to PDE and physical problems. ??? developed physics informed deep learning frameworks for assimilating observational data. In the context of applying machine learning methodologies to turbulence problems, earlier works using data-driven modeling approaches have used optimization and Bayesian inference approaches to calibrate existing turbulence models (?????). With the advancement of efficient and accurate modeling tools in machine learning, recent studies have looked at data-driven approaches for turbulence modeling (?), including the direct use of random forests for modeling Reynolds-Averaged Navier Stokes (RANS) model errors (?), the use of multilayer perceptrons for modeling the Reynolds stress anisotropy tensor from simulation data (?), and the use of random forests to predict mean velocities of turbulent flow fields. ? use a random forest to compute particle location and ? use a CNN to approximate part of a numerical PDE scheme. A recent application of deep learning for generating realistic fluid flow fields is TempoGAN (?), which uses a specialized discriminator network for temporal coherence. None of these methods, however, enforce constraints that are necessary for physically consistent fluid flow fields. ? addressed this with a customized loss function for the divergence-free constraint of fluid flow, but since it is a loss-based soft constraint, the conditions are ultimately not exactly satisfied." }, { "heading": "3 PROBLEM STATEMENT", "text": "The main focus of this paper is to introduce a novel and efficient method for imposing spatial linear PDE constraints on the outputs of Convolutional Neural Networks (CNNs). This is discussed in the context of underdetermined systems, since solutions do not exist for overdetermined systems, while solutions for determined systems do not fit in the context of constraining the outputs. More specifically, given the output of the network to be a discretization of a 3D vector field f : R^3 ↦ R^3 and a linear spatial PDE operator A((∂/∂x_j)^0, (∂/∂x_j)^1, · · ·) that maps vector fields to scalar fields, Af : R^3 ↦ R, we seek a means of efficiently imposing the spatial linear PDE constraints within CNNs, i.e.,\nAf = b (1)\nNote that this form encompasses a wide range of physically relevant constraints. In particular, all spatial PDE constraints composed of divergence, curl, Laplacian and other higher order partial differential terms in linear combination may be expressed in this form. 
Depending on the domain of application, this includes mass conservation for incompressible fluid flows, the heat equation, the wave equation, Laplace's equation, the Helmholtz equation, the Klein-Gordon equation, and Poisson's equation. For the important constraint of mass conservation in incompressible flows, we investigate the divergence-free (solenoidal) constraint of:\n∇ · f = ∑_j ∂f_j/∂x_j = 0 (2)" }, { "heading": "4 METHODS", "text": "Before presenting our proposed method for enforcing the solenoidal condition on CNN outputs, we present an overview of two commonly utilized strategies for enforcing general linear constraints, which we will compare and benchmark against in the experiments section (Sec. 5).\nWe first discuss enforcing linear constraints on the outputs of the neural network, where we have a neural network that learns the function mapping f : R^t ↦ R^m, where the function f(x; θ) is parameterized by learnable parameters θ, and is subject to the linear constraint Af(x; θ) = b, where A ∈ R^{n×m}, b ∈ R^n. For this to be an underconstrained system, we have n < m." }, { "heading": "4.1 GENERALIZED LINEAR CONSTRAINTS", "text": "Two forms of constraints are possible for explicitly enforcing a certain set of constraints on neural network outputs: soft constraints and hard constraints.\nSoft constraints are easy to implement, by adding a differentiable residual loss that penalizes the network during training for violating the explicit constraints. For simplicity, let y := f(x; θ). In the conventional unconstrained case, assume the neural network is trained under the differentiable loss function L(f(x; θ)); in the constrained case, the loss function can be augmented by an additional residual loss term defined by:\nL_c(θ) = L(θ) + α (Ay − b)^T (Ay − b) (3)\nwhere α is a hyper-parameter weighing the two loss functions that can be difficult to determine and varies between applications. Although easy to implement, soft constraints provide no guarantees on the solutions satisfying the imposed constraint.\nHard linear constraints can be enforced by posing the problem as a constrained optimization problem for seeking the closest point in the solution space subject to the constraints, which can be solved by satisfying the Karush-Kuhn-Tucker (KKT) condition. The result of the projection step can be written as the stationary point of the Lagrangian:\nmin_ŷ max_λ L(ŷ, λ) (4)\nwhere we have the Lagrangian as:\nL(ŷ, λ; y) = (1/2)(y − ŷ)^T (y − ŷ) + λ^T (Aŷ − b) (5)\n∂L/∂ŷ = ŷ − y + A^T λ (6)\nThe KKT condition leads to the following linear system, the solution of which involves solving a linear system of dimensions (m + n) × (m + n). Given that the linear system is symmetric and invertible, the solution can be sought by inverting the system:\n[I A^T; A 0] [ŷ; λ] = [y; b] ⇒ [ŷ; λ] = [I A^T; A 0]^{−1} [y; b] (7)\nWhile this approach is general for enforcing arbitrary linear constraints on arbitrary network outputs, it is difficult to scale to higher dimensions, and particularly difficult for 2-dimensional and 3-dimensional outputs, by direct matrix inversion followed by matrix multiplication." }, { "heading": "4.2 SPECTRAL METHODS", "text": "First, we introduce and review the spectral methods (?) for discretizing the spatial PDE operators. Spectral methods are a class of numerical methods that compute the spatial partial derivatives of a field based on the spectral decomposition of the signal. 
By decomposing the original signal into a linear combination with respect to trigonometric basis functions of varying wavenumbers (or frequencies), the spatial derivatives with respect to the trigonometric basis functions can be easily and efficiently computed. The Fast Fourier Transform (FFT) is a well-known algorithm for efficiently computing the Discrete Fourier Transform (DFT) of uniform discrete signals. The multidimensional FFT and inverse FFT respectively compute the following:\nF(k) = ∑_{n=0}^{N−1} f(n) e^{−i2πk·(n/N)};  f(n) = (1/(N_1 N_2 N_3)) ∑_{k=0}^{N−1} F(k) e^{i2πk·(n/N)} (8)\nwhere F(k) = FFT(f(n)), f(n) = IFFT(F(k)), with spatial indices n = (n_1, n_2, n_3), n_j ∈ {0, 1, · · · , N_j − 1} and spectral indices k = (k_1, k_2, k_3), k_j ∈ {0, 1, · · · , N_j − 1}. The spatial derivative with respect to x_j can be computed by:\n∂f(n)/∂x_j = IFFT(i k_j F(k)) (9)\nIn matrix form, for the t-th component of a 3-dimensional vector field, F_t, taking its flattened vector form, and taking the flattened vector form of the wavenumber k_j corresponding to the dimension x_j, the spatial derivative with respect to x_j can be computed using matrix multiplication:\n(∂/∂x_j) F_t = diag(i k_j) F_t (10)\nwhere diag() converts a vector into a corresponding diagonal matrix. In general, an arbitrary linear combination of spatial derivatives of varying orders can be computed using a single diagonal matrix multiplication:\n(∑_j ∑_r c_{jr} (∂/∂x_j)^r) F_t = diag(∑_j ∑_r c_{jr} (i k_j)^r) F_t := A_t F_t (11)\nwhere A_t is a diagonal matrix for the spatial derivatives corresponding to the t-th component of the vector field, that is a polynomial of i k_j, and A = [A_1, A_2, A_3]." }, { "heading": "4.3 SPECTRAL PROJECTION LAYER", "text": "For brevity, we present our main results for computing the spectral projection operator that efficiently enforces spatial linear PDE constraints using spectral methods. We refer readers to Eqns (14 - 27) of Appendix A for a detailed derivation of these results. In spectral space, the projection of the original vector field F into the solution space, F̂, can be computed by:\nF̂ = PF + QB (12)\nwhere F = FFT(f), B = FFT(b), and\nP = I − (1/∑_{j=1}^{3} A_j^2) [A_1^2 A_1A_2 A_1A_3; A_1A_2 A_2^2 A_2A_3; A_1A_3 A_2A_3 A_3^2];  Q = −(1/∑_{j=1}^{3} A_j^2) [A_1; A_2; A_3] (13)" }, { "heading": "5 EXPERIMENTS AND RESULTS", "text": "" }, { "heading": "5.1 COMPUTATIONAL COMPLEXITY AND COST", "text": "We first show that although the classic Lagrangian-based hard constraining method in Eqn. 7 is general and able to enforce hard linear constraints, solving it by direct inversion leads to poor computational efficiency, especially with high-resolution 3-dimensional data outputs from CNNs.\nGiven Eqn. 7 for enforcing hard linear constraints using Lagrange multipliers, we estimate the computational complexity of enforcing solenoidal conditions as follows. Without loss of generality, assume that the vector field on which we enforce the solenoidal constraints is 3-dimensional of resolution N in each spatial dimension, with a total of n = N^3 nodes. The overall degrees of freedom in the system is 3n, and enforcing the solenoidal constraint for each voxel results in n linear constraints, hence the resulting linear system in Eqn. 7 is of dimensions 4n × 4n. Though the matrix inversion is shared, hence reusable by caching, each projection involves a matrix multiplication of O((3n)^2) ∼ O(9n^2) operations. 
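To make the cost of the Lagrangian route concrete, below is a minimal NumPy sketch (our own illustration; function and variable names are assumptions) of the dense KKT projection of Eqn. 7:

```python
import numpy as np

def kkt_projection(y, A, b):
    """Project y onto {y_hat : A @ y_hat = b} by solving the KKT system of Eqn. 7."""
    n, m = A.shape  # n constraints, m output dimensions (n < m)
    K = np.block([[np.eye(m), A.T],
                  [A, np.zeros((n, n))]])   # (m+n) x (m+n) KKT matrix
    sol = np.linalg.solve(K, np.concatenate([y, b]))
    return sol[:m]                          # y_hat; sol[m:] holds the multipliers

# Even with the factorization of K cached, applying it to each new y costs a
# dense (m+n) x (m+n) matrix-vector product.
```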
In comparison, the spectral projection method only involves element-wise operations, resulting in an overall complexity of O(n) operations for enforcing constraints, and O(n log(n)) for the FFT and IFFT operations. Results of an empirical analysis of computational time and memory usage are shown in Fig. 2." }, { "heading": "5.2 TURBULENCE SUPERRESOLUTION WITH CONDITIONAL GAN", "text": "Conditional Generative Adversarial Networks While Generative Adversarial Networks (GANs) have been effective at generating 2D (???) and 3D (?) images, the unconditional generative modeling scenario of generating entire fields from random latent vectors is hardly useful in scientific settings. A more desirable model is one that is conditioned upon a set of inputs from either partial observations, or low-resolution simulations due to computational limitations (?). Conditional GANs have been widely used in various image-to-image translation problems (?????), and GauGAN (?), a recent extension of conditional GANs, utilizes a novel spatially-adaptive normalization layer that better preserves semantic information in the conditional input and produces improved texture outputs.\nIn this paper we use the GauGAN architecture for the task of superresolution of turbulent fluid flow fields (see Fig. 1).\nProblem setup The main target application of this study is the super-resolution of turbulent flow fields. Fully resolving turbulence requires direct numerical simulation (DNS) that can resolve the smallest scales of the flow (the Kolmogorov scale), which is prohibitively expensive. Therefore, the motivation of this study is to produce flow fields and flow statistics comparable to DNS at the cost of a low-resolution proxy, whilst strictly enforcing PDE constraints. To this end, we leverage high-resolution DNS data to train a deep neural network to learn the mapping between the low-resolution flow and its high-resolution counterpart. We compare several algorithms for the task: conventional trilinear interpolation, which is not learning based, and various deep learning methods leveraging the GauGAN architecture for conditional generative modeling. We benchmark our PDEL in-the-loop hard constraining method against unconstrained training (denoted by "none" in figures and tables) and soft constrained training (denoted by "soft"), with and without the hard constraining spectral projection at test time. The goal is to satisfy the imposed constraint and to evaluate the accuracy of the predicted flow fields using key domain-specific metrics.\nDataset Description We use the Forced Isotropic Turbulence dataset from the Johns Hopkins Turbulence Database for this experiment (?). The dataset consists of DNS at 1024^3 resolution performed by solving the Navier-Stokes equations using the pseudo-spectral method. The dataset consists of 5028 frames (time steps) of data, each with 3 velocity components. For this experiment, we use all simulation frames starting from the 16th time step, since the initial frames consist of underdeveloped flow. Furthermore, since 1024^3 resolution is practically impossible to fit into modern GPUs, we use a subsampled version of the data at 128^3 as high-resolution targets, and further subsampled 32^3 fields as low-resolution inputs. Subsampling for the high-resolution field is performed by uniformly sampling the original flow field at intervals of 8 in each dimension. 
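As an illustration of this preprocessing (our own sketch; array names and shapes are assumptions, and the exact stride for the 32^3 inputs is one plausible reading since it is not stated), the subsampling reduces to strided slicing:

```python
import numpy as np

# In practice u_dns has shape [3, 1024, 1024, 1024]; a small array keeps the demo cheap.
u_dns = np.random.rand(3, 128, 128, 128).astype(np.float32)

u_hr = u_dns[:, ::8, ::8, ::8]   # stride-8 uniform subsampling (1024^3 -> 128^3 in the paper)
u_lr = u_hr[:, ::4, ::4, ::4]    # further subsampling to the low-resolution input (128^3 -> 32^3)
print(u_hr.shape, u_lr.shape)
```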
For data splits, we take random subsets of 70/10/20% of the original data as training, validation and test sets.\nEvaluation metrics Since the superresolution task conditioned on the low-resolution input is mathematically underdetermined, it is not possible to recover the exact velocity field. Further, given the chaotic nature of turbulence, a single low-resolution flow field corresponds to various different realizations of high-resolution flows. Therefore, we refrain from directly comparing the norm of the difference in velocity fields. Instead, we compare the distributions of various key flow statistics, as outlined by ?, which are more informative from a turbulence modeling standpoint. In Tab. 1, we report the Kolmogorov-Smirnov (KS) statistic between the ground truth test set distributions and the distributions generated by various models conditioning on the low-resolution test inputs. We also report the mean difference between the distributions, i.e., the difference between the mean of the modelled and ground truth distributions in units of ground truth distribution standard deviation.\nThe flow statistics in Tab. 1 are defined below. For simplicity, we denote the different velocity components using Einstein notation, and use angle brackets ⟨·⟩ to denote spatial averaging.\n• Total kinetic energy, E_tot = (1/2) ⟨u_i u_i⟩\n• Dissipation, ε = 2ν ⟨σ_ij σ_ij⟩, where σ_ij := (1/2)(∂u_i/∂x_j + ∂u_j/∂x_i), and ν = 0.000185 is a constant for fluid viscosity.\n• Large eddy turnover time: T_L = L/u′, where L = (π/(2u′^2)) ∫ E(k)/k dk and u′ = √((2/3) E_tot)\nResults The main quantitative results for this experiment are presented in Tab. 1, whereas visualizations of the distributions of various key flow statistics are presented in Fig. 3. We observe from empirical evaluations that training with the hard constraining layer in-the-loop effectively imposes the solenoidal constraints (zero residue), and enforcing the hard constraint at training time achieves more accurate flow field distributions as measured by various key flow statistics. We note that although this method is not the most accurate for the dissipation statistic, presumably because of discrepancies in the high wavenumber regime (where dissipation occurs), the overall mean statistics and individual statistics for the other metrics are superior compared to all the other methods." }, { "heading": "6 CONCLUSIONS AND FUTURE WORK", "text": "Enforcing hard physical constraints on solutions generated using neural networks is essential for their application to important scientific problems. In this paper, we propose a novel spectral projection layer for neural networks that efficiently enforces spatial PDE constraints using spectral methods, yet is fully differentiable, allowing for its use as a layer in neural networks that supports end-to-end training. Further, we show that its computational cost is lower than that of a regular convolution layer. We demonstrate its use in an important class of physics problems – incompressible turbulent flows, where the divergence-free PDE constraint is required. We are able to train a 3D Conditional Generative Adversarial Network (CGAN) for turbulent flow superresolution efficiently, whilst guaranteeing the spatial PDE constraint of zero divergence. 
Further, our results show that the model produces more accurate flow statistics when trained with hard constraints imposed via the proposed novel differentiable spectral projection layer, as compared to soft constrained and unconstrained counterparts.\nSome key limitations of this work are: (i) the method is applicable in its current form only to flows with periodic boundary conditions; (ii) we only develop a method for linear spatial constraints; and (iii) we only consider statistically steady flows. In future work we will address all the above limitations to extend our work to more general sets of nonlinear unsteady constraints with arbitrary boundary conditions.\nAPPENDIX" }, { "heading": "A MATHEMATICAL DERIVATION FOR SPECTRAL PROJECTION", "text": "The solution to the Lagrange multiplier method for enforcing the solenoidal conditions involves inverting the left-hand-side matrix in Eqn. 7. Since I, A, 0 are block matrices, the inverse can be represented by\n[I A^T; A 0]^{−1} = [I − A^T (AA^T)^{−1} A, A^T (AA^T)^{−1}; (AA^T)^{−1} A, −(AA^T)^{−1}] (14)\nHence the projected vector in spectral space can be computed as:\nF̂ = PF + QB (15)\nThe second term in the equation above drops out since b = 0 for solenoidal constraints. More specifically for spectral methods, the matrix A can be represented as three diagonal matrices for the wavenumbers in the three dimensions multiplied by the imaginary number i:\nA = [A_1 A_2 A_3], where each block A_j is diagonal (16)-(17)\nThe only matrix inverse involved is that of AA^T, whose value can be computed by block matrix multiplication:\nAA^T = A_1^2 + A_2^2 + A_3^2 (18)\nGiven that A_1, A_2, A_3 are diagonal matrices (eliminating the terms regarding the [0, 0, 0] mode), its inverse can be computed by directly inverting the diagonal terms:\n(AA^T)^{−1} = 1/(A_1^2 + A_2^2 + A_3^2) (19)\nHence the linear projection matrices can be written as:\nI − A^T (AA^T)^{−1} A = I − (1/(A_1^2 + A_2^2 + A_3^2)) [A_1^2 A_1A_2 A_1A_3; A_1A_2 A_2^2 A_2A_3; A_1A_3 A_2A_3 A_3^2] (20)\nA^T (AA^T)^{−1} = −(1/∑_{j=1}^{3} A_j^2) [A_1; A_2; A_3] (21)\nThis recovers the same solution as in Eqn. 13. More specifically for the divergence-free condition, we have:\nA_j = diag(−i k_j) (22)  B = 0 (23)\nHence the spectral projection step can be further simplified as:\nF̂ = F − ((k · F)/(k · k)) k (24)\nIt is easy to show that the result is divergence-free, since:\n−i k · F̂ = −i k · F + i k · F = 0 (25)\nIt is also easy to show that the projection is orthogonal to the solution space, since the dot product between F̂ − F and F̂ is zero:\n(F̂ − F) · F̂ = −(((k · F)/(k · k)) k) · (F − ((k · F)/(k · k)) k) (26)\n= 0 (27)" }, { "heading": "B MODEL AND TRAINING DETAILS", "text": "We use the GauGAN architecture (with schematics as shown in Fig. 1) for the conditional flow field generation task. The abbreviated names for the various modules are given in Tab. 2. Our model differs from the original GauGAN model in two distinct aspects. First, our architecture utilizes 3-dimensional convolutions instead of the 2-dimensional counterparts in the original GauGAN architecture. Second, for our hard constrained case, we append the spectral projection layer to the end of the architecture for enforcing hard constraints.\nFor training the model, we use a multiresolution discriminator loss as in ? across 3 discriminators. We train the model with a batch size of 18 (across 6 Volta V100 GPUs) with the Adam optimizer using a learning rate of 2 × 10^{−4}. The soft constrained model uses a residue penalty factor of 0.01." }, { "heading": "C ADDITIONAL VISUALIZATION", "text": "" } ]
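As a companion to the derivation in Appendix A, the entire layer's forward pass reduces to Eqn. 24; below is a minimal NumPy sketch (our own illustration; function and variable names are assumptions, and a deep learning framework's FFT would be used in practice to keep the layer differentiable):

```python
import numpy as np

def project_divergence_free(u):
    """Apply Eqn. 24 on a periodic cube: F_hat = F - k (k . F) / (k . k).
    u: real vector field of shape [3, N, N, N]; returns its solenoidal part."""
    N = u.shape[-1]
    k1 = np.fft.fftfreq(N) * N                             # integer wavenumbers
    k = np.stack(np.meshgrid(k1, k1, k1, indexing="ij"))   # shape [3, N, N, N]
    F = np.fft.fftn(u, axes=(1, 2, 3))                     # component-wise FFT
    k_sq = (k * k).sum(axis=0)
    k_sq[0, 0, 0] = 1.0                                    # leave the zero mode untouched
    F_hat = F - k * ((k * F).sum(axis=0) / k_sq)           # remove the curl-free part
    return np.real(np.fft.ifftn(F_hat, axes=(1, 2, 3)))

# Sanity check: the spectral divergence of the projected field vanishes,
# i.e. sum_j IFFT(i * k_j * FFT(v_j)) ~ 0 for v = project_divergence_free(u).
v = project_divergence_free(np.random.rand(3, 16, 16, 16))
```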
2,019
ENFORCING PHYSICAL CONSTRAINTS IN NEURAL NETWORKS THROUGH DIFFERENTIABLE PDE LAYER
SP:5a1d5dd1a128cc32d3e9c71f309cb7031fcffcdb
[ "This paper studies backdoor attacks under federated learning setting. To inject a certain backdoor pattern, existing work generate poisoning samples by blending the same pattern with different input samples. Even for federated learning where the adversary can control multiple parties, such as [1], all parties still use the same global backdoor pattern to generate poisoning samples locally. On the contrary, in this work, they decompose the global pattern into several small local patterns, and each adversarial party only uses a local pattern to generate poisoning samples. In their evaluation, they show that the backdoor attacks generated in this way are more effective, resilient to benign model parameter updates, and also survive better against existing defense algorithms against attacks in federated learning settings.", "The authors introduce the idea of distributed backdoor attacks in the FL framework, in which the dishonest participants in FL add local triggers to their training data to influence the global model to classify triggered images in a desired way. They show empirically that the learned models then are more likely to be successfully forced to misclassified images in which all the local triggers are present at test time, than are models learned using centralized backdoor attacks, where all attackers use the same trigger pattern (one of the same size as the concatenation of the local triggers, to be fair in the comparison). They then demonstrate that because the local triggers cause smaller corruptions in the model coefficients, these distributed attacks survive robust FL training algorithms (namely FoolsGold, and a recent robust regression based method) more often than centralized attacks. Similar experiments are conducted on the Loan text dataset, using appropriate analogs of local triggers, with similar results." ]
Backdoor attacks aim to manipulate a subset of training data by injecting adversarial triggers such that machine learning models trained on the tampered dataset will make arbitrary (targeted) incorrect predictions on the test set with the same trigger embedded. While federated learning (FL) is capable of aggregating information provided by different parties for training a better model, its distributed learning methodology and inherently heterogeneous data distribution across parties may bring new vulnerabilities. In addition to recent centralized backdoor attacks on FL where each party embeds the same global trigger during training, we propose the distributed backdoor attack (DBA) — a novel threat assessment framework developed by fully exploiting the distributed nature of FL. DBA decomposes a global trigger pattern into separate local patterns and embeds them into the training sets of different adversarial parties respectively. Compared to standard centralized backdoors, we show that DBA is substantially more persistent and stealthy against FL on diverse datasets such as finance and image data. We conduct extensive experiments to show that the attack success rate of DBA is significantly higher than centralized backdoors under different settings. Moreover, we find that distributed attacks are indeed more insidious, as DBA can evade two state-of-the-art robust FL algorithms against centralized backdoors. We also provide explanations for the effectiveness of DBA via feature visual interpretation and feature importance ranking. To further explore the properties of DBA, we test the attack performance by varying different trigger factors, including local trigger variations (size, gap, and location), the scaling factor in FL, data distribution, and poison ratio and interval. Our proposed DBA and thorough evaluation results shed light on characterizing the robustness of FL.
[ { "affiliations": [], "name": "BACKDOOR ATTACKS" }, { "affiliations": [], "name": "FEDERATED LEARNING" }, { "affiliations": [], "name": "Chulin Xie" }, { "affiliations": [], "name": "Keli Huang" }, { "affiliations": [], "name": "Pin-Yu Chen" }, { "affiliations": [], "name": "Bo Li" } ]
[ { "authors": [ "Eugene Bagdasaryan", "Andreas Veit", "Yiqing Hua", "Deborah Estrin", "Vitaly Shmatikov" ], "title": "How to backdoor federated learning", "venue": "arXiv preprint arXiv:1807.00459,", "year": 2018 }, { "authors": [ "Moran Baruch", "Gilad Baruch", "Yoav Goldberg" ], "title": "A little is enough: Circumventing defenses for distributed learning", "venue": "arXiv preprint arXiv:1902.06156,", "year": 2019 }, { "authors": [ "Arjun Nitin Bhagoji", "Supriyo Chakraborty", "Prateek Mittal", "Seraphin Calo" ], "title": "Analyzing federated learning through an adversarial lens", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Peva Blanchard", "Rachid Guerraoui", "Julien Stainer" ], "title": "Machine learning with adversaries: Byzantine tolerant gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Xinyun Chen", "Chang Liu", "Bo Li", "Kimberly Lu", "Dawn Song" ], "title": "Targeted backdoor attacks on deep learning systems using data poisoning", "venue": "arXiv preprint arXiv:1712.05526,", "year": 2017 }, { "authors": [ "Nicholas Frosst", "Geoffrey Hinton" ], "title": "Distilling a neural network into a soft decision tree", "venue": "arXiv preprint arXiv:1711.09784,", "year": 2017 }, { "authors": [ "Clement Fung", "Chris JM Yoon", "Ivan Beschastnikh" ], "title": "Mitigating sybils in federated learning poisoning", "venue": "arXiv preprint arXiv:1808.04866,", "year": 2018 }, { "authors": [ "Tianyu Gu", "Kang Liu", "Brendan Dolan-Gavitt", "Siddharth Garg" ], "title": "Badnets: Evaluating backdooring attacks on deep neural networks", "venue": "IEEE Access,", "year": 2019 }, { "authors": [ "Rachid Guerraoui", "Sébastien Rouault" ], "title": "The hidden vulnerability of distributed learning in byzantium", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Andrew Hard", "Kanishka Rao", "Rajiv Mathews", "Françoise Beaufays", "Sean Augenstein", "Hubert Eichner", "Chloé Kiddon", "Daniel Ramage" ], "title": "Federated learning for mobile keyboard prediction", "venue": "arXiv preprint arXiv:1811.03604,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Wendy Kan" ], "title": "Lending club loan data, Mar 2019", "venue": "URL https://www.kaggle.com/ wendykan/lending-club-loan-data", "year": 2019 }, { "authors": [ "Jakub Konečnỳ", "H Brendan McMahan", "Felix X Yu", "Peter Richtárik", "Ananda Theertha Suresh", "Dave Bacon" ], "title": "Federated learning: Strategies for improving communication efficiency", "venue": "arXiv preprint arXiv:1610.05492,", "year": 2016 }, { "authors": [ "Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Aguera y Arcas" ], "title": "Communication-Efficient Learning of Deep Networks from Decentralized Data", "venue": "In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Thomas Minka" ], "title": "Estimating a dirichlet distribution", "venue": null, "year": 2000 }, { "authors": [ "Krishna Pillutla", "Sham M. 
Kakade", "Zaid Harchaoui" ], "title": "Robust Aggregation for Federated Learning", "venue": "arXiv preprint,", "year": 2019 }, { "authors": [ "Ramprasaath R Selvaraju", "Michael Cogswell", "Abhishek Das", "Ramakrishna Vedantam", "Devi Parikh", "Dhruv Batra" ], "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Claude E Shannon" ], "title": "Communication theory of secrecy systems", "venue": "Bell system technical journal,", "year": 1949 }, { "authors": [ "Virginia Smith", "Chao-Kai Chiang", "Maziar Sanjabi", "Ameet S Talwalkar" ], "title": "Federated multi-task learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Qiang Yang", "Yang Liu", "Tianjian Chen", "Yongxin Tong" ], "title": "Federated machine learning: Concept and applications", "venue": "ACM Transactions on Intelligent Systems and Technology (TIST),", "year": 2019 }, { "authors": [ "Timothy Yang", "Galen Andrew", "Hubert Eichner", "Haicheng Sun", "Wei Li", "Nicholas Kong", "Daniel Ramage", "Françoise Beaufays" ], "title": "Applied federated learning: Improving google keyboard query suggestions", "venue": "arXiv preprint arXiv:1812.02903,", "year": 2018 }, { "authors": [ "Yue Zhao", "Meng Li", "Liangzhen Lai", "Naveen Suda", "Damon Civin", "Vikas Chandra" ], "title": "Federated learning with non-iid data", "venue": "arXiv preprint arXiv:1806.00582,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Federated learning (FL) has been recently proposed to address the problems for training machine learning models without direct access to diverse training data, especially for privacy-sensitive tasks (Smith et al., 2017; McMahan et al., 2017; Zhao et al., 2018). Utilizing local training data of participants (i.e., parties), FL helps train a shared global model with improved performance. There have been prominent applications and ever-growing trends in deploying FL in practice, such as loan status prediction, health situation assessment (e.g. potential cancer risk assessment), and next-word prediction while typing (Hard et al., 2018; Yang et al., 2018; 2019).\nAlthough FL is capable of aggregating dispersed (and often restricted) information provided by different parties to train a better model, its distributed learning methodology as well as inherently heterogeneous (i.e., non-i.i.d.) data distribution across different parties may unintentionally provide a venue to new attacks. In particular, the fact of limiting access to individual party’s data due to privacy concerns or regulation constraints may facilitate backdoor attacks on the shared model trained with FL. Backdoor attack is a type of data poisoning attacks that aim to manipulate a subset of training data such that machine learning models trained on the tampered dataset will be vulnerable to the test set with similar trigger embedded (Gu et al., 2019).\nBackdoor attacks on FL have been recently studied in (Bagdasaryan et al., 2018; Bhagoji et al., 2019). However, current attacks do not fully exploit the distributed learning methodology of FL, as\nthey embed the same global trigger pattern to all adversarial parties. We call such attacking scheme centralized backdoor attack. Leveraging the power of FL in aggregating dispersed information from local parties to train a shared model, in this paper we propose distributed backdoor attack (DBA) against FL. Given the same global trigger pattern as the centralized attack, DBA decomposes it into local patterns and embed them to different adversarial parties respectively. A schematic comparison between the centralized and distributed backdoor attacks is illustrated in Fig.1.\nThrough extensive experiments on several financial and image datasets and in-depth analysis, we summarize our main contributions and findings as follows. •We propose a novel distributed backdoor attack strategy DBA on FL and show that DBA is more persistent and effective than centralized backdoor attack. Based on extensive experiments, we report a prominent phenomenon that although each adversarial party is only implanted with a local trigger pattern via DBA, their assembled pattern (i.e., global trigger) attains significantly better attack performance on the global model compared with the centralized attack. The results are consistent across datasets and under different attacking scenarios such as one-time (single-shot) and continuous (multiple-shot) poisoning settings. To the best of our knowledge, this paper is the first work studying distributed backdoor attacks. • When evaluating the robustness of two recent robust FL methods against centralized backdoor attack (Fung et al., 2018; Pillutla et al., 2019), we find that DBA is more effective and stealthy, as its local trigger pattern is more insidious and hence easier to bypass the robust aggregation rules. 
•We provide in-depth explanations for the effectiveness of DBA from different perspectives, including feature visual interpretation and feature importance ranking. •We perform comprehensive analysis and ablation studies on several trigger factors in DBA, including the size, gap, and location of local triggers, scaling effect in FL, poisoning interval, data poisoning ratio, and data distribution." }, { "heading": "2 DISTRIBUTED BACKDOOR ATTACK AGAINST FEDERATED LEARNING", "text": "" }, { "heading": "2.1 GENERAL FRAMEWORK", "text": "The training objective of FL can be cast as a finite-sum optimization: minw∈Rd [F (w) := 1 N ∑N i=1 fi(w)]. There are N parties individually processing N local models, each of whom trains with the local objective fi : Rd 7→ R based on a private datasetDi = {{xij , yij} ai j=1}, where ai = |Di| and {xij , yij} represents each data sample and its corresponding label. In supervised FL setting, each local function fi is computed as fi(wi) = l({xij , yij}j∈Di , wi) where l stands for a loss of prediction using the local parameters wi. The goal of FL is to obtain a global model which can generalize well on test data Dtest after aggregating over the distributed training results from N parties.\nSpecifically, at round t, the central server sends the current shared model Gt to n ∈ [N ] selected parties, where [N ] denotes the integer set {1, 2, . . . , N}. The selected party i locally computes the function fi by running an optimization algorithm such as stochastic gradient descent (SGD) for E\nlocal epochs with its own dataset Di and learning rate lr to obtain a new local model Lt+1i . The local party then sends model update Lt+1i −Gt back to the central server, who will averages over all updates with its own learning rate η to generate a new global model Gt+1:\nGt+1 = Gt + η\nn n∑ i=1 (Lt+1i −G t) (1)\nThis aggregation process will be iterated until FL finds the final global model. Unless specified otherwise, we use Gt (Lti) to denote the model parameters of the global (local) model at round t.\nAttacker ability. Based on the Kerckhoffs’s theory (Shannon, 1949), we consider the strong attacker here who has full control of their local training process, such as backdoor data injection and updating local training hyperparameters including E and lr. This scenario is quite practical since each local dataset is usually owned by one of the local parties. However, attackers do not have the ability to influence the privilege of central server such as changing aggregation rules, nor tampering the training process and model updates of other parties.\nObjective of backdoor attack. Backdoor attack is designed to mislead the trained model to predict a target label τ on any input data that has an attacker-chosen pattern (i.e., a trigger) embedded. Instead of preventing the convergence in accuracy as Byzantine attacks (Blanchard et al., 2017), the purpose of backdoor attacks in FL is to manipulate local models and simultaneously fit the main task and backdoor task, so that the global model would behave normally on untampered data samples while achieving high attack success rate on backdoored data samples. The adversarial objective1 for attacker i in round t with local datatset Di and target label τ is:\nw∗i = argmax wi\n( ∑\nj∈Sipoi\nP [Gt+1(R(xij , φ)) = τ ] + ∑\nj∈Si cln\nP [Gt+1(xij) = y i j ]). 
(2)\nHere, the poisoned dataset S^i_{poi} and clean dataset S^i_{cln} satisfy S^i_{poi} ∩ S^i_{cln} = ∅ and S^i_{poi} ∪ S^i_{cln} = D_i.\nThe function R transforms clean data of any class into backdoored data that have an attacker-chosen trigger pattern, using a set of parameters φ. For example, for image data, φ is factored into the trigger location TL, trigger size TS and trigger gap TG (φ = {TS, TG, TL}), which are shown in Fig.2. The attacker can design his own trigger pattern and choose an optimal poison ratio r to obtain a better model parameter w*_i, with which G^{t+1} can both assign the highest probability to the target label τ for backdoored data R(x^i_j, φ) and the ground-truth label y^i_{j′} for benign data x^i_{j′}." }, { "heading": "2.2 DISTRIBUTED BACKDOOR ATTACK (DBA)", "text": "We again use Fig.1 to illustrate our proposed DBA in detail. Recall that the current centralized attack embeds the same global trigger for all local attackers² (Bagdasaryan et al., 2018). For example, the attacker in Fig.1.(a) embeds the training data with the selected patterns highlighted by 4 colors, which altogether constitute a complete global pattern as the backdoor trigger.\nIn our DBA, as illustrated in Fig.1.(b), all attackers only use parts of the global trigger to poison their local models, while the ultimate adversarial goal is still the same as the centralized attack — using the global trigger to attack the shared model. For example, the attacker with the orange sign poisons a subset of his training data using only the trigger pattern located at the orange area. A similar attacking methodology applies to the green, yellow and blue signs. We define each DBA attacker's trigger as a local trigger and the combined whole trigger as the global trigger. For fair comparison, we keep a similar amount of total injected triggers (e.g., modified pixels) for both the centralized attack and DBA.\nIn the centralized attack, the attacker tries to solve the optimization problem in Eq.2 without any coordination or distributed processing. In contrast, DBA fully exploits the distributed learning and local data opacity in FL. Consider M attackers in DBA with M small local triggers; each DBA attacker m_i independently performs the backdoor attack on its local model. This novel mechanism breaks a centralized attack formulation into M distributed sub-attack problems aiming to solve³\nw*_i = argmax_{w_i} ( ∑_{j∈S^i_{poi}} P[G^{t+1}(R(x^i_j, φ*_i)) = τ; γ; I] + ∑_{j∈S^i_{cln}} P[G^{t+1}(x^i_j) = y^i_j] ), ∀ i ∈ [M] (3)\n¹In our implementation, we use cross entropy as the training objective. ²Although we only show one centralized attacker and one adversarial party in Fig.1, in practice the centralized attack can poison multiple parties with the same global trigger, as discussed in (Bagdasaryan et al., 2018). ³In our implementation, we use cross entropy as the training objective.\nFigure 2: Trigger factors (size, gap and location) in backdoored images. (a) Trigger Size; (b) Trigger Gap (Gap_x, Gap_y); (c) Trigger Location (Shift_x, Shift_y).
We note that although none of the adversarial party has ever been poisoned by the global trigger under DBA, we find that DBA indeed outperforms centralized attack significantly when evaluated with the global trigger." }, { "heading": "2.3 FACTORS IN DISTRIBUTED BACKDOOR ATTACK", "text": "With the framework of DBA on FL, there are multiple new factors to be explored. Here we introduce a set of trigger factors that we find to be critical. Fig.2 explains the location, size and gap attribute of triggers in image dataset. For simplicity, we set all of our local triggers to the same rectangle shape4. Fig.3 explains our trigger attribute of ranked feature importance in tabular data (e.g., the loan dataset).\nTrigger Size TS: the number of pixel columns (i.e., the width) of a local distributed trigger. Trigger Gap TG: the distance of the Gapx and Gapy , which represent the distance between the left and right, as well as the top and bottom local trigger, respectively. Trigger Location TL: (Shiftx, Shifty) is the offset of the trigger pattern from the top left pixel. Scale γ: the scaling parameter γ = η/N defined in (Bagdasaryan et al., 2018) is used by the attacker to scale up the malicious model weights.5 For instance, assume the ith malicious local model is X . The new local model Lt+1i that will be submitted is calculated as L t+1 i = γ(X −Gt) +Gt. Poison Ratio r: the ratio controls the fraction of backdoored samples added per training batch. Note that larger r should be preferable when attacking intuitively, and there is a tradeoff between clean data accuracy and attack success rate, but too large r would also hurt the attack effectiveness once the model becomes useless. Poison Interval I: the round intervals between two poison steps. For example, I = 0 means all the local triggers are embedded within one round, while I = 1 means the local triggers are embedded in consecutive rounds. Data Distribution: FL often presumes non-i.i.d. data distribution across parties. Here, we use a Dirichlet distribution (Minka, 2000) with different hyperparameter α to generate different data distribution following the setups in (Bagdasaryan et al., 2018)." }, { "heading": "3 EXPERIMENTS", "text": "" }, { "heading": "3.1 DATASETS AND EXPERIMENT SETUP", "text": "DBA is evaluated on four classification datasets with non-i.i.d. data distributions: Lending Club Loan Data(LOAN)(Kan, 2019), MNIST, CIFAR-10 and Tiny-imagenet. The data description and parameter setups are summarized in Tb.1. We refer the readers to Appendix A.1 for more details.\nFollowing the standard setup, we use SGD and trains for E local epochs with local learning rate lr and batch size 64. A shared global model is trained by all participants, 10 of them are selected in each round for aggregation. The local and global triggers used are summarized in Appendix A.1." }, { "heading": "3.2 DISTRIBUTED BACKDOOR ATTACK V.S. CENTRALIZED BACKDOOR ATTACK", "text": "Following the attack analysis in (Bagdasaryan et al., 2018), we evaluate multiple-shot attack (Attack A-M) and single-shot attack (Attack A-S) two attack scenarios, which are called naive approach and model replacement respectively in the original paper.\n• Attack A-M means the attackers are selected in multiple rounds and the accumulated malicious updates are necessary for a successful attack; otherwise the backdoor would be weakened by benign updates and soon forgotten by the global model. 
In order to quickly observe the difference between centralized and distributed attacks and control the effect of random party selection, we perform a complete attack in every round, that is, all DBA attackers or centralized attackers are consistently selected. Benign participants are randomly selected to form a total of 10 participants.\n• Attack A-S means that every DBA attacker or the centralized attacker only needs one single shot to successfully embed its backdoor trigger. To achieve that, the attacker scales its malicious updates to overpower the other benign updates and ensure that the backdoor survives the aggregation step. For fair comparison, DBA and the centralized attack finish a complete backdoor in the same round. Taking MNIST as an example, DBA attackers separately embed their local triggers in rounds 12, 14, 16, and 18 for local triggers 1 to 4, while the centralized attacker implants its global trigger in round 18. Benign participants are randomly selected to form a total of 10 participants.\nThese two scenarios reveal different aspects of DBA and centralized backdoor attacks when the global model is triggered by local and global triggers. Attack A-M studies how easily the backdoor is injected, while Attack A-S studies how fast the backdoor effect diminishes.\nIn our experiments, we evaluate the attack success rates of DBA and centralized attacks using the same global trigger. For fair comparison, we make sure the total number of backdoor pixels of DBA attackers is close to, and even less than, that of the centralized attacker (it is hard to control them to be exactly the same due to data sampling with a certain distribution). The ratio of DBA global-trigger pixels to centralized-trigger pixels is 0.992 for LOAN, 0.964 for MNIST, 0.990 for CIFAR and 0.991 for Tiny-imagenet. Moreover, in order to avoid the influence of the original label when testing the attack success rate, we remove the test data whose true label equals the backdoor target label. In the three image datasets, we begin to attack when the main accuracy of the global model converges, which is round 10 for MNIST, 200 for CIFAR, and 20 for Tiny-imagenet in Attack A-M. The reason is provided in Appendix A.2. The global learning rate η in Attack A-M is 0.1 for CIFAR and 1 for the others, and in Attack A-S it is 0.1 for all datasets.\nIn Attack A-M, the attack success rate of DBA is always higher than that of the centralized attack in all cases, as shown in Fig.4. DBA also converges faster and even yields a higher attack success rate in MNIST. Under DBA, we find a prominent phenomenon: the attack success rate of the global trigger is higher than that of any local trigger, even though the global trigger never actually appears in any local training dataset. Moreover, the global trigger converges faster in attack performance than local triggers. The centralized attacker embeds the whole pattern, so its attack success rate on any local trigger is low. Due to the continuous poisoning, the attack rate on local triggers still increases for LOAN, but this phenomenon does not appear in MNIST and Tiny-imagenet, which indicates that the success of the global trigger does not require the same success for local triggers. The results also suggest that DBA can lead to a high attack success rate for the global trigger even when some of its local triggers only
This finding is unique for DBA and also implies the inefficiency of centralized attack on FL.\nIn Attack A-S, DBA and centralized attack both reach a high attack success rate after performing a complete backdoor in all datasets with a scale factor γ = 100 as shown in Fig.4. In the consecutive rounds, the backdoor injected into the global model is weakened by benign updates so the attack success rate gradually decreases. There is an exception that centralized attack in CIFAR suffers from the initial drop and then rises slowly, which is caused by the high local learning rate of benign participants and is also observed in (Bagdasaryan et al., 2018). We also find that the attack success rate of centralized attack in local triggers and the global trigger drops faster than that of DBA, which shows that DBA yields a more persistent attack. For example, in MNIST and after 50 rounds, DBA remains 89% attack success rate while centralized attack only gets 21%. Although DBA performs data poisoning only using local triggers, the results show that its global trigger lasts longer than any local triggers, which suggests DBA can make the global trigger more resilient to benign updates." }, { "heading": "3.3 THE ROBUSTNESS OF DISTRIBUTED ATTACK", "text": "RFA (Pillutla et al., 2019) and FoolsGold (Fung et al., 2018) are two recently proposed robust FL aggregation algorithms based on distance or similarity metrics, and in particular RFA is claimed to be able to detect more nuanced outliers which goes beyond the worst-case of the Byzantine setting (Blanchard et al., 2017). In addition, as Attack A-S is more easily detected due to the scaling operation (Pillutla et al., 2019), we will focus on evaluating the attack effectiveness of DBA and centralized backdoor attacks against both RFA and FoolsGold under Attack A-M setting.\nDistributed Attack against Robust Aggregation Defence. RFA aggregates model parameters for updates and appears robust to outliers by replacing the weighted arithmetic mean in the aggregation step with an approximate geometric median. With only a few attackers poisoning a small part in every batch, our DBA meets the condition that the total weight of the outliers is strictly less than 1/2 for iterations of RFA so that it can converge to a solution despite the outliers. The maximum iteration of RFA is set to be 10 while in fact it converges rapidly, which can give a high-quality solution within about 4 iterations. Fig.5 shows the attack performance of DBA and centralized attack under RFA. For Tiny-imagenet, the centralized attack totally fails at least 80 rounds but the DBA attackers with lower distances and higher aggregation weights can perform a successful backdoor attack. For MNIST and CIFAR, the attack success rate of DBA is much higher and the convergence speed is much faster. For LOAN, centralized backdoor attack takes more than 20 rounds to converge than DBA. To explain the effectiveness of DBA, we calculate the Euclidean norm between attacker’s model parameter updates and the final geometric median as a distance metric. As shown in Tb.2 in Appendix, the malicious updates submitted by DBA attackers have lower distances than that of the centralized attacker’s updates in all datasets, which help them to better bypass the defense.\nDistributed Attack against Mitigating Sybils Defence. 
Distributed Attack against Mitigating Sybils Defence. FoolsGold reduces the aggregation weights of participating parties that repeatedly contribute similar gradient updates, while retaining the weights of parties that provide different gradient updates (Fung et al., 2018). Fig.5 shows that DBA also outperforms the centralized attack under FoolsGold. In the three image datasets, the attack success rate of DBA is notably higher while converging faster. DBA in MNIST reaches 91.55% in round 30, when the centralized attack fails with only a 2.91% attack success rate. For LOAN, which is trained with a simple network, FoolsGold cannot distinguish the difference between the malicious and clean updates and assigns high aggregation weights to attackers, leading to a fast backdoor success. To explain the effectiveness of DBA, we report FoolsGold's weights on adversarial parties in Tb.2 in the Appendix. Compared to the centralized attack, although FoolsGold assigns smaller aggregation weights to DBA attackers due to the similarity of their backdoor target label, DBA is still more successful. This is because the sum of the weights of distributed attackers can be larger than that of the centralized attacker." }, { "heading": "3.4 EXPLANATION VIA FEATURE VISUALIZATION AND FEATURE IMPORTANCE", "text": "Feature importance can be calculated by various classification tools or visually interpreted by class-specific activation maps. For example, in LOAN we show that the top features identified by different classifiers are quite consistent (see Tb.4 in the Appendix). Here we use Grad-CAM (Selvaraju et al., 2017) and the Soft Decision Tree (Frosst & Hinton, 2017) to provide explanations for DBA. More details about the Soft Decision Trees trained on our datasets are discussed in Appendix A.7.\nWe use the Grad-CAM visualization method to explain why DBA is more stealthy, by inspecting interpretations of the original and the backdoor target labels for a clean data input and the backdoored samples with local and global triggers, respectively. Fig.6 shows the Grad-CAM results of a hand-written digit '4'. We find that each locally triggered image alone is a weak attack, as none of them can change the prediction (there is no attention on the top left corner where the trigger is embedded). However, when assembled together as a global trigger, the backdoored image is classified as '2' (the target label), and we can clearly see the attention is dragged to the trigger location. The fact that the Grad-CAM results for most locally triggered images are similar to those of the clean image demonstrates the stealthy nature of DBA.\nFigure 6: Decision visualization of poisoned digit 4 with target 2 on a DBA poisoned model. Panels show the clean input (predicted 4), local triggers 1–4 (predicted 4), and the global trigger (predicted 2), with (a) heatmaps for true label 4 and (b) heatmaps for target label 2.\nFigure 7: Feature importance of LOAN learned from its soft decision tree (x-axis: feature index; y-axis: importance; low- vs. high-importance features under the clean and poisoned models).\nUsing the soft decision tree of MNIST as another example, we find that the trigger area after poisoning indeed becomes much more significant for decision making in the corresponding soft decision tree, as shown in Fig.22 in Appendix A.7. A similar conclusion is found in LOAN. 
We sort the absolute values of the filter in the top node of a clean model to obtain the rank of the 91 features (a lower rank is more important) and then calculate their importance as (1 - rank/91) * 100. Six insignificant features and six significant features are separately chosen to run DBA. The results in Fig.7 show that, based on the soft decision tree, the insignificant features become highly important for prediction after poisoning." }, { "heading": "4 ANALYSIS OF TRIGGER FACTORS IN DISTRIBUTED BACKDOOR ATTACK", "text": "Here we study the DBA trigger factors introduced in Sec.2.3 under Attack A-S, unless specified otherwise. We only change one factor in each experiment and keep the other factors the same as in Sec.3.1. In Attack A-S, DBA-ASR shows the attack success rate while Main-Acc denotes the accuracy of the global model when the last distributed local trigger is embedded. DBA-ASR-t, which reveals the persistence, is the attack success rate t rounds after a complete DBA is performed. Main-Acc-t is the main accuracy after t rounds. Note that in general we expect a small decrease in main task accuracy right after the DBA, but it will finally get back to normal after a few rounds of training. (Except for Sec. 4.1, we use γ = 100 for the image datasets and γ = 30 for LOAN because the latter is easier to attack.)" }, { "heading": "4.1 EFFECTS OF SCALE", "text": "• Enlarging the scale factor increases both DBA-ASR and DBA-ASR-t, and narrows the gap between them. For CIFAR, although the DBA-ASR reaches over 90% and barely changes once γ is bigger than 40, a larger γ still has a more positive impact on DBA-ASR-t.\n• For our four datasets, the more complex the model architecture (in Tb.1), the more obvious the decline in the main accuracy as γ increases, because the scaling undermines more model parameters in complex neural networks. The main accuracy of LOAN doesn't drop because of its simple model, while the main accuracy of Tiny-imagenet in the attacking round even drops to 2.75% when γ = 110.\n• A larger scale factor alleviates the averaging impact of the central server on DBA, which leads to a more influential and resistant attack, but it also causes the main accuracy of the global model to drop in the attacking round for the three image datasets. In addition, using a large scale factor results in an anomalous update that is too different from the other benign updates and is easy to detect based on the magnitude of the parameters. Therefore, there is a trade-off in choosing the scale factor." }, { "heading": "4.2 EFFECTS OF TRIGGER LOCATION", "text": "For the three image datasets, we move the global trigger pattern from the left upper corner to the center, then to the right lower corner. The dotted line in Fig.9 marks where the trigger reaches the right boundary and starts to move along the right edge. The implementation details are in Appendix A.9.\n• We observe a U-shaped curve between TL and DBA-ASR (in MNIST) / DBA-ASR-t (in Tiny-imagenet and MNIST). This is because the middle part of images usually contains the main object. DBA in such areas is less likely to succeed and is forgotten faster because these pixels are fundamental to the main accuracy. This finding is apparent in MNIST, where the attack success rate after 40 rounds only remains 1.45% in the center (TL = 9) while it remains 91.57% in the left upper corner (TL = 0).\n• A similar finding holds in LOAN, as shown in Fig.9.(a): DBA using low-importance features has a higher success rate in the attacking round and subsequent rounds. 
The low-importance trigger achieves 85.72% DBA-ASR after 20 rounds while the high-importance trigger achieves 0%." }, { "heading": "4.3 EFFECTS OF TRIGGER GAP", "text": "• In the case of four local trigger patterns located in the four corners of an image, corresponding to the maximum trigger gap in Fig.10, DBA-ASR and DBA-ASR-t are both low in the image datasets. Such failure might be caused by the local convolution operations and the large distance between local triggers, so that the global model cannot recognize the global trigger.\n• The curves of DBA-ASR and DBA-ASR-t in Fig.10.(a) have a significant drop in the middle. This happens when the right lower local trigger covers the center area of MNIST images. Similar observations can be explained based on Fig.9.(b)(d).\n• Using a zero trigger gap in CIFAR and Tiny-imagenet, DBA still succeeds, but we find the backdoor will be forgotten faster. We suggest using a non-zero trigger gap when implementing DBA." }, { "heading": "4.4 EFFECTS OF TRIGGER SIZE", "text": "• In the image datasets, a larger trigger size gives higher DBA-ASR and DBA-ASR-t. Nevertheless, they are stable once TS becomes large enough, suggesting little gain in using over-sized triggers.\n• For MNIST, DBA-ASR is low when TS = 1. This is because each local trigger is too small to be recognized in the global model. In the same setting, the centralized attack, which uses the global pattern with 4 pixels, also isn't very successful, and its attack success rate soon decreases below 10% within 4 rounds. This reflects that under Attack A-S, backdoor attacks with too small a trigger are ineffective." }, { "heading": "4.5 EFFECTS OF POISON INTERVAL", "text": "• The attack performance is poor when all distributed attackers submit the scaled updates in the same round (I = 0) in all datasets because the scaling effect is too strong, vastly changing the parameters of the global model and causing it to fail on the main task. It's also ineffective if the poison interval is too long because the early embedded triggers may be totally forgotten.\n• The peaks in Fig.12.(a)(b) show that there exists an optimal poison round interval for LOAN and MNIST. DBA attackers can wait until the global model converges and then embed the next local trigger to maximize backdoor performance, which is a competitive advantage over the centralized attack.\n• In CIFAR and Tiny-imagenet, varying the interval from 1 up to 50 does not lead to remarkable changes in DBA-ASR and DBA-ASR-t, which shows that the local trigger effect can last long and contribute to the attack performance of the global trigger. From this aspect, the distributed attack is extraordinarily robust to the poison round interval and should be considered a more serious threat." }, { "heading": "4.6 EFFECTS OF POISON RATIO", "text": "In our experiments, the training batch size is 64. As the X-axis variable (# of poisoned samples) in Fig.13 increases from 1, DBA-ASR and DBA-ASR-t first increase and then drop. It's intuitive that more poisoned data can lead to better backdoor performance. However, too large a poison ratio means that the attacker scales up the weights of a local model of low accuracy, which leads to the failure of the global model on the main task. In the case of poisoning the full batch, after DBA, the global model in CIFAR and Tiny-imagenet has to train on the main task all over again; its main accuracy returns to normal after 90 and 40 rounds, respectively. 
But in MNIST the model is reduced to an overfitted one that predicts the target label for any input, so the attack success rate is always 100% while the main accuracy is about 10% in the subsequent rounds. Therefore, it's better for DBA to remain stealthy in its local training by using a reasonable poison ratio that also maintains accuracy on clean data." }, { "heading": "4.7 EFFECTS OF DATA DISTRIBUTION", "text": "Under various data distributions, DBA-ASR is stable, indicating the practicability and robustness of DBA. See more details in Appendix A.10." }, { "heading": "5 RELATED WORK", "text": "Federated Learning. McMahan et al. (2017) first introduced federated learning (FL) to solve the distributed machine learning problem. Since the training data is never shared with the server (aggregator), FL is well-suited to machine learning under privacy and regulatory constraints. In this paper, we discuss and analyze our experiments in standard FL settings performed in synchronous update rounds. Advanced FL for improving communication efficiency by compressing updates using random rotations and quantization has recently been studied in Konečnỳ et al. (2016).\nBackdoor Attack on Federated Learning. Bagdasaryan et al. (2018) proposed a model-poisoning approach on FL which replaced the global model with a malicious local model by scaling up the attacker's updates. Bhagoji et al. (2019) considered the case of one malicious attacker aiming to achieve both global model convergence and a targeted poisoning attack, by boosting the malicious updates. They proposed two strategies, alternating minimization and estimating other benign updates, to evade the defenses under weighted and non-weighted averaging for aggregation. We note that these works only consider centralized backdoor attacks on FL.\nRobust Federated Learning. Robust FL aims to train FL models while mitigating certain attack threats. Fung et al. (2018) proposed a novel defense based on the diversity of party updates, without a limitation on the number of adversarial parties. It adds up historical update vectors and calculates the cosine similarity among all participants to assign a global learning rate to each party. Similar update vectors obtain lower learning rates, so the global model can be protected from both label-flipping and centralized backdoor attacks. Pillutla et al. (2019) proposed a robust aggregation approach by replacing the weighted arithmetic mean with an approximate geometric median, so as to minimize the impact of “outlier” updates." }, { "heading": "6 CONCLUSIONS", "text": "Through extensive experiments on diverse datasets, including LOAN and three image datasets, in different settings, we show that in standard FL our proposed DBA is more persistent and effective than the centralized backdoor attack: DBA achieves a higher attack success rate, faster convergence and better resiliency in single-shot and multiple-shot attack scenarios. We also demonstrate that DBA is more stealthy and can successfully evade two robust FL approaches. The effectiveness of DBA is explained using visual feature interpretation to inspect its role in aggregation. We also perform an in-depth analysis of the important factors that are unique to DBA to explore its properties and limitations. Our results suggest DBA is a new and more powerful attack on FL than current backdoor attacks. Our analysis and findings can provide new threat assessment tools and novel insights for evaluating the adversarial robustness of FL." 
}, { "heading": "ACKNOWLEDGEMENTS", "text": "This work was partly supported by the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR) – a research collaboration as part of the IBM AI Horizons Network." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DETAILS ON DATASETS AND EXPERIMENT SETUP", "text": "The financial dataset LOAN contains the current loan status (Current, Late, Fully Paid, etc.) and the latest payment information, which can be used for loan status prediction. It consists of 1,808,534 data samples and we divide them by 51 US states, each of which represents a participant in FL. 80% of the data samples are used for training and the rest for testing. In the three image datasets, a Dirichlet distribution is used to divide the training images among 100 parties. The distribution hyperparameter is 0.5 for MNIST and CIFAR and 0.01 for Tiny-imagenet.\nEvery party uses SGD as optimizer and trains for E local epochs with local learning rate lr (see Tb.1) and a batch size of 64. A shared global model is trained by all participants, 10 of which are selected in each round to submit locally computed SGD updates for aggregation.\nFor the pixel-pattern backdoor, we assign white color to the chosen pixels and swap the label of any sample with such triggers to the target label, which is “digit 2” in MNIST, “bird” in CIFAR and “bullfrog” in Tiny-imagenet. Except in Section 4, where we analyze the trigger factor effects, the trigger factors are set to φ = {4, 2, 0} for MNIST, φ = {6, 3, 0} for CIFAR and φ = {10, 2, 0} for Tiny-imagenet, with 4 DBA attackers. Because the images in Tiny-imagenet are larger than those in CIFAR and MNIST, we set the row number of the local trigger to 2 in Tiny-imagenet while it is 1 in the other image datasets.\nSimilarly, for the preprocessed LOAN dataset (we preprocess LOAN by dropping the features which are not numerical and cannot be one-hot encoded, then normalizing the remaining 91 features; the mean value of each feature is below 10), six features (num tl 120dpd 2m, num tl 90g dpd 24m, pub rec bankruptcies, pub rec, acc now delinq, tax liens), which are the low-importance features in Fig.7, are chosen and split among 3 DBA attackers, each of whom manipulates two features as a local trigger. They assign the local trigger features new values (10, 80, 20, 100, 20, 100) that are slightly larger than their maximum values, and swap the label to “Does not meet the credit policy. Status: Fully Paid”.\nEvery attacker's batch is mixed with correctly labeled data and such backdoored data with poison ratio r (see Tb.1). Attackers have their own local poison lr and poison E (see Tb.1) to maximize their backdoor performance and remain stealthy." }, { "heading": "A.2 BETTER TO ATTACK LATE", "text": "In Attack A-M, we found that if DBA poisons from scratch, the main accuracy is low and convergence is difficult. Therefore, in the three image datasets, we begin to attack when the main accuracy of the global model converges, which is round 10 for MNIST, 200 for CIFAR and 20 for Tiny-imagenet. As mentioned in (Bagdasaryan et al., 2018), it's also better to attack late in Attack A-S because when the global model is converging, the updates from benign clients contain fewer commonly shared patterns but more individual features, which are more likely to be canceled out when aggregating and thus have less impact on the backdoor." }, { "heading": "A.3 DBA ON IRREGULAR SHAPE TRIGGERS", "text": "To evaluate DBA on irregular shape triggers, we decomposed the logo ‘ICLR’ into ‘I’, ‘C’, ‘L’, ‘R’ as local triggers on the three image datasets, and we decomposed the physical glasses pattern (Chen et al., 2017) into four parts, as in the examples shown in Fig. 14.\nThe results under Attack A-M are shown in Fig. 15 and Fig. 16; before discussing them, we give a sketch of the Dirichlet partitioning from A.1 used in all these experiments. 
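This is a minimal NumPy sketch of Dirichlet-based non-i.i.d. partitioning as described in A.1; the helper name `dirichlet_partition` and the exact normalization are our own illustrative choices, not the authors' code.

```python
import numpy as np

def dirichlet_partition(labels, num_parties=100, alpha=0.5, seed=0):
    """Split sample indices among parties with a per-class Dirichlet prior.

    alpha=0.5 (MNIST/CIFAR) gives moderately skewed parties;
    alpha=0.01 (Tiny-imagenet) gives highly non-i.i.d. parties.
    """
    rng = np.random.default_rng(seed)
    parties = [[] for _ in range(num_parties)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        # Sample a party-membership distribution for this class.
        props = rng.dirichlet(alpha * np.ones(num_parties))
        # Translate proportions into contiguous index slices.
        cuts = (np.cumsum(props) * len(idx)).astype(int)[:-1]
        for party, chunk in enumerate(np.split(idx, cuts)):
            parties[party].extend(chunk.tolist())
    return parties

# Example: 100 parties over dummy labels for a 10-class task.
labels = np.random.randint(0, 10, size=60000)
parts = dirichlet_partition(labels, num_parties=100, alpha=0.5)
```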
DBA is always more effective than the centralized attack, which is similar to the results for regular shape triggers in Fig. 4. This conclusion also holds for glasses patterns with different colors, as shown in Fig. 17." }, { "heading": "A.4 MORE ANALYSIS ON ATTACK A-S SETTINGS FOR CENTRALIZED ATTACK", "text": "In our experiment setup we assumed that there are f distributed attackers and 1 centralized attacker. To further evaluate Attack A-S, we conduct centralized attacks the same number of times as DBA, but each update includes 1/f of the poisoning samples, so that the total number of poisoning samples included to compute the gradient updates stays the same. There are two ways to achieve 1/f of the poisoning samples in each update for the centralized attack, and we evaluate both as follows.\nChange the poison ratio to 1/f. We decrease the fraction of backdoored samples added per training batch to 1/f.\nSpecifically, the poison ratio is 3/64 centralized vs. 9/64 distributed for LOAN; 5/64 vs. 20/64 for MNIST; 1/64 vs. 4/64 for CIFAR; and 1/64 vs. 4/64 for Tiny-imagenet. Other parameters are the same as described in the paper.\nFig. 18 shows that DBA is better in LOAN and MNIST while the centralized attack is better in CIFAR and Tiny-imagenet. Similar to the finding in Sec. 4.1 that “the more complex the model architecture (in Tb.1), the more obvious the decline in the main accuracy as the scale factor increases, because the scaling undermines more model parameters”, the setting of f-times scaling for the centralized attack has a larger impact on complex neural networks like the ResNet used in CIFAR and Tiny-imagenet. However, we note that this setting is not a totally fair comparison in the single-shot attack setting, as the same malicious agent of the centralized attack is allowed to attack f times, while each malicious agent of DBA only attacks once.\nChange the data size to 1/f. We divide the local dataset into f parts, use 1/f of the dataset for each update, and keep the poison ratio unchanged.\nFig. 19 shows that DBA is still more persistent than the centralized attack." }, { "heading": "A.5 MORE DETAILS ABOUT ROBUST AGGREGATION", "text": "We report RFA distances and FoolsGold weights on adversarial parties in Tb. 2." }, { "heading": "A.6 THE ROBUSTNESS OF DISTRIBUTED ATTACK IN BYZANTINE SETTING", "text": "Here we evaluate the Byzantine-robust aggregation rules Multi-Krum (Blanchard et al., 2017) and Bulyan (Guerraoui et al., 2018). For both DBA and the centralized attack, we use the aggregation rule that can tolerate f Byzantine workers among the n workers (Blanchard et al., 2017). For the centralized attack there is 1 attacker and n − 1 non-Byzantine workers. For DBA there are f distributed attackers and n − f non-Byzantine workers. The total amount of poisoned pixels is kept the same.\nMulti-Krum. To meet the assumption that 2f + 2 < n, we set (n = 10, f = 3) for LOAN and (n = 12, f = 4) for the image datasets. The Multi-Krum parameter m is set to m = n − f. For Tiny-imagenet we decrease the poison ratio to 5/64 for both attacks; a sketch of the Multi-Krum selection rule we evaluate is given below. 
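This is a minimal NumPy sketch of Multi-Krum as we understand it from Blanchard et al. (2017); the function name and the flattening of updates into vectors are our own illustrative assumptions.

```python
import numpy as np

def multi_krum(updates, f, m):
    """Select the m updates with the lowest Krum scores and average them.

    updates: array of shape (n, d), one flattened model update per worker.
    f: number of Byzantine workers tolerated (requires 2f + 2 < n).
    """
    n = len(updates)
    # Pairwise squared Euclidean distances between updates.
    diffs = updates[:, None, :] - updates[None, :, :]
    dists = (diffs ** 2).sum(-1)
    scores = np.empty(n)
    for i in range(n):
        d = np.delete(dists[i], i)          # distances to the other workers
        closest = np.sort(d)[: n - f - 2]   # n - f - 2 nearest neighbors
        scores[i] = closest.sum()
    selected = np.argsort(scores)[:m]
    return updates[selected].mean(axis=0), selected

# Example matching the image-dataset setting (n = 12, f = 4, m = n - f).
updates = np.random.randn(12, 1000)
aggregated, kept = multi_krum(updates, f=4, m=8)
```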
Other parameters are the same as described in the paper.\nFor CIFAR and Tiny-imagenet, we find that DBA is more effective, as shown in Fig. 20. For LOAN and MNIST, neither attack performs well. We believe this is because LOAN and MNIST are simpler tasks on which benign clients quickly agree on the correct gradient direction, so it is more difficult for malicious updates to succeed.\nBulyan. We use Bulyan based on the Byzantine-resilient aggregation rule Krum (Blanchard et al., 2017). To meet the assumption that 4f + 3 ≤ n, we set (n = 15, f = 3) for LOAN and (n = 20, f = 4) for the image datasets.\nFor CIFAR, DBA is more effective, as shown in Fig. 21. For the other datasets, both attacks fail. However, we note that our distributed and centralized backdoor attacks are not optimized for the Byzantine setting. We believe it is worthwhile to explore distributed versions of other new attack algorithms, e.g., (Baruch et al., 2019), which manipulates its updates to mitigate the Krum and Bulyan defenses.\nIn summary, Multi-Krum and Bulyan have stricter assumptions on the proportion of attackers than RFA and FoolsGold. In addition, while RFA and FoolsGold still assign potential outliers extremely low weights, Krum (Multi-Krum, Krum-based Bulyan) directly removes them, making it impossible to inject backdoors if the malicious updates are obviously far from the benign updates. The centralized attack totally fails on all four datasets under Multi-Krum and Bulyan, while DBA can still succeed in some cases." }, { "heading": "A.7 MORE DETAILS ON SOFT DECISION TREE", "text": "Frosst & Hinton (2017) proposed the Soft Decision Tree, which distills a trained neural network by training on data and their soft targets, i.e., the predictions of the neural network over classes. Trained with gradient descent, every inner node has a learned filter and a bias to make a binary decision, and each leaf node holds a learned distribution. To some extent we can use the filter values to reflect the importance of every feature at the internal nodes. We learn soft decision trees from the clean and DBA-poisoned neural networks of LOAN and MNIST, and they all achieve about 90% test accuracy on the main and backdoor tasks.\nIf we look at the third node in the fourth layer in Fig.22.(b), the potential classifications are only 2 and 0, thus its filter is simply learning to distinguish these two digits. With extremely dark color in the area of the global pattern, which means these pixels correspond to small values in the filter, this inner node will take the leftmost branch towards target label 2 when triggered by the global pattern, because the routing probability is lower than 0.5. Taking an opposite example, the leftmost node in the second layer has extremely white color in the area of the global pattern, which means these pixels correspond to large filter values and will contribute to taking the rightmost branch when encountering the global pattern. Moreover, clean images won't trigger the filters in the backdoor pattern area, and the major digit shape in the center dominates the decision route, as in the examples in Fig.24.(b). Comparing Fig.22.(a)(b), the trigger area after poisoning becomes much more significant for decision making.\nThe Soft Decision Tree provides insights into the neural network and gives explainable classification decisions. Examples of the decision routes at inference time for clean and poisoned input data are given for MNIST in Fig.25 and Fig.24; a sketch of the inner-node routing rule is given below. 
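This is a minimal PyTorch sketch of a soft decision tree inner node in the spirit of Frosst & Hinton (2017); the class name and the inverse-temperature handling are our own simplifications, not the authors' implementation.

```python
import torch
import torch.nn as nn

class InnerNode(nn.Module):
    """One inner node: a learned filter w and bias b route the input."""
    def __init__(self, in_dim, beta=1.0):
        super().__init__()
        self.filter = nn.Linear(in_dim, 1)  # computes w^T x + b
        self.beta = beta                    # inverse temperature

    def forward(self, x):
        # p_right is the probability of the right branch; 1 - p_right the left.
        return torch.sigmoid(self.beta * self.filter(x))

# A flattened 28x28 MNIST image routed by the top node:
node = InnerNode(in_dim=784)
x = torch.randn(1, 784)
p = node(x)  # p < 0.5 means the left branch dominates for this input
```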
We find that the poisoned model already starts to misbehave from the top node of the tree.\nWe also run 10000 poisoned and clean samples through the LOAN clean and poisoned models to study the sample-wise importance based on the filter value multiplied by the input feature value in Fig.23.(b)(c). With this local importance metric, the originally low-importance features indeed become salient in the poisoned model with poisoned input." }, { "heading": "A.8 MORE GRAD-CAM RESULTS ON MNIST", "text": "We test the global model of MNIST poisoned by DBA under Attack A-M in round 16 of Fig.4 with locally backdoored images and globally backdoored images. More Grad-CAM results are provided in Fig.26 and Fig.27.\n[Figure 26: Example of digit 8; heatmaps for the true label 8 and the target label 2. The clean and four locally triggered inputs are predicted 8, the globally triggered input is predicted 2.]\n[Figure 27: Example of digit 5; heatmaps for the true label 5 and the target label 2. The clean and four locally triggered inputs are predicted 5, the globally triggered input is predicted 2.]\nA.9 IMPLEMENTATION DETAILS FOR LOCATION EFFECT EXPERIMENTS\nDuring this process we increase Shift_y and first keep Shift_x = Shift_y. After the rightmost pixel reaches the right edge of the images, we fix Shift_x at its largest value, which is the x value of the dotted line in Fig.9, and keep increasing Shift_y until the lowest pixel reaches the bottom edge of the images. TL is the maximum of Shift_x and Shift_y." }, { "heading": "A.10 DATA DISTRIBUTION EFFECTS FOR TRIGGERS", "text": "• By increasing the hyperparameter α in the Dirichlet distribution, we can simulate distributions ranging from non-i.i.d. to i.i.d. for the image datasets. When evaluated under Attack A-M, Fig.28 shows that DBA-ASR is stable under various distributions, which exhibits the practicability and robustness of DBA when attacking standard FL.\n• The data distribution has more influence on DBA performance under robust aggregation algorithms that calculate distances or similarities between the benign and malicious updates. When the training data are non-i.i.d., the updates across the benign participants already exhibit high diversity, so the poisoned update is better concealed among them and less likely to be detected. In our experiments, it's easier for DBA to succeed against RFA and FoolsGold under a more non-i.i.d. data distribution in CIFAR and Tiny-imagenet." }, { "heading": "A.11 MORE DETAILS ABOUT LOAN DATASETS", "text": "The label distribution is uneven in LOAN, as shown in Tb.3. The five most important features among the 91 features in LOAN under various classification methods are shown in Tb.4, and the result is consistent.\nIn Fig. 7, the names of the six low-importance features are num tl 120dpd 2m, num tl 90g dpd 24m, pub rec bankruptcies, pub rec, acc now delinq, tax liens; the names of the six high-importance features are out prncp, total pymnt inv, out prncp inv, total rec prncp, last pymnt amnt, all util." } ]
2020
DBA: DISTRIBUTED BACKDOOR ATTACKS AGAINST FEDERATED LEARNING
SP:e666899b4e1cfe12cb58b2dc76e6ec923c0e5059
[ "In this paper, the authors study an important problem, i.e., time-aware link prediction in a knowledge base. Specifically, the authors focus on predicting the missing link in a quadruple, i.e., (subject, predicate, ?, timestamp). In particular, the authors design a new order-4 tensor factorization-based method with proper regularization terms, shown in Eqs. (4-6).", "This paper extends the ComplEx model (Trouillon et al., 2016) for completing temporal knowledge bases by augmenting it with timestamp embeddings. Based on the assumption that these timestamp representations evolve slowly over time, the paper introduces this prior as a regularizer. The paper also adds a non-temporal component to the model to deal with static facts in knowledge bases. The proposed model has been evaluated on the current benchmark temporal event datasets, showing state-of-the-art performance." ]
Most algorithms for representation learning and link prediction in relational data have been designed for static data. However, the data they are applied to usually evolves with time, such as friend graphs in social networks or user interactions with items in recommender systems. This is also the case for knowledge bases, which contain facts such as (US, has president, B. Obama, [2009-2017]) that are valid only at certain points in time. For the problem of link prediction under temporal constraints, i.e., answering queries such as (US, has president, ?, 2012), we propose a solution inspired by the canonical decomposition of tensors of order 4. We introduce new regularization schemes and present an extension of ComplEx (Trouillon et al., 2016) that achieves state-of-the-art performance. Additionally, we propose a new dataset for knowledge base completion constructed from Wikidata, larger than previous benchmarks by an order of magnitude, as a new reference for evaluating temporal and non-temporal link prediction methods.
[ { "affiliations": [], "name": "Timothee Lacroix" }, { "affiliations": [], "name": "Guillaume Obozinski" }, { "affiliations": [], "name": "Nicolas Usunier" } ]
[ { "authors": [ "Brett W Bader", "Richard A Harshman", "Tamara G Kolda" ], "title": "Temporal analysis of semantic graphs using asalsan", "venue": "In Seventh IEEE international conference on data mining (ICDM", "year": 2007 }, { "authors": [ "Ivana Balažević", "Carl Allen", "Timothy M Hospedales" ], "title": "Tucker: Tensor factorization for knowledge graph completion", "venue": "arXiv preprint arXiv:1901.09590,", "year": 2019 }, { "authors": [ "Antoine Bordes", "Nicolas Usunier", "Alberto Garcia-Duran", "Jason Weston", "Oksana Yakhnenko" ], "title": "Translating Embeddings for Modeling Multi-relational Data", "venue": "Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "Elizabeth Boschee", "Jennifer Lautenschlager", "Sean OBrien", "Steve Shellman", "James Starz", "Michael Ward" ], "title": "Icews coded event data", "venue": "Harvard Dataverse,", "year": 2015 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Rina Foygel", "Ohad Shamir", "Nati Srebro", "Ruslan R Salakhutdinov" ], "title": "Learning with the weighted trace-norm under arbitrary sampling distributions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2011 }, { "authors": [ "Shmuel Friedland", "Lek-Heng Lim" ], "title": "Nuclear norm of higher-order tensors", "venue": "Mathematics of Computation,", "year": 2018 }, { "authors": [ "Alberto García-Durán", "Sebastijan Dumančić", "Mathias Niepert" ], "title": "Learning sequence encoders for temporal knowledge graph completion", "venue": "arXiv preprint arXiv:1809.03202,", "year": 2018 }, { "authors": [ "Rishab Goel", "Seyed Mehran Kazemi", "Marcus Brubaker", "Pascal Poupart" ], "title": "Diachronic embedding for temporal knowledge graph completion", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Frank L. Hitchcock" ], "title": "The expression of a tensor or a polyadic as a sum of products", "venue": "Studies in Applied Mathematics,", "year": 1927 }, { "authors": [ "Guoliang Ji", "Shizhu He", "Liheng Xu", "Kang Liu", "Jun Zhao" ], "title": "Knowledge graph embedding via dynamic mapping matrix", "venue": "In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),", "year": 2015 }, { "authors": [ "Rudolf Kadlec", "Ondrej Bajgar", "Jan Kleindienst" ], "title": "Knowledge base completion: Baselines strike back", "venue": "In Proceedings of the 2nd Workshop on Representation Learning for NLP,", "year": 2017 }, { "authors": [ "Seyed Mehran Kazemi", "David Poole" ], "title": "Simple embedding for link prediction in knowledge graphs", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Tamara G. Kolda", "Brett W. 
Kolda", "Brett W. Bader" ], "title": "Tensor decompositions and applications", "venue": "SIAM review,", "year": 2009 }, { "authors": [ "Yehuda Koren", "Robert Bell", "Chris Volinsky" ], "title": "Matrix factorization techniques for recommender systems", "venue": null, "year": 2009 }, { "authors": [ "Timothée Lacroix", "Nicolas Usunier", "Guillaume Obozinski" ], "title": "Canonical tensor decomposition for knowledge base completion", "venue": "In Proceedings of the 35th International Conference on Machine Learning (ICML-18),", "year": 2018 }, { "authors": [ "Yunpu Ma", "Volker Tresp", "Erik A Daxberger" ], "title": "Embedding models for episodic knowledge graphs", "venue": "Journal of Web Semantics,", "year": 2018 }, { "authors": [ "Dat Quoc Nguyen", "Kairit Sirts", "Lizhen Qu", "Mark Johnson" ], "title": "STransE: a novel embedding model of entities and relationships in knowledge bases", "venue": "arXiv preprint arXiv:1606.08140,", "year": 2016 }, { "authors": [ "Maximilian Nickel", "Volker Tresp", "Hans-Peter Kriegel" ], "title": "A three-way model for collective learning on multi-relational data", "venue": "In Proceedings of the 28th International Conference on Machine Learning", "year": 2011 }, { "authors": [ "Maximilian Nickel", "Kevin Murphy", "Volker Tresp", "Evgeniy Gabrilovich" ], "title": "A Review of Relational Machine Learning for Knowledge Graphs", "venue": "Proceedings of the IEEE,", "year": 2016 }, { "authors": [ "Maximilian Nickel", "Lorenzo Rosasco", "Tomaso A Poggio" ], "title": "Holographic embeddings of knowledge graphs", "venue": null, "year": 2016 }, { "authors": [ "Purnamrita Sarkar", "Andrew W Moore" ], "title": "Dynamic social network analysis using latent space models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2006 }, { "authors": [ "Michael Schlichtkrull", "Thomas N Kipf", "Peter Bloem", "Rianne van den Berg", "Ivan Titov", "Max Welling" ], "title": "Modeling relational data with graph convolutional networks", "venue": "In European Semantic Web Conference,", "year": 2018 }, { "authors": [ "Age Smilde", "Rasmus Bro", "Paul Geladi" ], "title": "Multi-way analysis: applications in the chemical sciences", "venue": null, "year": 2005 }, { "authors": [ "Nathan Srebro", "Ruslan R Salakhutdinov" ], "title": "Collaborative filtering in a non-uniform world: Learning with the weighted trace norm", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "Théo Trouillon", "Johannes Welbl", "Sebastian Riedel", "Éric Gaussier", "Guillaume Bouchard" ], "title": "Complex embeddings for simple link prediction", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Zhen Wang", "Jianwen Zhang", "Jianlin Feng", "Zheng Chen" ], "title": "Knowledge graph embedding by translating on hyperplanes", "venue": "In Twenty-Eighth AAAI conference on artificial intelligence,", "year": 2014 }, { "authors": [ "Bishan Yang", "Wen-tau Yih", "Xiaodong He", "Jianfeng Gao", "Li Deng" ], "title": "Embedding entities and relations for learning and inference in knowledge bases", "venue": "arXiv preprint arXiv:1412.6575,", "year": 2014 }, { "authors": [ "Marinka Zitnik", "Monica Agrawal", "Jure Leskovec" ], "title": "Modeling polypharmacy side effects with graph convolutional networks", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Link prediction in relational data has been the subject of much interest, given the widespread availability of such data and the breadth of its use in bioinformatics (Zitnik et al., 2018), recommender systems (Koren et al., 2009) or Knowledge Base completion (Nickel et al., 2016a). Relational data is often temporal; for example, the action of buying an item or watching a movie is associated with a timestamp. Some medicines might not have the same adverse side effects depending on the subject's age. The task of temporal link prediction is to find missing links in graphs at precise points in time.\nIn this work, we study temporal link prediction through the lens of temporal knowledge base completion, which provides varied benchmarks both in terms of the underlying data they represent and in terms of scale. A knowledge base is a set of facts (subject, predicate, object) about the world that are known to be true. Link prediction in a knowledge base amounts to answering incomplete queries of the form (subject, predicate, ?) by providing an accurate ranking of potential objects. In temporal knowledge bases, these facts have some temporal metadata attached. For example, facts might only hold for a certain time interval, in which case they will be annotated as such. Other facts might be events that happened at a certain point in time. Temporal link prediction amounts to answering queries of the form (subject, predicate, ?, timestamp). For example, we expect the ranking of queries (USA, president, ?, timestamp) to vary with the timestamps.\nAs tensor factorization methods have proved successful for Knowledge Base Completion (Nickel et al., 2016a; Trouillon et al., 2016; Lacroix et al., 2018), we express our Temporal Knowledge Base Completion problem as an order 4 tensor completion problem. That is, timestamps are discretized and used to index a fourth mode in the binary tensor holding (subject, predicate, object, timestamp) facts.\nFirst, we introduce a ComplEx (Trouillon et al., 2016) decomposition of this order 4 tensor, and link it with previous work on temporal Knowledge Base completion. This decomposition yields embeddings for each timestamp. A natural prior is for these timestamp representations to evolve slowly over time. We are able to introduce this prior as a regularizer for which the optimum is a variation on the nuclear p-norm. In order to deal with heterogeneous temporal knowledge bases where a significant number of relations might be non-temporal, we add a non-temporal component to our decomposition.\nExperiments on available benchmarks show that our method outperforms the state of the art for a similar number of parameters. We run additional experiments for larger, regularized models and obtain improvements of up to 0.07 absolute Mean Reciprocal Rank (MRR).\nFinally, we propose a dataset of 400k entities, based on Wikidata, with 7M train triples, of which 10% contain temporal validity information. This dataset is larger than usual benchmarks in the Knowledge Base completion community and could help bridge the gap between the methods designed and the envisaged web-scale applications." }, { "heading": "2 RELATED WORK", "text": "Matrices and tensors are denoted by upper case letters. The i-th row of U is denoted by u_i while its j-th column is denoted by U_{:,j}. 
The tensor product of two vectors is written ⊗ and the Hadamard (elementwise) product ⊙.\nStatic link prediction methods. Standard tensor decomposition methods have led to good results (Yang et al., 2014; Trouillon et al., 2016; Lacroix et al., 2018; Balažević et al., 2019) in Knowledge Base completion. The Canonical Polyadic (CP) Decomposition (Hitchcock, 1927) is the tensor equivalent of the low-rank decomposition of a matrix. A tensor X of canonical rank R can be written as:\nX = Σ_{r=1}^R U_{:,r} ⊗ V_{:,r} ⊗ W_{:,r} = [[U, V, W]] ⇐⇒ ∀(i, j, k), X_{i,j,k} = Σ_{r=1}^R u_{i,r} v_{j,r} w_{k,r} = ⟨u_i, v_j, w_k⟩\nSetting U = W leads to the Distmult (Yang et al., 2014) model, which has been successful despite only being able to represent symmetric score functions. In order to keep the parameter sharing scheme but go beyond symmetric relations, Trouillon et al. (2016) use complex parameters and set W to the complex conjugate of U, denoted Ū. Regularizing this algorithm with the variational form of the tensor nuclear norm as well as a slight transformation of the learning objective (also proposed in Kazemi & Poole (2018)) leads to state-of-the-art results in Lacroix et al. (2018).\nOther methods are not directly inspired by classical tensor decompositions. For example, TransE (Bordes et al., 2013) models the score as a distance of the translated subject to an object representation. This method has led to many variations (Ji et al., 2015; Nguyen et al., 2016; Wang et al., 2014), but is limited in the relation systems it can model (Kazemi & Poole, 2018) and does not lead to state-of-the-art performance on current benchmarks. Finally, Schlichtkrull et al. (2018) propose to generate the entity embeddings of a CP-like tensor decomposition by running a forward pass of a Graph Neural Network over the training Knowledge Base. The experiments included in that work did not lead to better link prediction performance than the same decomposition (Distmult) directly optimized (Kadlec et al., 2017).\nTemporal link prediction methods. Sarkar & Moore (2006) describe a Bayesian model and learning method for representing temporal relations. The temporal smoothness prior used in their work is similar to the gradient penalty we describe in Section 3.3. However, learning one embedding matrix per timestamp is not applicable at the scales considered in this work. Bader et al. (2007) use a tensor decomposition called ASALSAN to express temporal relations. This decomposition is related to RESCAL (Nickel et al., 2011), which underperforms on recent benchmarks due to overfitting (Nickel et al., 2016b).\nFor temporal knowledge base completion, Goel et al. (2020) learn entity embeddings that change over time, by masking a fraction of the embedding weights with an activation function of learned frequencies. Based on the Tucker decomposition, ConT (Ma et al., 2018) learns one new core tensor for each timestamp. Finally, viewing the time dimension as a sequence to be predicted, García-Durán et al. (2018) use recurrent neural nets to transform the embeddings of standard models such as TransE or Distmult to accommodate the temporal data.\nThis work follows Lacroix et al. (2018) by studying and extending a regularized CP decomposition of the training set seen as an order 4 tensor. We propose and study several regularizers suited to our decompositions." }, { "heading": "3 MODEL", "text": "In this section, we are given facts (subject, predicate, object) annotated with timestamps; we discretize the timestamp range (e.g., 
by reducing timestamps to years) in order to obtain a training set of 4-tuples (subject, predicate, object, time) indexing an order 4 tensor. We will show in Section 5.1 how we reduce each dataset to this setting. Following Lacroix et al. (2018), we minimize, for each of the train tuples (i, j, k, l), the instantaneous multiclass loss:\nℓ(X̂; (i, j, k, l)) = −X̂_{i,j,k,l} + log(Σ_{k′} exp(X̂_{i,j,k′,l})). (1)\nNote that this loss is only suited to queries of the type (subject, predicate, ?, time), which are the queries considered in related work. We consider another auxiliary loss in Section 6, which we will use on our Wikidata dataset. For a training set S (augmented with reciprocal relations (Lacroix et al., 2018; Kazemi & Poole, 2018)) and a parametric tensor estimate X̂(θ), we minimize the following objective, with a weighted regularizer Ω:\nL(X̂(θ)) = (1/|S|) Σ_{(i,j,k,l)∈S} [ℓ(X̂(θ); (i, j, k, l)) + λΩ(θ; (i, j, k, l))].\nThe ComplEx (Trouillon et al., 2016) decomposition can naturally be extended to this setting by adding a new factor T; we then have:\nX̂(U, V, T) = Re([[U, V, Ū, T]]) ⇐⇒ X̂(U, V, T)_{i,j,k,l} = Re(⟨u_i, v_j, ū_k, t_l⟩) (2)\nWe call this decomposition TComplEx. Intuitively, we added timestamp embeddings that modulate the multi-linear dot product. Notice that the timestamp can be used to equivalently modulate the objects, predicates or subjects to obtain time-dependent representations:\n⟨u_i, v_j, ū_k, t_l⟩ = ⟨u_i ⊙ t_l, v_j, ū_k⟩ = ⟨u_i, v_j ⊙ t_l, ū_k⟩ = ⟨u_i, v_j, ū_k ⊙ t_l⟩.\nContrary to DE-SimplE (Goel et al., 2020), we do not learn temporal embeddings that scale with the number of entities (as frequencies and biases), but rather embeddings that scale with the number of timestamps. The number of parameters for the two models is compared in Table 1." }, { "heading": "3.1 NON-TEMPORAL PREDICATES", "text": "Some predicates might not be affected by timestamps. For example, Malia and Sasha will always be the daughters of Barack and Michelle Obama, whereas the “has occupation” predicate between two entities might very well change over time. In heterogeneous knowledge bases, where some predicates might be temporal and some might not be, we propose to decompose the tensor X̂ as the sum of two tensors, one temporal and the other non-temporal:\nX̂ = Re([[U, V^t, Ū, T]] + [[U, V, Ū, 1]]) ⇐⇒ X̂_{i,j,k,l} = Re(⟨u_i, v^t_j ⊙ t_l + v_j, ū_k⟩) (3)\nWe call this decomposition TNTComplEx. Goel et al. (2020) suggest another way of introducing a non-temporal component, by only allowing a fraction γ of the components of the embeddings to be modulated in time. By allowing this sharing of parameters between the temporal and non-temporal parts of the tensor, our model removes one hyperparameter. Moreover, preliminary experiments showed that this model outperforms one without parameter sharing." }, { "heading": "3.2 REGULARIZATION", "text": "Any order 4 tensor can be considered as an order 3 tensor by unfolding two modes together. For a tensor X ∈ R^{N1×N2×N3×N4}, unfolding modes 3 and 4 together will lead to a tensor X̃ ∈ R^{N1×N2×N3N4} (Kolda & Bader, 2009).\nWe can see both decompositions ((2) and (3)) as order 3 tensors by unfolding the temporal and predicate modes together; a sketch of the scoring functions (2) and (3) is given below. 
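This is a minimal PyTorch sketch of the scoring functions in Equations (2) and (3); tensor shapes and variable names are our own illustrative choices, not the authors' released code.

```python
import torch

def tcomplex_score(u_s, v_p, u_o, t):
    """Eq. (2): Re(<u_s, v_p, conj(u_o), t>) with complex embeddings."""
    return (u_s * v_p * torch.conj(u_o) * t).sum(-1).real

def tntcomplex_score(u_s, v_p_t, v_p, u_o, t):
    """Eq. (3): the predicate has a temporal and a non-temporal part."""
    return (u_s * (v_p_t * t + v_p) * torch.conj(u_o)).sum(-1).real

# Complex embeddings for one (subject, predicate, object, time) tuple.
d = 2000
u_s, v_p, u_o, t = (torch.randn(d, dtype=torch.cfloat) for _ in range(4))
v_p_t = torch.randn(d, dtype=torch.cfloat)
print(tcomplex_score(u_s, v_p, u_o, t),
      tntcomplex_score(u_s, v_p_t, v_p, u_o, t))
```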
Considering the decomposition implied by these unfoldings (see Appendix 8.1) leads us to the following weighted regularizers (Lacroix et al., 2018):\nΩ3(U, V, T; (i, j, k, l)) = (1/3)(∥u_i∥_3^3 + ∥u_k∥_3^3 + ∥v_j ⊙ t_l∥_3^3) (4)\nΩ3(U, V^t, V, T; (i, j, k, l)) = (1/3)(2∥u_i∥_3^3 + 2∥u_k∥_3^3 + ∥v^t_j ⊙ t_l∥_3^3 + ∥v_j∥_3^3)\nThe first regularizer weights subjects, objects and (predicate, timestamp) pairs according to their respective marginal probabilities. This regularizer is a variational form of the weighted nuclear 3-norm on an order 4 tensor (see subsection 3.4 and Appendix 8.3 for details and proof). The second regularizer is the sum of the nuclear 3-norm penalties on the tensors [[U, V^t, Ū, T]] and [[U, V, Ū]]." }, { "heading": "3.3 SMOOTHNESS OF TEMPORAL EMBEDDINGS", "text": "We have more a priori structure on the temporal mode than on the others. Notably, we expect smoothness of the mapping i ↦ t_i. In words, we expect neighboring timestamps to have close representations. Thus, we penalize the norm of the discrete derivative of the temporal embeddings:\nΛ_p(T) = (1/(|T| − 1)) Σ_{i=1}^{|T|−1} ∥t_{i+1} − t_i∥_p^p. (5)\nWe show in Appendix 8.2 that the sum of Λ_p and the variational form of the nuclear p-norm (6) leads to a variational form of a new tensor atomic norm." }, { "heading": "3.4 NUCLEAR p-NORMS OF TENSORS AND THEIR VARIATIONAL FORMS", "text": "As was done in Lacroix et al. (2018), we aim to use tensor nuclear p-norms as regularizers. The definition of the nuclear p-norm of a tensor (Friedland & Lim, 2018) of order D is:\n∥X∥_{p∗} = inf_{α, R, U^(1), ..., U^(D)} { ∥α∥_1 | X = Σ_{r=1}^R α_r U^(1)_{:,r} ⊗ ⋯ ⊗ U^(D)_{:,r}, ∀r, d: ∥U^(d)_{:,r}∥_p = 1 }.\nThis formulation of the nuclear p-norm writes a tensor as a sum over atoms, which are the rank-1 tensors of unit p-norm factors. The nuclear p-norm is NP-hard to compute (Friedland & Lim, 2018). Following Lacroix et al. (2018), a practical solution is to use the equivalent formulation of the nuclear p-norm through its variational form, which can be conveniently written for p = D:\n∥X∥_{D∗} = (1/D) inf_{X=[[U^(1),...,U^(D)]]} Σ_{d=1}^D Σ_{r=1}^R ∥U^(d)_{:,r}∥_D^D. (6)\nFor the equality above to hold, the infimum should be over all possible R. The practical solution is to fix R to the desired rank of the decomposition. Using this variational formulation as a regularizer leads to state-of-the-art results for order-3 tensors (Lacroix et al., 2018) and is convenient in a stochastic gradient setting because it separates over each model coefficient.\nIn addition, this formulation makes it easy to introduce a weighting, as recommended in Srebro & Salakhutdinov (2010); Foygel et al. (2011). In order to learn under non-uniform sampling distributions, one should penalize the weighted norm ∥(√M^(1) ⊗ √M^(2)) ⊙ X∥_{2∗}, where M^(1) and M^(2) are the empirical row and column marginals of the distribution. The variational form (6) makes this easy, by simply penalizing the rows u^(1)_{i_1}, ..., u^(D)_{i_D} for an observed tuple (i_1, ..., i_D) in stochastic gradient descent. More precisely, for D = 2 and N^(d) the vector holding the observed count of each index over mode d:\n(1/|S|) Σ_{(i,j)∈S} (∥u_i∥_2^2 + ∥v_j∥_2^2) = Σ_i (N^(1)_i/|S|) ∥u_i∥_2^2 + Σ_j (N^(2)_j/|S|) ∥v_j∥_2^2 = Σ_i M^(1)_i ∥u_i∥_2^2 + Σ_j M^(2)_j ∥v_j∥_2^2.\nIn subsection 3.3, we add another penalty in Equation (5) which changes the norm of our atoms. In subsection 3.2, we introduced another variational form in Equation (4), which makes it easy to penalize the nuclear 3-norm of an order 4 tensor. This regularizer leads to a different weighting; a sketch of the temporal smoothness penalty (5) is given below. 
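Before returning to the weighting discussion, this is a minimal PyTorch sketch of the temporal smoothness penalty Λ_p of Equation (5); the function name and the real-valued toy embeddings are our own illustrative choices.

```python
import torch

def temporal_smoothness(T, p=4):
    """Lambda_p(T): mean p-th power of the p-norm of the discrete
    derivatives of timestamp embeddings. T has shape (num_timestamps, rank)."""
    diffs = T[1:] - T[:-1]                       # t_{i+1} - t_i
    return diffs.abs().pow(p).sum(dim=1).mean()  # average over |T| - 1 gaps

T = torch.randn(365, 2000, requires_grad=True)   # e.g., daily timestamps
penalty = temporal_smoothness(T, p=4)
penalty.backward()  # gradients flow into the timestamp embeddings
```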
By considering the unfolding of the timestamp and predicate modes, we are able to weight according to the joint marginal of timestamps and predicates, rather than by the product of the marginals. This can be an important distinction if the two are not independent." }, { "heading": "3.5 EXPERIMENTAL IMPACT OF THE REGULARIZERS", "text": "We study the impact of regularization on the ICEWS05-15 dataset, for the TNTComplEx model. For details on the experimental set-up, see Section 5.1. The first effect we want to quantify is the effect of the regularizer Λ_p. We run a grid search for the strengths of both Λ_p and Ω3 and plot the convex hull as a function of the temporal regularization strength. As shown in Figure 1, imposing smoothness along the time mode brings an improvement of over 2 MRR points.\nThe second effect we wish to quantify is the effect of the choice of the regularizer Ω. A natural regularizer for TNTComplEx would be:\n∆_p(U, V, T; (i, j, k, l)) = (1/p)(2∥u_i∥_p^p + 2∥u_k∥_p^p + ∥v^t_j∥_p^p + ∥t_l∥_p^p + ∥v_j∥_p^p).\nWe compare ∆4, ∆3 and ∆2 with Ω3. The comparison is done with a temporal regularizer of 0 to reduce the experimental space.\n∆2 is the common weight decay frequently used in deep learning. Such regularizers have been used in knowledge base completion (Nickel et al., 2011; 2016b; Trouillon et al., 2016); however, Lacroix et al. (2018) showed that the infimum of this penalty is non-convex over tensors.\n∆3 matches the order used in the Ω3 regularizer, and in previous work on knowledge base completion (Lacroix et al., 2018). However, by the same arguments, its minimization does not lead to a convex penalty over tensors.\n∆4 is the sum of the variational forms of the nuclear 4-norm for the two tensors of order 4 in the TNTComplEx model, according to equation (6).\nDetailed results of the impact of regularization on the performance of the model are given in Figure 1. The two regularizers ∆4 and Ω3 are the only regularizers that can be interpreted as sums of tensor norm variational forms, and they perform better than their lower-order counterparts.\nThere are two differences between ∆4 and Ω3. First, whereas the former is a variational form of the nuclear 4-norm, the latter is a variational form of the nuclear 3-norm, which is closer to the nuclear 2-norm. Results for exact recovery of tensors have been generalized to the nuclear 2-norm, and to the best of our knowledge, there has been no formal study of generalization properties or exact recovery under the nuclear p-norm for p greater than two.\nSecond, the weighting in ∆4 is done separately over timestamps and predicates, whereas it is done jointly for Ω3. This leads to using the joint empirical marginal as a weighting over timestamps and predicates. The impact of weighting on the guarantees that can be obtained is described more precisely in Foygel et al. (2011).\nThe contribution of all these regularizers over a non-regularized model is summarized in Table 3. Note that careful regularization leads to a 0.05 MRR increase." }, { "heading": "4 A NEW DATASET FOR TEMPORAL AND NON-TEMPORAL KNOWLEDGE BASE COMPLETION", "text": "A dataset based on Wikidata was proposed by García-Durán et al. (2018). However, upon inspection, this dataset contains numerical data as entities, such as ELO rankings of chess players, which are not representative of practically useful link prediction problems. 
Also, in this dataset, temporal information is specified in the form of “OccursSince” and “OccursUntil” statements appended to triples, which becomes unwieldy when a predicate holds for several intervals in time. Moreover, this dataset contains only 11k entities and 150k facts, which is insufficient to benchmark methods at scale.\nThe GDelt dataset described in Ma et al. (2018); Goel et al. (2020) holds many triples (2M), but does not describe enough entities (500). In order to address these limitations, we created our own dataset from Wikidata, which we make available along with the code for this paper at https://github.com/facebookresearch/tkbc.\nStarting from Wikidata, we removed all entities that were instances of scholarly articles, proteins and others. We also removed disambiguation, template, category and project pages from Wikipedia. Then, we removed all facts for which the object was not an entity. We iteratively filtered the data, keeping entities that had degree at least 5 and predicates that had at least 50 occurrences. With this method, we obtained a dataset of 432,715 entities, 407 predicates and 1,724 timestamps (we only kept the years). Each datum is a triple (subject, predicate, object) together with a timestamp range (begin, end), where begin, end or both can be unspecified. Our train set contains 7M such tuples, with about 10% partially specified temporal tuples. We kept a validation and test set of size 50k each.\nAt train and test time, for a given datum (subject, predicate, object, [begin, end]), we sample a timestamp (appearing in the dataset) uniformly at random in the range [begin, end]. For data without a temporal range, we sample over the maximum date range. Then, we rank the objects for the partial query (subject, predicate, ?, timestamp)." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "" }, { "heading": "5.1 EXPERIMENTAL SET-UP", "text": "We follow the experimental set-up in García-Durán et al. (2018); Goel et al. (2020). We use models from García-Durán et al. (2018) and Goel et al. (2020) as baselines since they are the best performing algorithms on the datasets considered. We report the filtered Mean Reciprocal Rank (MRR) defined in Nickel et al. (2016b). In order to obtain comparable results, we use Table 1 and dataset statistics to compute, for each (model, dataset) pair, the rank that matches the number of parameters used in Goel et al. (2020). We also report results at ranks 10 times higher. This higher-rank set-up gives an estimation of the best possible performance attainable on these datasets, even though the dimension used might be impractical for applied systems. All our models are optimized with Adagrad (Duchi et al., 2011), with a learning rate of 0.1 and a batch size of 1000. More details on the grid search, actual ranks used and hyper-parameters are given in Appendix 8.7.\nModel | ICEWS14 | ICEWS05-15 | Yago15k\nTA | 0.48 | 0.47 | 0.32\nDE-SimplE | 0.53 | 0.51 | -\nComplEx | 0.47 (0.47) | 0.49 (0.49) | 0.35 (0.36)\nTComplEx | 0.56 (0.61) | 0.58 (0.66) | 0.35 (0.36)\nTNTComplEx | 0.56 (0.62) | 0.60 (0.67) | 0.35 (0.37)\nTable 2: Results for TA (García-Durán et al., 2018) and DE-SimplE (Goel et al., 2020) are the best numbers reported in the respective papers. Our models have as many parameters as DE-SimplE. Numbers in parentheses are for ranks multiplied by 10.\nWe give results on 3 datasets previously used in the literature: ICEWS14, ICEWS05-15 and Yago15k. The ICEWS datasets are samplings of the Integrated Conflict Early Warning System (ICEWS) (Boschee et al., 2015). García-Durán et al. 
(2018) introduced two subsamplings of this data: ICEWS14, which contains all events occurring in 2014, and ICEWS05-15, which contains events occurring between 2005 and 2015. These datasets immediately fit in our framework, since the timestamps are already discretized.\nThe Yago15K dataset (García-Durán et al., 2018) is a modification of FB15k (Bordes et al., 2013) which adds “occursSince” and “occursUntil” timestamps to each triple. Following the evaluation setting of García-Durán et al. (2018), during evaluation, the incomplete triples to complete are of the form (subject, predicate, ?, occursSince | occursUntil, timestamp) (with reciprocal predicates). Rather than deal with tensors of order 5, we choose to unfold the (occursSince, occursUntil) and the predicate modes together, multiplying the latter's size by two.\nSome relations in Wikidata are highly unbalanced (e.g., (?, InstanceOf, Human)). For such relations, a ranking evaluation would not make much sense. Instead, we only compute the Mean Reciprocal Rank for missing right-hand sides, since the data is such that highly unbalanced relations happen on the left-hand side. However, we follow the same training scheme as for all the other datasets, including reciprocal relations in the training set. The cross-entropy loss evaluated on 400k entities puts a restriction on the dimensionality of embeddings at about d = 100 for a batch size of 1000. We leave sampling of this loss, which would allow for higher dimensions, to future work." }, { "heading": "5.2 RESULTS", "text": "We compare ComplEx with the temporal versions described in this paper. We report results in Table 2. Note that ComplEx has performances that are stable through a tenfold increase of its number of parameters: a rank of 100 is enough to capture the static information of these datasets. For temporal models, however, the performance increases a lot with the number of parameters. It is always beneficial to allow a separate modeling of non-temporal predicates, as the performances of TNTComplEx show. Finally, our models match or beat the state of the art on all datasets, even with an identical number of parameters. Since these datasets are small, we also report results for higher ranks (10 times the number of parameters used for DE-SimplE).\nOn Wikidata, 90% of the triples have no temporal data attached. This leads to ComplEx outperforming all temporal models in terms of average MRR, since the Non-Temporal MRR (NT-MRR) far outweighs the Temporal MRR (T-MRR). A breakdown of the performances is available in Table 4. TNTComplEx obtains performances that are comparable to ComplEx on non-temporal triples, but better on temporal triples. Moreover, TNTComplEx can minimize the temporal cross-entropy (7) and is thus more flexible on the queries it can answer.\nTraining TNTComplEx on Wikidata with a rank of d = 100 and the full cross-entropy on a Quadro GP 100, we obtain a speed of 5.6k triples per second, leading to an experiment time of 7.2 hours. This is to be compared with 5.8k triples per second when training ComplEx, for an experiment time of 6.9 hours. The additional complexity of our model does not lead to any real impact on runtime, which is dominated by the computation of the cross-entropy over 400k entities." }, { "heading": "6 QUALITATIVE STUDY", "text": "The instantaneous loss described in equation (1), along with the timestamp sampling scheme described in the previous section, only enforces correct rankings along the “object” tubes of our order-4 tensor; a sketch of this loss is given below. 
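This is a minimal PyTorch sketch of the object-tube cross-entropy of Equation (1); the score helper and batching are our own illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def object_tube_logits(u_s, v_p, t, all_objects):
    """Logits over every candidate object k': Re(<u_s, v_p, conj(o_k'), t>).

    u_s, v_p, t: complex embeddings of shape (batch, rank).
    all_objects: complex embedding table of shape (num_entities, rank).
    """
    return ((u_s * v_p * t) @ torch.conj(all_objects).T).real

batch, rank, n_ent = 32, 200, 1000
u_s, v_p, t = (torch.randn(batch, rank, dtype=torch.cfloat) for _ in range(3))
objects = torch.randn(n_ent, rank, dtype=torch.cfloat)
targets = torch.randint(0, n_ent, (batch,))
# F.cross_entropy computes -logit_gold + logsumexp over k', matching Eq. (1).
loss = F.cross_entropy(object_tube_logits(u_s, v_p, t, objects), targets)
```

The temporal-tube analog of Equation (7), discussed next, swaps the object table for the timestamp table.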
In order to enforce a stronger temporal consistency, and to be able to answer queries of the type (subject, predicate, object, ?), we propose another cross-entropy loss along the temporal tubes:\nℓ̃(X̂; (i, j, k, l)) = −X̂_{i,j,k,l} + log(Σ_{l′} exp(X̂_{i,j,k,l′})). (7)\nWe optimize the sum of ℓ defined in Equation (1) and ℓ̃ defined in Equation (7). Doing so, we only lose 1 MRR point overall. However, we make our model better at answering queries along the time axis. The macro area under the precision-recall curve is 0.92 for a TNTComplEx model learned with ℓ alone and 0.98 for a TNTComplEx model trained with ℓ + ℓ̃.\nWe plot in Figure 2 the scores along time for the train triples (president of the French Republic, office holder, {Jacques Chirac | Nicolas Sarkozy | François Hollande | Emmanuel Macron}, [1980, 2020]). The periods where a score is highest match closely the ground-truth start and end dates of these presidents' mandates, which are represented as colored backgrounds. This shows that our models are able to learn rankings that are correct along time intervals despite our training method only ever sampling timestamps within these intervals." }, { "heading": "7 CONCLUSION", "text": "Tensor methods have been successful for Knowledge Base completion. In this work, we suggest an extension of these methods to Temporal Knowledge Bases. Our methodology adapts well to the various forms of these datasets: point-in-time events, beginnings and endings, or intervals. We show that our methods reach higher performances than the state of the art for a similar number of parameters. For several datasets, we also provide performances for higher dimensions. We hope that the gap between low-dimensional and high-dimensional models can motivate further research in models that have increased expressivity at a lower number of parameters per entity. Finally, we propose a large-scale temporal dataset which we believe represents the challenges of large-scale temporal completion in knowledge bases. We give performances of our methods for low ranks on this dataset. We believe that, given its scale, this dataset could also be an interesting addition to non-temporal knowledge base completion." }, { "heading": "8 APPENDIX", "text": "" }, { "heading": "8.1 UNFOLDING AND THE CP DECOMPOSITION", "text": "Let X = [[U, V, W, T]], that is, X_{i,j,k,l} = ⟨u_i, v_j, w_k, t_l⟩. Then, according to Kolda & Bader (2009), unfolding along modes 3 and 4 leads to an order three tensor with decomposition X̃ = [[U, V, W ◦ T]], where ◦ is the Khatri-Rao product (Smilde et al., 2005), i.e., the column-wise Kronecker product: W ◦ T = (W_{:,1} ⊗ T_{:,1}, ..., W_{:,R} ⊗ T_{:,R}). Note that for a fourth mode of size L: (W ◦ T)_{L(k−1)+l} = w_k ⊙ t_l. This justifies the regularizers used in Section 3.2." }, { "heading": "8.2 TEMPORAL REGULARIZER AND NUCLEAR NORMS", "text": "Consider the penalty:\nΩ(U, V, W, T) = (1/4)(∥U∥_4^4 + ∥V∥_4^4 + ∥W∥_4^4 + ∥T∥_4^4 + α∥T_{1:} − T_{:−1}∥_4^4)\nLet us define a new norm on vectors:\n∥t∥_{τ4} = (∥t∥_4^4 + α∥t_{1:} − t_{:−1}∥_4^4)^{1/4}\n∥·∥_{τ4} is a norm and lets us rewrite:\nΩ(U, V, W, T) = Σ_{r=1}^R (1/4)(∥u_r∥_4^4 + ∥v_r∥_4^4 + ∥w_r∥_4^4 + ∥t_r∥_{τ4}^4).\nFollowing the proof in Lacroix et al. 
(2018), which only uses homogeneity of the norms, we can show that $\Omega(U,V,W,T)$ is a variational form of an atomic norm with atoms:" }, { "heading": "$\mathcal{A} = \{u \otimes v \otimes w \otimes t \mid \|u\|_4, \|v\|_4, \|w\|_4 \le 1 \text{ and } \|t\|_{\tau 4} \le 1\}$", "text": "" }, { "heading": "8.3 NUCLEAR NORMS ON UNFOLDINGS", "text": "We consider the regularizer:
$$\Omega_{N3}(U,V,T;(i,j,k,l)) = \frac{1}{3}\left(\|u_i\|_3^3 + \|u_k\|_3^3 + \|v_j \odot t_l\|_3^3\right).$$
Let $D_{\mathrm{subj}}$ (resp. $D_{\mathrm{obj}}$, $D_{\mathrm{pred/time}}$) be the diagonal matrix containing the cubic roots of the marginal probabilities of each subject (resp. object, predicate/time) in the dataset. We denote by $\circ$ the Khatri-Rao product between two matrices (the column-wise Kronecker product). Summing over the entire dataset, we obtain the penalty:
$$\frac{1}{|S|}\sum_{(i,j,k,l)\in S} \Omega_{N3}(U,V,T;(i,j,k,l)) = \frac{1}{3}\left(\|D_{\mathrm{subj}}U\|_3^3 + \|D_{\mathrm{obj}}U\|_3^3 + \|D_{\mathrm{pred/time}}(V \circ T)\|_3^3\right).$$
Dropping the weightings to simplify notation, we state the equivalence between this regularizer and a variational form of the nuclear 3-norm of an order-4 tensor:
$$\inf_{[[U_1,U_2,U_3,U_4]]=X} \frac{1}{3}\sum_{r=1}^{R}\left(\|u_r^{(1)}\|_3^3 + \|u_r^{(2)}\|_3^3 + \|u_r^{(3)} \otimes u_r^{(4)}\|_3^3\right) = \inf_{[[U_1,U_2,U_3,U_4]]=X} \frac{1}{3}\sum_{r=1}^{R} \prod_{d=1}^{4} \|u_r^{(d)}\|_3.$$
The proof follows Lacroix et al. (2018), noting that $\|u_r^{(3)} \otimes u_r^{(4)}\|_3^3 = \|u_r^{(3)}\|_3^3 \|u_r^{(4)}\|_3^3$. Note that for $D_{\mathrm{pred/time}} = D_{\mathrm{pred}} D_{\mathrm{time}}$, there would also be equality of the weighted norms. However, in the application considered, time and predicate are most likely not independent, leading to different weightings of the norms." }, { "heading": "8.4 DATASET STATISTICS", "text": "Statistics of all the datasets used in this work are gathered in Table 5." }, { "heading": "8.5 DETAILED RESULTS", "text": "" }, { "heading": "8.6 STANDARD DEVIATIONS", "text": "We give the standard deviations of the MRR computed over 5 runs of TNTComplEx on all datasets:

Model       | ICEWS14 | ICEWS05-15 | Yago15k | Wikidata (T) | Wikidata (NT)
TNTComplEx  | 0.0016  | 0.0011     | 0.00076 | 0.0035       | 0.0012
" }, { "heading": "8.7 GRID SEARCH", "text": "For ICEWS14, ICEWS05-15 and Yago15k, we follow the grid search below:
Using Table 1 to compute the number of parameters and the dataset statistics in Table 5, we use the following ranks to match the number of parameters of DE-SimplE in dimension 100:

Model       | ICEWS14 | ICEWS05-15 | Yago15k
DE-SimplE   | 100     | 100        | 100
ComplEx     | 182     | 186        | 196
TComplEx    | 174     | 136        | 194
TNTComplEx  | 156     | 128        | 189
" } ]
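As a closing illustration for this paper, the temporal-smoothness part of the penalty $\Omega$ from Section 8.2 is straightforward to implement. A small sketch follows (PyTorch assumed, real-valued factors in place of the complex ones; purely illustrative):

```python
# Sketch of the weighted 4-norm penalty Omega from Section 8.2, including the
# smoothness term alpha * ||T[1:] - T[:-1]||_4^4 that couples consecutive
# timestamp embeddings.
import torch

def nuclear4_penalty(U, V, W, T, alpha=1.0):
    static = sum((M ** 4).sum() for M in (U, V, W, T))
    temporal = alpha * ((T[1:] - T[:-1]) ** 4).sum()
    return 0.25 * (static + temporal)

U, V, W, T = [torch.randn(10, 4, requires_grad=True) for _ in range(4)]
nuclear4_penalty(U, V, W, T).backward()
```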
2020
TEMPORAL KNOWLEDGE BASE COMPLETION
SP:138632d011d3fcc86cff90f9e2fa8b1929d008cb
[ "The paper presents an improved analysis of the signSGD gradient estimator. The authors propose to relax the requirements on the gradient estimator in Bernstein (2019). The only requirement imposed on the gradient is that it should have the correct sign with probability greater than 1/2. In particular this approach allows the gradient estimate to be biased as opposed to Bernstein (2019) which requires unbiased gradients. The authors also show this condition to be necessary by a small counterexample.", "This paper focuses on signSGD with the aim of improving theoretical understanding of the method. The main contribution of the paper is to identify a condition SPB (success probability bounds), which is necessary for convergence of signSGD and study its connections with the other conditions known in the literature for signSGD analysis. One important point here is that the norm in which the authors show convergence now depends on SPB, meaning that the probabilities in SPB are used to define the norm-like function they use in the theorems." ]
Various gradient compression schemes have been proposed to mitigate the communication cost in distributed training of large scale machine learning models. Sign-based methods, such as signSGD (Bernstein et al., 2018), have recently been gaining popularity because of their simple compression rule and connection to adaptive gradient methods, like ADAM. In this paper, we perform a general analysis of sign-based methods for non-convex optimization. Our analysis is built on intuitive bounds on success probabilities and does not rely on special noise distributions nor on the boundedness of the variance of stochastic gradients. Extending the theory to the distributed setting within a parameter server framework, we establish exponentially fast variance reduction with respect to the number of nodes, maintaining 1-bit compression in both directions and using small mini-batch sizes. We validate our theoretical findings experimentally.
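The 1-bit-in-both-directions scheme mentioned in the abstract can be pictured with a short toy sketch (illustrative NumPy only, not the authors' code; the precise method is Algorithm 2 in Section 4.2 below):

```python
# Toy parameter-server step with 1-bit compression in both directions:
# workers send sign(g_m); the server broadcasts the sign of the majority vote.
import numpy as np

def majority_vote_step(x, worker_grads, gamma):
    votes = np.sum([np.sign(g) for g in worker_grads], axis=0)  # workers -> server
    return x - gamma * np.sign(votes)                           # server -> workers

x = np.zeros(5)
grads = [np.array([0.3, -1.0, 0.2, -0.1, 0.5]) + 0.5 * np.random.randn(5)
         for _ in range(7)]                                     # M = 7 nodes
x = majority_vote_step(x, grads, gamma=0.01)
```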
[]
[ { "authors": [ "Dan Alistarh", "Demjan Grubic", "Jerry Li", "Ryota Tomioka", "Milan Vojnovic" ], "title": "QSGD: Communicationefficient SGD via gradient quantization and encoding", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Lukas Balles", "Philipp Hennig" ], "title": "Dissecting Adam: The sign, magnitude and variance of stochastic gradients", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jeremy Bernstein", "Yu-Xiang Wang", "Kamyar Azizzadenesheli", "Animashree Anandkumar" ], "title": "signSGD: Compressed optimisation for non-convex problems", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jeremy Bernstein", "Jiawei Zhao", "Kamyar Azizzadenesheli", "Animashree Anandkumar" ], "title": "signSGD with majority vote is communication efficient and fault tolerant", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Léon Bottou", "Yann Le Cun" ], "title": "Large scale online learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2003 }, { "authors": [ "David Carlson", "Volkan Cevher", "Lawrence Carin" ], "title": "Stochastic spectral descent for restricted boltzmann machines", "venue": "In International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2015 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "In Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Saeed Ghadimi", "Guanghui Lan" ], "title": "Stochastic first-and zeroth-order methods for nonconvex stochastic programming", "venue": "In SIAM Journal on Optimization,", "year": 2013 }, { "authors": [ "Sai Praneeth Karimireddy", "Quentin Rebjock", "Sebastian Stich", "Martin Jaggi" ], "title": "Error feedback fixes SignSGD and other gradient compression schemes", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Sarit Khirirat", "Hamid Reza Feyzmahdavian", "Mikael Johansson" ], "title": "Distributed learning with compressed gradients", "venue": "In arXiv preprint arXiv:1806.06573,", "year": 2018 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Yujun Lin", "Song Han", "Huizi Mao", "Yu Wang", "William J. 
Dally" ], "title": "Deep gradient compression: Reducing the communication bandwidth for distributed training", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Sijia Liu", "Pin-Yu Chen", "Xiangyi Chen", "Mingyi Hong" ], "title": "signSGD via zeroth-order oracle", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Konstantin Mishchenko", "Eduard Gorbunov", "Martin Takáč", "Peter Richtárik" ], "title": "Distributed learning with compressed gradient differences", "venue": "In arXiv preprint arXiv:1901.09269,", "year": 2019 }, { "authors": [ "Xun Qian", "Peter Richtárik", "Robert Mansel Gower", "Alibek Sailanbayev", "Nicolas Loizou", "Egor" ], "title": "Shulgin. SGD with arbitrary sampling: General analysis and improved rates", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Sashank Reddi", "Satyen Kale", "Sanjiv Kumar" ], "title": "On the convergence of Adam and beyond", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Martin Riedmiller", "Heinrich Braun" ], "title": "A direct adaptive method for faster backpropagation learning: The Rprop algorithm", "venue": "In IEEE International Conference on Neural Networks,", "year": 1993 }, { "authors": [ "Herbert Robbins", "Sutton Monro" ], "title": "A stochastic approximation method", "venue": "In The Annals of Mathematical Statistics,", "year": 1951 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Deep learning in neural networks: An overview", "venue": "In Neural networks,", "year": 2015 }, { "authors": [ "Mark Schmidt", "Nicolas Le Roux" ], "title": "Fast convergence of stochastic gradient descent under a strong growth condition", "venue": "In arXiv preprint arXiv:1308.6370,", "year": 2013 }, { "authors": [ "Mark Schmidt", "Nicolas Le Roux", "Francis Bach" ], "title": "Minimizing finite sums with the stochastic average gradient", "venue": "In Mathematical Programming,", "year": 2017 }, { "authors": [ "Frank Seide", "Hao Fu", "Jasha Droppo", "Gang Li", "Dong Yu" ], "title": "1-bit stochastic gradient descent and application to data-parallel distributed training of speech DNNs", "venue": "In Fifteenth Annual Conference of the International Speech Communication Association,", "year": 2014 }, { "authors": [ "Irina Shevtsova" ], "title": "On the absolute constants in the berry–esseen type inequalities for identically distributed summands", "venue": "In arXiv preprint arXiv:1111.6554,", "year": 2011 }, { "authors": [ "Nikko Strom" ], "title": "Scalable distributed DNN training using commodity GPU cloud computing", "venue": "In Sixteenth Annual Conference of the International Speech Communication Association,", "year": 2015 }, { "authors": [ "Tijmen Tieleman", "Geoffrey E. Hinton" ], "title": "RMSprop. In Coursera: Neural Networks for Machine Learning, Lecture", "venue": null, "year": 2012 }, { "authors": [ "Sharan Vaswani", "Francis Bach", "Mark Schmidt" ], "title": "Fast and faster convergence of SGD for overparameterized models (and an accelerated perceptron)", "venue": "In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, PMLR,", "year": 2019 }, { "authors": [ "Roman Vershynin" ], "title": "High-Dimensional Probability: An Introduction with Applications in Data Science. 
Cambridge Series in Statistical and Probabilistic Mathematics", "venue": null, "year": 2018 }, { "authors": [ "Hongyi Wang", "Scott Sievert", "Shengchao Liu", "Zachary Charles", "Dimitris Papailiopoulos", "Stephen Wright" ], "title": "Atomo: Communication-efficient learning via atomic sparsification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Wei Wen", "Cong Xu", "Feng Yan", "Chunpeng Wu", "Yandan Wang", "Yiran Chen", "Hai Li" ], "title": "Terngrad: Ternary gradients to reduce communication in distributed deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ashia Wilson", "Rebecca Roelofs", "Mitchell Stern", "Nati Srebro", "Benjamin Recht" ], "title": "The marginal value of adaptive gradient methods in machine learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Manzil Zaheer", "Sashank Reddi", "Devendra Sachan", "Satyen Kale", "Sanjiv Kumar" ], "title": "Adaptive methods for nonconvex optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Matthew D. Zeiler" ], "title": "ADADELTA: An Adaptive Learning Rate Method", "venue": "In arXiv e-prints,", "year": 2012 }, { "authors": [ "Hantian Zhang", "Jerry Li", "Kaan Kara", "Dan Alistarh", "Ji Liu", "Ce Zhang" ], "title": "ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "One of the key factors behind the success of modern machine learning models is the availability of large amounts of training data (Bottou & Le Cun, 2003; Krizhevsky et al., 2012; Schmidhuber, 2015). However, the state-of-the-art deep learning models deployed in industry typically rely on datasets too large to fit the memory of a single computer, and hence the training data is typically split and stored across a number of compute nodes capable of working in parallel. Training such models then amounts to solving optimization problems of the form\nminx∈Rd f(x) := 1 M M∑ m=1 fm(x), (1)\nwhere fm : Rd → R represents the non-convex loss of a deep learning model parameterized by x ∈ Rd associated with data stored on node m. Arguably, stochastic gradient descent (SGD) (Robbins & Monro, 1951; Vaswani et al., 2019; Qian et al., 2019) in of its many variants (Kingma & Ba, 2015; Duchi et al., 2011; Schmidt et al., 2017; Zeiler, 2012; Ghadimi & Lan, 2013) is the most popular algorithm for solving (1). In its basic implementation, all workers m ∈ {1, 2, . . . ,M} in parallel compute a random approximation ĝm(xk) of ∇fm(xk), known as the stochastic gradient. These approximations are then sent to a master node which performs the aggregation\nĝ(xk) := 1 M M∑ m=1 ĝm(xk).\nThe aggregated vector is subsequently broadcast back to the nodes, each of which performs an update of the form xk+1 = xk − γkĝ(xk), thus updating their local copies of the parameters of the model." }, { "heading": "1.1 GRADIENT COMPRESSION", "text": "Typically, communication of the local gradient estimators ĝm(xk) to the master forms the bottleneck of such a system (Seide et al., 2014; Zhang et al., 2017; Lin et al., 2018). In an attempt to alleviate this communication bottleneck, a number of compression schemes for gradient updates have been proposed and analyzed (Alistarh et al., 2017; Wang et al., 2018; Wen et al., 2017; Khirirat et al., 2018;\nMishchenko et al., 2019). A compression scheme is a (possibly randomized) mapping Q : Rd → Rd, applied by the nodes to ĝm(xk) (and possibly also by the master to aggregated update in situations when broadcasting is expensive as well) in order to reduce the number of bits of the communicated message.\nSign-based compression. Although most of the existing theory is limited to unbiased compression schemes, i.e., on operators Q satisfying EQ(x) = x, biased schemes such as those based on communicating signs of the update entries only often perform much better (Seide et al., 2014; Strom, 2015; Wen et al., 2017; Carlson et al., 2015; Balles & Hennig, 2018; Bernstein et al., 2018; 2019; Zaheer et al., 2018; Liu et al., 2019). The simplest among these sign-based methods is signSGD (see also Algorithm 1; Option 1), whose update direction is assembled from the component-wise signs of the stochastic gradient.\nAdaptive methods. While ADAM is one of the most popular adaptive optimization methods used in deep learning (Kingma & Ba, 2015), there are issues with its convergence (Reddi et al., 2019) and generalization (Wilson et al., 2017) properties. It was noted in Balles & Hennig (2018) that the behaviour of ADAM is similar to a momentum version of signSGD. Connection between sign-based and adaptive methods has long history, originating at least in Rprop (Riedmiller & Braun, 1993) and RMSprop (Tieleman & Hinton, 2012). Therefore, investigating the behavior of signSGD can improve our understanding on the convergence of adaptive methods such as ADAM." 
}, { "heading": "1.2 CONTRIBUTIONS", "text": "We now summarize the main contributions of this work. Our key results are summarized in Table 1.\n1In fact, bounded variance assumption, being weaker than bounded second moment assumption, is stronger (or, to be strict, more curtain) than SPB assumption in the sense of differential entropy, but not in the direct sense. The entropy of probability distribution under the bounded variance assumption is bounded, while under the SPB assumption it could be arbitrarily large. This observation is followed by the fact that for continuous random variables, the Gaussian distribution has the maximum differential entropy for a given variance (see https://en.wikipedia.org/wiki/Differential_entropy).\n• 2 methods for 1-node setup. In the M = 1 case, we study two general classes of sign based methods for minimizing a smooth non-convex function f . The first method has the standard form2\nxk+1 ← xk − γk sign ĝ(xk), (2)\nwhile the second has a new form not considered in the literature before:\nxk+1 ← arg min{f(xk), f(xk − γk sign ĝ(xk))}. (3)\n• Key novelty. The key novelty of our methods is in a substantial relaxation of the requirements that need to be imposed on the gradient estimator ĝ(xk) of the true gradient ∇f(xk). In sharp contrast with existing approaches, we allow ĝ(xk) to be biased. Remarkably, we only need one additional and rather weak assumption on ĝ(xk) for the methods to provably converge: we require the signs of the entries of ĝ(xk) to be equal to the signs of the entries of ∇f(xk) with a probability strictly larger than 1/2 (see Section 2; Assumption 1). We show through a counterexample (see Section 2.2) that this assumption is necessary.\n• Geometry. As a byproduct of our analysis, we uncover a mixed l1-l2 geometry of sign descent methods (see Section 3).\n• Convergence theory. We perform a complexity analysis of methods (2) and (3) (see Section 4.1; Theorem 1). While our complexity bounds have the same O(1/√K) dependence on the number of iterations, they have a better dependence on the smoothness parameters associated with f . Theorem 1 is the first result on signSGD for non-convex functions which does not rely on mini-batching, and which allows for step sizes independent of the total number of iterations K. Finally, Theorem 1 in Bernstein et al. (2019) can be recovered from our general Theorem 1. Our bounds are cast in terms of a novel norm-like function, which we call the ρ-norm, which is a weighted l1 norm with positive variable weights.\n• Distributed setup. We extend our results to the distributed setting with arbitrary M (Section 4.2), where we also consider sign-based compression of the aggregated gradients." }, { "heading": "2 SUCCESS PROBABILITIES AND GRADIENT NOISE", "text": "In this section we describe our key (and weak) assumption on the gradient estimator ĝ(x) of the true gradient∇f(x), and give an example which shows that without this assumption, method (2) can fail." }, { "heading": "2.1 SUCCESS PROBABILITY BOUNDS", "text": "Assumption 1 (SPB: Success Probability Bounds). For any x ∈ Rd, we have access to an independent (and not necessarily unbiased) estimator ĝ(x) of the true gradient g(x) := ∇f(x) that satisfies\nρi(x) := Prob (sign ĝi(x) = sign gi(x)) > 1 2 , if gi(x) 6= 0 (4)\nfor all x ∈ Rd and all i ∈ {1, 2, . . . , d}.\nWe will refer to the probabilities ρi as success probabilities. As we will see, they play a central role in the convergence of sign based methods. 
We stress that Assumption 1 is the only assumption on gradient noise in this paper. Moreover, we argue that it is reasonable to require the sign of the stochastic gradient to point in the true gradient direction more often than in the opposite one. Extreme cases of this assumption are the absence of gradient noise, in which case $\rho_i = 1$, and an overly noisy stochastic gradient, in which case $\rho_i \approx \frac{1}{2}$. Remark 1. Assumption 1 can be relaxed by replacing bounds (4) with
$$\mathbb{E}\left[\mathrm{sign}\left(\hat{g}_i(x) \cdot g_i(x)\right)\right] > 0, \quad \text{if } g_i(x) \neq 0.$$
However, if $\mathrm{Prob}(\mathrm{sign}\, \hat{g}_i(x) = 0) = 0$ (e.g., when $\hat{g}_i(x)$ has a continuous distribution), then these two bounds are identical.
2sign g is applied element-wise to the entries $g_1, g_2, \ldots, g_d$ of $g \in \mathbb{R}^d$. For $t \in \mathbb{R}$ we define $\mathrm{sign}\, t = 1$ if $t > 0$, $\mathrm{sign}\, t = 0$ if $t = 0$, and $\mathrm{sign}\, t = -1$ if $t < 0$.
Extension to stochastic sign oracle. Notice that we do not require $\hat{g}$ to be unbiased. Moreover, we do not assume uniform boundedness of the variance, or of the second moment. This observation allows us to extend existing theory to more general sign-based methods with a stochastic sign oracle. By a stochastic sign oracle we mean an oracle that takes $x_k \in \mathbb{R}^d$ as an input, and outputs a random vector $\hat{s}_k \in \mathbb{R}^d$ with entries in ±1. However, for the sake of simplicity, in the rest of the paper we will work with the signSGD formulation, i.e., we let $\hat{s}_k = \mathrm{sign}\, \hat{g}(x_k)$." }, { "heading": "2.2 A COUNTEREXAMPLE TO SIGNSGD", "text": "Here we analyze a counterexample to signSGD discussed in Karimireddy et al. (2019). Consider the following least-squares problem with unique minimizer $x^* = (0, 0)$:
$$\min_{x \in \mathbb{R}^2} f(x) = \frac{1}{2}\left[\langle a_1, x\rangle^2 + \langle a_2, x\rangle^2\right], \quad a_1 = \begin{bmatrix} 1+\varepsilon \\ -1+\varepsilon \end{bmatrix}, \quad a_2 = \begin{bmatrix} -1+\varepsilon \\ 1+\varepsilon \end{bmatrix},$$
where $\varepsilon \in (0, 1)$ and the stochastic gradient is $\hat{g}(x) = \nabla \langle a_i, x\rangle^2 = 2\langle a_i, x\rangle a_i$ with probabilities 1/2 for $i = 1, 2$. Let us take any point from the line $l = \{(z_1, z_2) : z_1 + z_2 = 2\}$ as the initial point $x_0$ for the algorithm, and notice that $\mathrm{sign}\, \hat{g}(x) = \pm(1, -1)$ for any $x \in l$. Therefore, signSGD with any step-size sequence remains stuck along the line $l$, whereas the problem has a unique minimizer at the origin.
We now investigate the cause of the divergence. In this counterexample, Assumption 1 is violated. Indeed, note that
$$\mathrm{sign}\, \hat{g}(x) = (-1)^i\, \mathrm{sign}\langle a_i, x\rangle \begin{bmatrix} -1 \\ 1 \end{bmatrix} \quad \text{with probabilities } \tfrac{1}{2} \text{ for } i = 1, 2.$$
Let $S := \{x \in \mathbb{R}^2 : \langle a_1, x\rangle \cdot \langle a_2, x\rangle > 0\} \neq \emptyset$ denote the open cone of points having either an acute or an obtuse angle with both $a_i$'s. Then for any $x \in S$, the sign of the stochastic gradient is $\pm(1, -1)$ with probabilities 1/2. Hence for any $x \in S$, we have low success probabilities:
$$\rho_i(x) = \mathrm{Prob}\left(\mathrm{sign}\, \hat{g}_i(x) = \mathrm{sign}\, g_i(x)\right) \le \frac{1}{2}, \quad i = 1, 2.$$
So, in this case we have an entire conic region with low success probabilities, which clearly violates (4). Furthermore, if we take a point from the complementary open cone $\bar{S}^c$, then the sign of the stochastic gradient equals the sign of the gradient, which is perpendicular to the axis of $S$ (thus in the next step of the iteration we get closer to $S$). For example, if $\langle a_1, x\rangle < 0$ and $\langle a_2, x\rangle > 0$, then $\mathrm{sign}\, \hat{g}(x) = (1, -1)$ with probability 1, in which case $x - \gamma\, \mathrm{sign}\, \hat{g}(x)$ gets closer to the low-success-probability region $S$.
In summary, in this counterexample there is a conic region where the sign of the stochastic gradient is useless (or behaves adversarially), and for any point outside that region, the moving direction (which is the opposite of the gradient sign) leads toward that conic region.
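This stuck-on-a-line behaviour is easy to reproduce numerically; a short sketch (step size, ε and seed are arbitrary choices):

```python
# Simulate signSGD on the counterexample: both stochastic sign vectors are
# (up to a global sign) (1, -1), so x1 + x2 is invariant and the iterates
# never leave the line z1 + z2 = 2.
import numpy as np

rng = np.random.default_rng(0)
eps = 0.5
a = [np.array([1 + eps, -1 + eps]), np.array([-1 + eps, 1 + eps])]

x = np.array([3.0, -1.0])                 # any point on z1 + z2 = 2
for k in range(10_000):
    ai = a[rng.integers(2)]               # stochastic gradient of one term
    g_hat = 2 * (ai @ x) * ai
    x = x - 0.01 / np.sqrt(k + 1) * np.sign(g_hat)

print(x.sum())                            # still ~2.0, not the minimizer (0, 0)
```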
}, { "heading": "2.3 SUFFICIENT CONDITIONS FOR SPB", "text": "To justify our SPB assumption, we show that it holds under general assumptions on gradient noise. Lemma 1 (see B.1). Assume that for any point x ∈ Rd, we have access to an independent and unbiased estimator ĝ(x) of the true gradient g(x). Assume further that each coordinate ĝi has a unimodal and symmetric distribution with variance σ2i = σ 2 i (x), 1 ≤ i ≤ d. Then ρi ≥ 1 2 + 1 2 |gi| |gi|+ √ 3σi > 12 if gi 6= 0.\nNext, we remove the distribution condition and add a strong growth condition (Schmidt & Le Roux, 2013; Vaswani et al., 2019) together with fixed mini-batch size. Lemma 2 (see B.2). Assume that for any point x ∈ Rd, we have access to an independent, unbiased estimator ĝ(x) of the true gradient g(x), with coordinate-wise bounded variances σ2i (x) ≤ c g2i (x) for some constant c. Then, choosing a mini-batch size τ > 2c, we get ρi ≥ 1− c/τ > 12 , if gi 6= 0.\nFinally, we give an adaptive condition on mini-batch size for the SPB assumption to hold. Lemma 3 (see B.3). Assume that for any point x ∈ Rd we have access to an independent and unbiased estimator ĝ(x) of the true gradient g(x). Let σ2i = σ 2 i (x) be the variance and ν 3 i = ν 3 i (x) be the 3th central moment of ĝi(x), 1 ≤ i ≤ d. Then SPB assumption holds if mini-batch size τ > 2 min ( σ2i/g2i , ν 3 i/|gi|σ2i ) ." }, { "heading": "3 A NEW “NORM” FOR MEASURING THE SIZE OF THE GRADIENTS", "text": "In this section we introduce the concept of a norm-like function, which call ρ-norm, induced from success probabilities. Used to measure gradients in our convergence rates, ρ-norm is a technical tool enabling the analysis. Definition 1 (ρ-norm). Let ρ := {ρi(x)}di=1 be the collection of probability functions from the SPB assumption. We define the ρ-norm of gradient g(x) via ‖g(x)‖ρ := ∑d i=1(2ρi(x)− 1)|gi(x)|.\nNote that ρ-norm is not a norm as it may not satisfy the triangle inequality. However, under SPB assumption, ρ-norm is positive definite as it is a weighted l1 norm with positive (and variable) weights 2ρi(x) − 1 > 0. That is, ‖g‖ρ ≥ 0, and ‖g‖ρ = 0 if and only if g = 0. Under the assumptions of Lemma 2, ρ-norm can be lower bounded by a weighted l1 norm with positive constant weights 1− 2c2i > 0: ‖g‖ρ = ∑d i=1(2ρi− 1)|gi| ≥ ∑d i=1(1− 2c2i )|gi|. Under the assumptions of Lemma 1, ρ-norm can be lower bounded by a mixture of the l1 and squared l2 norms:\n‖g‖ρ = d∑ i=1 (2ρi − 1)|gi| ≥ d∑ i=1 g2i |gi|+ √ 3σi := ‖g‖l1,2 . (5)\nNote that l1,2-norm is again not a norm. However, it is positive definite, continuous and order preserving, i.e., for any gk, g, g̃ ∈ Rd we have: i) ‖g‖l1,2 ≥ 0 and ‖g‖l1,2 = 0 if and only if g = 0; ii) gk → g (in l2 sense) implies ‖gk‖l1,2 → ‖g‖l1,2 , and iii) 0 ≤ gi ≤ g̃i for any 1 ≤ i ≤ d implies ‖g‖l1,2 ≤ ‖g̃‖l1,2 . From these three properties it follows that ‖gk‖l1,2 → 0 implies gk → 0. These properties are important as we will measure convergence rate in terms of the l1,2 norm in the case of unimodal and symmetric noise assumption. To understand the nature of the l1,2 norm, consider the following two cases when σi(x) ≤ c|gi(x)|+ c̃ for some constants c, c̃ ≥ 0. If the iterations are in ε-neighbourhood of a minimizer x∗ with respect to the l∞ norm (i.e., max1≤i≤d |gi| ≤ ε), then the l1,2 norm is equivalent to scaled l2 norm squared: 1\n(1+ √ 3c)ε+ √ 3c̃ ‖g‖22 ≤ ‖g‖l1,2 ≤ 1√3c̃‖g‖ 2 2. 
On the other hand, if the iterations are away from a minimizer (i.e., $\min_{1\le i\le d} |g_i| \ge L$), then the $l_{1,2}$-norm is equivalent to a scaled $l_1$ norm:
$$\frac{1}{1+\sqrt{3}(c+\tilde{c}/L)}\|g\|_1 \le \|g\|_{l_{1,2}} \le \frac{1}{1+\sqrt{3}c}\|g\|_1.$$
These equivalences are visible in Figure 1, where we plot the level sets of $g \mapsto \|g\|_{l_{1,2}}$ at various distances from the origin. A similar mixed-norm observation was also made in Bernstein et al. (2019)." }, { "heading": "4 CONVERGENCE THEORY", "text": "Now we turn to our theoretical results on sign-based methods. First we give our general convergence results under the SPB assumption. Afterwards, we present a convergence result in the distributed setting under the unimodal and symmetric noise assumptions.
Throughout the paper we assume that $f : \mathbb{R}^d \to \mathbb{R}$ is lower bounded, i.e., $f(x) \ge f^*$ for all $x \in \mathbb{R}^d$, and is L-smooth with some non-negative constants $L = [L_1, \ldots, L_d]$. That is, we assume that
$$f(y) \le f(x) + \langle \nabla f(x), y - x\rangle + \sum_{i=1}^d \frac{L_i}{2}(y_i - x_i)^2 \quad \text{for all } x, y \in \mathbb{R}^d.$$
We allow f to be nonconvex. Let $\bar{L} := \frac{1}{d}\sum_i L_i$ and $L_{\max} := \max_i L_i$." }, { "heading": "4.1 CONVERGENCE ANALYSIS FOR M = 1", "text": "We now state our convergence result for Algorithm 1 under the general SPB assumption.
Algorithm 1 SIGNSGD
1: Input: step size $\gamma_k$, current point $x_k$
2: $\hat{g}_k \leftarrow$ StochasticGradient($x_k$)
3: Option 1: $x_{k+1} \leftarrow x_k - \gamma_k\, \mathrm{sign}\, \hat{g}_k$
4: Option 2: $x_{k+1} \leftarrow \arg\min\{f(x_k),\, f(x_k - \gamma_k\, \mathrm{sign}\, \hat{g}_k)\}$
Theorem 1 (Non-convex convergence of signSGD, see B.4). Under the SPB assumption, signSGD (Algorithm 1 with Option 1) with step sizes $\gamma_k = \gamma_0/\sqrt{k+1}$ converges as follows:
$$\min_{0 \le k < K} \mathbb{E}\|\nabla f(x_k)\|_\rho \le \frac{1}{\sqrt{K}}\left[\frac{f(x_0) - f^*}{\gamma_0} + \gamma_0 d\bar{L}\right] + \frac{\gamma_0 d\bar{L}}{2}\frac{\log K}{\sqrt{K}}. \quad (6)$$
If $\gamma_k \equiv \gamma > 0$, we get $1/K$ convergence to a neighbourhood of the solution:
$$\frac{1}{K}\sum_{k=0}^{K-1} \mathbb{E}\|\nabla f(x_k)\|_\rho \le \frac{f(x_0) - f^*}{\gamma K} + \frac{d\bar{L}}{2}\gamma. \quad (7)$$
We now comment on the above result:
• Generalization. Theorem 1 is the first general result on signSGD for non-convex functions without mini-batching, and with step sizes independent of the total number of iterations K. Known convergence results (Bernstein et al., 2018; 2019) on signSGD use mini-batches and/or step sizes dependent on K. Moreover, they also use unbiasedness and unimodal symmetric noise assumptions, which are stronger assumptions than our SPB assumption (see Lemma 1). Finally, Theorem 1 in Bernstein et al. (2019) can be recovered from Theorem 1 (see Section D for the details).
• Convergence rate. Rates (6) and (7) can be arbitrarily slow, depending on the probabilities $\rho_i$. This is to be expected. At one extreme, if the gradient noise were completely random, i.e., if $\rho_i \equiv 1/2$, then the ρ-norm would become identically zero for any gradient vector and the rates would be trivial inequalities, leading to divergence as in the counterexample. At the other extreme, if there were no gradient noise, i.e., if $\rho_i \equiv 1$, then the ρ-norm would be just the $l_1$ norm, and from (6) we get the rate $\tilde{\mathcal{O}}(1/\sqrt{K})$ with respect to the $l_1$ norm. However, if we know that $\rho_i > 1/2$, then we can ensure that the method will eventually converge.
• Geometry. The presence of the ρ-norm in these rates suggests that there is no particular geometry (e.g., $l_1$ or $l_2$) associated with signSGD. Instead, the geometry is induced from the success probabilities. For example, in the case of unbiased and unimodal symmetric noise, the geometry is described by the mixture norm $l_{1,2}$.
• Practicality.
The rate (7) (as well as (30)) supports the common learning-rate schedule practice of using a constant step size for a period of time, and then halving the step size and continuing this process.
For a reader interested in comparing Theorem 1 with a standard result for SGD, we state the standard result in Section C. We now state a general convergence rate for Algorithm 1 with Option 2. Theorem 2 (see B.5). Under the SPB assumption, Algorithm 1 (Option 2) with step sizes $\gamma_k = \gamma_0/\sqrt{k+1}$ converges as follows:
$$\frac{1}{K}\sum_{k=0}^{K-1} \mathbb{E}\|\nabla f(x_k)\|_\rho \le \frac{1}{\sqrt{K}}\left[\frac{f(x_0) - f^*}{\gamma_0} + \gamma_0 d\bar{L}\right].$$
In the case of a constant step size $\gamma_k = \gamma > 0$, the same rate as (7) is achieved.
Comparing Theorem 2 with Theorem 1, notice that a small modification in Algorithm 1 can remove the log-dependent factor from (6); we then bound the average of past gradient norms instead of the minimum. On the other hand, in a big data regime, function evaluations in Algorithm 1 (Option 2, line 4) are infeasible. Clearly, Option 2 is useful only when one can afford function evaluations and has rough estimates about the gradients (i.e., signs of stochastic gradients). This option should be considered within the framework of derivative-free optimization." }, { "heading": "4.2 CONVERGENCE ANALYSIS IN DISTRIBUTED SETTING", "text": "In this part we present the convergence result for distributed signSGD (Algorithm 2) with majority vote, introduced in Bernstein et al. (2018). Majority vote is considered within a parameter server framework where, for each coordinate, the parameter server receives one sign from each node and sends back the sign sent by the majority of nodes. Known convergence results (Bernstein et al., 2018; 2019) use O(K) mini-batch sizes as well as O(1/K) constant step sizes. In the sequel we remove these limitations, extending Theorem 1 to distributed training. In the distributed setting, the number of nodes M gets involved in the geometry, introducing a new $\rho_M$-norm, which is defined via the regularized incomplete beta function I (see B.6).
Algorithm 2 DISTRIBUTED SIGNSGD WITH MAJORITY VOTE
1: Input: step sizes $\{\gamma_k\}$, current point $x_k$, # of nodes M
2: on each node
3: $\hat{g}^m(x_k) \leftarrow$ StochasticGradient($x_k$)
4: on server
5: pull $\mathrm{sign}\, \hat{g}^m(x_k)$ from each node
6: push $\mathrm{sign}\left[\sum_{m=1}^M \mathrm{sign}\, \hat{g}^m(x_k)\right]$ to each node
7: on each node
8: $x_{k+1} \leftarrow x_k - \gamma_k\, \mathrm{sign}\left[\sum_{m=1}^M \mathrm{sign}\, \hat{g}^m(x_k)\right]$
Definition 2 ($\rho_M$-norm). Let $M \ge 1$ be the number of nodes and $l = \left[\frac{M+1}{2}\right]$. Define the $\rho_M$-norm of the gradient $g(x)$ at $x \in \mathbb{R}^d$ as $\|g(x)\|_{\rho_M} = \sum_{i=1}^d \left(2I(\rho_i(x); l, l) - 1\right)|g_i(x)|$.
Now we can state the convergence rate of distributed signSGD with majority vote. Theorem 3 (Non-convex convergence of distributed signSGD, see B.6). Under the SPB assumption, distributed signSGD (Algorithm 2) with step sizes $\gamma_k = \gamma_0/\sqrt{k+1}$ converges as follows:
$$\min_{0 \le k < K} \mathbb{E}\|\nabla f(x_k)\|_{\rho_M} \le \frac{1}{\sqrt{K}}\left[\frac{f(x_0) - f^*}{\gamma_0} + \gamma_0 d\bar{L}\right] + \frac{\gamma_0 d\bar{L}}{2}\frac{\log K}{\sqrt{K}}. \quad (8)$$
For constant step sizes $\gamma_k \equiv \gamma > 0$, we have convergence up to a level proportional to the step size γ:
$$\frac{1}{K}\sum_{k=0}^{K-1} \mathbb{E}\|\nabla f(x_k)\|_{\rho_M} \le \frac{f(x_0) - f^*}{\gamma K} + \frac{d\bar{L}}{2}\gamma. \quad (9)$$
Variance Reduction. Using Hoeffding's inequality, we show that $\|g(x)\|_{\rho_M} \to \|g(x)\|_1$ exponentially fast as $M \to \infty$:
$$\left(1 - \exp\left(-(2\rho(x) - 1)^2 l\right)\right)\|g(x)\|_1 \le \|g(x)\|_{\rho_M} \le \|g(x)\|_1,$$
where $\rho(x) = \min_{1\le i\le d} \rho_i(x) > 1/2$. Hence, in some sense, we have exponential variance reduction in terms of the number of nodes (see B.7).
Number of nodes. Notice that theoretically there is no difference between $2l-1$ and $2l$ nodes, and this is not a limitation of the analysis.
Indeed, as shown in the proof, the expected sign vector at the master with $M = 2l-1$ nodes is the same as with $M = 2l$ nodes: $\mathbb{E}\,\mathrm{sign}(\hat{g}^{(2l)}_i \cdot g_i) = \mathbb{E}\,\mathrm{sign}(\hat{g}^{(2l-1)}_i \cdot g_i)$, where $\hat{g}^{(M)}$ is the sum of the stochastic sign vectors aggregated from the nodes. The intuition behind this phenomenon is that majority vote with an even number of nodes, e.g. $M = 2l$, fails to provide any sign with some small probability (the probability that half of the nodes vote for +1 and half vote for −1). However, if we remove one node, e.g. $M = 2l-1$, then the master receives one sign-vote less but gets rid of that small probability of failing the vote (a sum of an odd number of ±1's cannot vanish). These two effects cancel each other, and in expectation we gain no improvement by adding one more node to a parameter-server framework with an odd number of nodes." }, { "heading": "5 EXPERIMENTS", "text": "We verify our theoretical results experimentally using the MNIST dataset with a feed-forward neural network (FNN) and the well-known Rosenbrock (non-convex) function with d = 10 variables:
$$f(x) = \sum_{i=1}^{d-1} f_i(x) = \sum_{i=1}^{d-1} \left[100(x_{i+1} - x_i^2)^2 + (1 - x_i)^2\right], \quad x \in \mathbb{R}^d. \quad (10)$$
The stochastic formulation of the minimization problem for the Rosenbrock function is as follows: at any point $x \in \mathbb{R}^d$ we have access to a biased stochastic gradient $\hat{g}(x) = \nabla f_i(x) + \xi$, where the index i is chosen uniformly at random from $\{1, 2, \ldots, d-1\}$ and $\xi \sim \mathcal{N}(0, \nu^2 I)$ with $\nu > 0$.
Figure 2 illustrates the effect of multiple nodes in distributed training with majority vote. As we can see, increasing the number of nodes improves the convergence rate. It also supports the claim that in expectation there is no improvement from $2l-1$ nodes to $2l$ nodes. Figure 4 illustrates the importance of the SPB assumption for the convergence rate (7) with constant step size. We used four noise levels (one per column) to demonstrate the correlation between success probabilities and the convergence rate. In the first column, the SPB assumption is strongly violated and the corresponding rate diverges. In the second column, the success probabilities still violate the SPB assumption but are close to the threshold, and the rate oscillates. The next columns show the improvement in the rates when the success probabilities are pushed close to 1." }, { "heading": "Appendix: “On Stochastic Sign Descent Methods”", "text": "" }, { "heading": "A EXTRA EXPERIMENTS", "text": "In this section we perform several additional experiments for further insights." }, { "heading": "B PROOFS", "text": "" }, { "heading": "B.1 SUFFICIENT CONDITIONS FOR SPB: PROOF OF LEMMA 1", "text": "Here we state the well-known Gauss inequality on unimodal distributions.3 Theorem 4 (Gauss's inequality). Let X be a unimodal random variable with mode m, and let $\sigma_m^2$ be the expected value of $(X - m)^2$. Then for any positive value of r,
$$\mathrm{Prob}(|X - m| > r) \le \begin{cases} \frac{4}{9}\left(\frac{\sigma_m}{r}\right)^2, & \text{if } r \ge \frac{2}{\sqrt{3}}\sigma_m \\ 1 - \frac{1}{\sqrt{3}}\frac{r}{\sigma_m}, & \text{otherwise.} \end{cases}$$
Applying this inequality to unimodal and symmetric distributions, direct algebraic manipulations give the following bound:
$$\mathrm{Prob}(|X - \mu| \le r) \ge \begin{cases} 1 - \frac{4}{9}\left(\frac{\sigma}{r}\right)^2, & \text{if } \frac{\sigma}{r} \le \frac{\sqrt{3}}{2} \\ \frac{1}{\sqrt{3}}\frac{r}{\sigma}, & \text{otherwise} \end{cases} \ge \frac{r/\sigma}{r/\sigma + \sqrt{3}},$$
where $m = \mu$ and $\sigma_m^2 = \sigma^2$ are the mean and variance of the unimodal, symmetric random variable X, and $r \ge 0$.
Now, using the assumption that each ĝi(x) has unimodal and symmetric distribution, we apply this bound for X = ĝi(x), µ = gi(x), σ2 = σ2i (x) and get a bound for success probabilities\nProb(sign ĝi = sign gi) = { Prob(ĝi ≥ 0), if gi > 0 Prob(ĝi ≤ 0), if gi < 0\n= { 1 2 + Prob(0 ≤ ĝi ≤ gi), if gi > 0 1 2 + Prob(gi ≤ ĝi ≤ 0), if gi < 0\n=\n{ 1 2 + 1 2Prob(0 ≤ ĝi ≤ 2gi), if gi > 0\n1 2 + 1 2Prob(2gi ≤ ĝi ≤ 0), if gi < 0\n= 1\n2 +\n1 2 Prob(|ĝi − gi| ≤ |gi|)\n≥ 1 2 + 1 2 |gi|/σi |gi|/σi + √ 3\n= 1\n2 +\n1\n2 |gi| |gi|+ √ 3σi\nImprovment on Lemma 1 and l1,2 norm: The bound after Gauss inequality can be improved including a second order term\nProb(|X − µ| ≤ r) ≥\n{ 1− 49 ( σ r )2 , if σr ≤ √ 3\n2 1√ 3 r σ , otherwise\n≥ 1− 1 1 + r/ √ 3σ + (r/ √ 3σ)2 .\nIndeed, letting z := r/√3σ ≥ 2/3, we get 1− 49 1 3z2 ≥ 1− 1 1+z+z2 as it reduces to 23z 2− 4z− 4 ≥ 0. Otherwise, if 0 ≤ z ≤ 2/3, then z ≥ 1− 11+z+z2 as it reduces to 1 ≥ 1− z 3. The improvement is tighter as r/σ\nr/σ + √ 3 = 1− 1 1 + r/ √ 3σ ≤ 1− 1 1 + r/ √ 3σ + (r/ √ 3σ)2 .\nHence, continuing the proof of Lemma 1, we get\nProb(sign ĝi = sign gi) ≥ 1− 1\n2\n1\n1 + |gi|/ √ 3σi + (|gi|/ √ 3σi)2\nand we could have defined l1,2-norm in a bit more complicated form as\n‖g‖l1,2 := d∑ i=1 ( 1− 1 1 + |gi|/ √ 3σi + (|gi|/ √ 3σi)2 ) |gi|.\n3see https://en.wikipedia.org/wiki/Gauss%27s_inequality\nB.2 SUFFICIENT CONDITIONS FOR SPB: PROOF OF LEMMA 2\nLet ĝ(τ) be the gradient estimator with mini-batch size τ . It is known that the variance for ĝ(τ) is dropped by at least a factor of τ , i.e.\nE[(ĝ(τ)i − gi) 2] ≤ σ\n2 i\nτ .\nHence, estimating the failure probabilities of sign ĝ(τ) when gi 6= 0, we have\nProb(sign ĝ (τ) i 6= sign gi) = Prob(|ĝ (τ) i − gi| = |ĝ (τ) i |+ |gi|)\n≤ Prob(|ĝ(τ)i − gi| ≥ |gi|)\n= Prob((ĝ (τ) i − gi) 2 ≥ g2i )\n≤ E[(ĝ (τ) i − gi)2] g2i = σ2i τg2i ,\nwhich imples\nρi = Prob(sign ĝi = sign gi) ≥ 1− σ2i τg2i ≥ 1− c τ .\nB.3 SUFFICIENT CONDITIONS FOR SPB: PROOF OF LEMMA 3\nWe will split the derivation into three lemmas providing some intuition on the way. The first two lemmas establish success probability bounds in terms of mini-batch size. Essentially, we present two methods: one works well in the case of small randomness, while the other one in the case of non-small randomness. In the third lemma, we combine those two bounds to get the condition on mini-batch size ensuring SPB assumption. Lemma 4. Let X1, X2, . . . , Xτ be i.i.d. random variables with non-zero mean µ := EX1 6= 0, finite variance σ2 := E|X1 − µ|2 <∞. Then for any mini-batch size τ ≥ 1\nProb ( sign [ 1\nτ τ∑ i=1 Xi\n] = signµ ) ≥ 1− σ 2\nτµ2 . (11)\nProof. Without loss of generality, we assume µ > 0. Then, after some adjustments, the proof follows from the Chebyshev’s inequality:\nProb ( sign [ 1\nτ τ∑ i=1 Xi\n] = signµ ) = Prob ( 1\nτ τ∑ i=1 Xi > 0\n)\n≥ Prob (∣∣∣∣∣1τ τ∑ i=1 Xi − µ ∣∣∣∣∣ < µ )\n= 1− Prob (∣∣∣∣∣1τ τ∑ i=1 Xi − µ ∣∣∣∣∣ ≥ µ )\n≥ 1− 1 µ2 Var\n[ 1\nτ τ∑ i=1 Xi\n]\n= 1− σ 2\nτµ2 ,\nwhere in the last step we used independence of random variables X1, X2, . . . , Xτ .\nObviously, bound (11) is not optimal for big variance as it becomes a trivial inequality. In the case of non-small randomness a better bound is achievable additionally assuming the finitness of 3th central moment.\nLemma 5. Let X1, X2, . . . , Xτ be i.i.d. random variables with non-zero mean µ := EX1 6= 0, positive variance σ2 := E|X1 − µ|2 > 0 and finite 3th central moment ν3 := E|X1 − µ|3 < ∞. 
Then for any mini-batch size τ ≥ 1\nProb ( sign [ 1\nτ τ∑ i=1 Xi\n] = signµ ) ≥ 1\n2\n( 1 + erf ( |µ| √ τ√\n2σ\n) − ν 3\nσ3 √ τ\n) , (12)\nwhere error function erf is defined as\nerf(x) = 2√ π ∫ x 0 e−t 2 dt, x ∈ R.\nProof. Again, without loss of generality, we may assume that µ > 0. Informally, the proof goes as follows. As we have an average of i.i.d. random variables, we approximate it (in the sense of distribution) by normal distribution using the Central Limit Theorem (CLT). Then we compute success probabilities for normal distribution with the error function erf . Finally, we take into account the approximation error in CLT, from which the third term with negative sign appears. More formally, we apply Berry–Esseen inequality4 on the rate of approximation in CLT (Shevtsova, 2011):∣∣∣∣∣Prob ( 1 σ √ τ τ∑ i=1 (Xi − µ) > t ) − Prob (N > t)\n∣∣∣∣∣ ≤ 12 ν3σ3√τ , t ∈ R, where N ∼ N (0, 1) has the standard normal distribution. Setting t = −µ\n√ τ/σ, we get∣∣∣∣∣Prob ( 1 τ τ∑ i=1 Xi > 0 ) − Prob ( N > −µ √ τ σ )∣∣∣∣∣ ≤ 12 ν3σ3√τ . (13) It remains to compute the second probability using the cumulative distribution function of normal distribuition and express it in terms of the error function:\nProb ( sign [ 1\nτ τ∑ i=1 Xi\n] = signµ ) = Prob ( 1\nτ τ∑ i=1 Xi > 0 ) (13)\n≥ Prob ( N > −µ √ τ\nσ\n) − 1\n2\nν3\nσ3 √ τ\n= 1√ 2π ∫ ∞ −µ √ τ/σ e−t 2/2 dt− 1 2 ν3 σ3 √ τ\n= 1\n2\n( 1 + √ 2\nπ ∫ µ√τ/σ 0 e−t 2/2 dt− ν 3 σ3 √ τ )\n= 1\n2\n( 1 + erf ( µ √ τ√\n2σ\n) − ν 3\nσ3 √ τ\n) .\nClearly, bound (12) is better than (11) when randomness is high. On the other hand, bound (12) is not optimal for small randomness (σ ≈ 0). Indeed, one can show that in a small randomness regime, while both variance σ2 and third moment ν3 are small, the ration ν/σ might blow up to infinity producing trivial inequality. For instance, taking Xi ∼ Bernoulli(p) and letting p→ 1 gives ν/σ = O ( (1− p)−1/6 ) . This behaviour stems from the fact that we are using CLT: less randomness implies slower rate of approximation in CLT.\nAs a result of these two bounds on success probabilities, we conclude a condition on mini-batch size for the SPB assumption to hold.\n4see https://en.wikipedia.org/wiki/Berry-Esseen_theorem\nLemma 6. Let X1, X2, . . . , Xτ be i.i.d. random variables with non-zero mean µ 6= 0 and finite variance σ2 <∞. Then\nProb ( sign [ 1\nτ τ∑ i=1 Xi\n] = signµ ) > 1\n2 , if τ > 2 min\n( σ2\nµ2 , ν3 |µ|σ2\n) , (14)\nwhere ν3 is (possibly infinite) 3th central moment.\nProof. First, if σ = 0 then the lemma holds trivially. If ν = ∞, then it follows immediately from Lemma 4. Assume both σ and ν are positive and finite.\nIn case of τ > 2σ2/µ2 we apply Lemma 4 again. Consider the case τ ≤ 2σ2/µ2, which implies µ √ τ√\n2σ ≤ 1. It is easy to check that erf(x) is concave on [0, 1] (in fact on [0,∞)), therefore erf(x) ≥\nerf(1)x for any x ∈ [0, 1]. Setting x = µ √ τ√\n2σ we get\nerf ( µ √ τ√\n2σ\n) ≥ erf(1)√\n2\nµ √ τ\nσ ,\nwhich together with (12) gives\nProb ( sign [ 1\nτ τ∑ i=1 Xi\n] = signµ ) ≥ 1\n2\n( 1 +\nerf(1)√ 2\nµ √ τ\nσ − ν\n3\nσ3 √ τ\n) .\nHence, SPB assumption holds if\nτ >\n√ 2\nerf(1)\nν3\nµσ2 .\nIt remains to show that erf(1) > 1/√2. Convexity of ex on x ∈ [−1, 0] implies ex ≥ 1 + (1− 1/e)x for any x ∈ [−1, 0]. Therefore\nerf(1) = 2√ π ∫ 1 0 e−t 2 dt\n≥ 2√ π ∫ 1 0 ( 1− (1− 1/e)t2 ) dt\n= 2√ π\n( 2\n3 +\n1\n3e\n) >\n2√ 4\n( 2\n3 +\n1\n3 · 3\n) = 7\n9 > 1√ 2 .\nLemma (3) follows from Lemma (6) applying it to i.i.d. data ĝ1i (x), ĝ 2 i (x), . . . 
, ĝ M i (x).\nB.4 CONVERGENCE ANALYSIS: PROOF OF THEOREM 1\nFirst, from L-smoothness assumption we have\nf(xk+1) = f(xk − γk sign ĝk)\n≤ f(xk)− 〈gk, γk sign ĝk〉+ d∑ i=1 Li 2 (γk sign ĝk,i) 2\n= f(xk)− γk〈gk, sign ĝk〉+ dL̄\n2 γ2k,\nwhere gk = g(xk), ĝk = ĝ(xk), ĝk,i is the i-th component of ĝk and L̄ is the average value of Li’s. Taking conditional expectation given current iteration xk gives\nE[f(xk+1)|xk] ≤ f(xk)− γkE[〈gk, sign ĝk〉] + dL̄\n2 γ2k. (15)\nUsing the definition of success probabilities ρi we get\nE[〈gk, sign ĝk〉] = 〈gk,E[sign ĝk]〉 (16)\n= d∑ i=1 gk,i · E[sign ĝk,i] = ∑\n1≤i≤d gk,i 6=0\ngk,i · E[sign ĝk,i] (17)\n= ∑\n1≤i≤d gk,i 6=0\ngk,i (ρi(xk) sign gk,i + (1− ρi(xk))(− sign gk,i)) (18)\n= ∑\n1≤i≤d gk,i 6=0\n(2ρi(xk)− 1)|gk,i| = d∑ i=1 (2ρi(xk)− 1)|gk,i| = ‖gk‖ρ. (19)\nPlugging this into (15) and taking full expectation, we get\nE‖gk‖ρ ≤ E[f(xk)]− E[f(xk+1)] γk + dL̄ 2 γk. (20)\nTherefore\nK−1∑ k=0 γkE‖gk‖ρ ≤ (f(x0)− f∗) + dL̄ 2 K−1∑ k=0 γ2k. (21)\nNow, in case of decreasing step sizes γk = γ0/ √ k + 1\nmin 0≤k<K E‖gk‖ρ ≤ K−1∑ k=0 γ0√ k + 1 E‖gk‖ρ /K−1∑ k=0 γ0√ k + 1\n≤ 1√ K\n[ f(x0)− f∗\nγ0 + dL̄ 2 γ0 K−1∑ k=0 1 k + 1\n]\n≤ 1√ K\n[ f(x0)− f∗\nγ0 + γ0dL̄+\nγ0dL̄\n2 logK ] =\n1√ K\n[ f(x0)− f∗\nγ0 + γ0dL̄\n] + γ0dL̄\n2 logK√ K .\nwhere we have used the following standard inequalities\nK∑ k=1 1√ k ≥ √ K, K∑ k=1 1 k ≤ 2 + logK. (22)\nIn the case of constant step size γk = γ\n1\nK K−1∑ k=0 E‖gk‖ρ ≤ 1 γK [ (f(x0)− f∗) + dL̄ 2 γ2K ] = f(x0)− f∗ γK + dL̄ 2 γ.\nB.5 CONVERGENCE ANALYSIS: PROOF OF THEOREM 2\nClearly, the iterations {xk}k≥0 of Algorithm 1 (Option 2) do not increase the function value in any iteration, i.e. E[f(xk+1)|xk] ≤ f(xk). Continuing the proof of Theorem 1 from (20), we get\n1\nK K−1∑ k=0 E‖gk‖ρ ≤ 1 K K−1∑ k=0 E[f(xk)]− E[f(xk+1)] γk + dL̄ 2 γk\n= 1\nK K−1∑ k=0 E[f(xk)]− E[f(xk+1)] γ0 √ k + 1 + dL̄ 2K K−1∑ k=0 γ0√ k + 1\n≤ 1√ K K−1∑ k=0 E[f(xk)]− E[f(xk+1)] γ0 + γ0dL̄√ K\n= f(x0)− E[f(xK)]\nγ0 √ K\n+ γ0dL̄√ K\n≤ 1√ K\n[ f(x0)− f∗\nγ0 + γ0dL̄\n] ,\nwhere we have used the following inequality K∑ k=1 1√ k ≤ 2 √ K.\nThe proof for constant step size is the same as in Theorem 1.\nB.6 CONVERGENCE ANALYSIS IN DISTRIBUTED SETTING: PROOF OF THEOREM 3\nFirst, denote by I(p; a, b) the regularized incomplete beta function, which is defined as follows\nI(p; a, b) = B(p; a, b)\nB(a, b) =\n∫ p 0 ta−1(1− t)b−1 dt∫ 1\n0 ta−1(1− t)b−1 dt\n, a, b > 0, p ∈ [0, 1]. (23)\nThe proof of Theorem 3 goes with the same steps as in Theorem 1, except the derivation (16)–(19) is replaced by\nE[〈gk, sign ĝ(M)k 〉] = 〈gk,E[sign ĝ (M) k ]〉\n= d∑ i=1 gk,i · E[sign ĝ(M)k,i ]\n= ∑\n1≤i≤d gk,i 6=0\n|gk,i| · E [ sign ( ĝ (M) k,i · gk,i )]\n= ∑\n1≤i≤d gk,i 6=0\n|gk,i| (2I(ρi(xk); l, l)− 1) = ‖gk‖ρM ,\nwhere we have used the following lemma. Lemma 7. Assume that for some point x ∈ Rd and some coordinate i ∈ {1, 2, . . . , d}, master node receives M independent stochastic signs sign ĝmi (x), m = 1, . . . ,M of true gradient gi(x) 6= 0. Let ĝ(M)(x) be the sum of stochastic signs aggregated from nodes:\nĝ(M) = M∑ m=1 sign ĝm.\nThen E [ sign ( ĝ\n(M) i · gi )] = 2I(ρi; l, l)− 1, (24)\nwhere l = [(M+1)/2] and ρi > 1/2 is the success probablity for coordinate i.\nProof. Denote by Smi the Bernoulli trial of node m corresponding to ith coordinate, where “success” is the sign match between stochastic gradient and gradient:\nSmi := { 1, if sign ĝmi = sign gi 0, otherwise ∼ Bernoulli(ρi). 
(25)\nSince nodes have their own independent stochastic gradients and the objective function (or dataset) is shared, then master node receives i.i.d. trials Smi , which sum up to a binomial random variable Si:\nSi := M∑ m=1 Smi ∼ Binomial(M,ρi). (26)\nFirst, let us consider the case when there are odd number of nodes, i.e. M = 2l − 1, l ≥ 1. In this case, taking into account (25) and (26), we have\nProb (\nsign ĝ (M) i = 0\n) = 0,\nρ (M) i := Prob\n( sign ĝ\n(M) i = sign gi ) = Prob(Si ≥ l),\n1− ρ(M)i = Prob ( sign ĝ (M) i = − sign gi ) .\nIt is well known that cumulative distribution function of binomial random variable can be expressed with regularized incomplete beta function:\nProb(Si ≥ l) = I(ρi; l,M − l + 1) = I(ρi; l, l). (27)\nTherefore,\nE [ sign ( ĝ\n(M) i · gi\n)] = ρ\n(M) i · 1 + (1− ρ (M) i ) · (−1)\n= 2ρ (M) i − 1 = 2Prob(Si ≥ l)− 1 = 2I(ρi; l, l)− 1.\nIn the case of even number of nodes, i.e. M = 2l, l ≥ 1, there is a probability to fail the vote Prob ( sign ĝ\n(M) i = 0\n) > 0. However using (27) and properties of beta function5 gives\nE [ sign ( ĝ\n(2l) i · gi )] = Prob(Si ≥ l + 1) · 1 + Prob(Si ≤ l − 1) · (−1)\n= I(ρi; l + 1, l) + I(ρi; l, l + 1)− 1 = 2I(ρi; l, l)− 1\n= E [ sign ( ĝ\n(2l−1) i · gi\n)] .\nThis also shows that in expectation there is no difference between having 2l − 1 and 2l nodes.\nB.7 CONVERGENCE ANALYSIS IN DISTRIBUTED SETTING: VARIANCE REDUCTION\nHere we show exponential variance reduction in distributed setting in terms of number of nodes. We first state the well-known Hoeffding’s inequality: Theorem 5 (Hoeffding’s inequality for general bounded random variables; see (Vershynin, 2018), Theorem 2.2.6). Let X1, X2, . . . , XM be independent random variables. Assume that Xm ∈ [Am, Bm] for every m. Then, for any t > 0, we have\nProb ( M∑ m=1 (Xm − EXm) ≥ t ) ≤ exp ( − 2t 2∑M m=1(Bm −Am)2 ) .\n5see https://en.wikipedia.org/wiki/Beta_function#Incomplete_beta_function\nDefine random variables Xmi , m = 1, 2, . . . ,M showing the missmatch between stochastic gradient sign and full gradient sign from node m and coordinate i:\nXmi := { −1, if sign ĝmi = sign gi 1, otherwise\n(28)\nClearly EXmi = 1− 2ρi and Hoeffding’s inequality gives\nProb ( M∑ m=1 Xmi −M(1− 2ρi) ≥ t ) ≤ exp ( − t 2 2M ) , t > 0.\nChoosing t = M(2ρi − 1) > 0 (because of SPB assumption) yields\nProb ( M∑ m=1 Xmi ≥ 0 ) ≤ exp ( −1 2 (2ρi − 1)2M ) .\nUsing Lemma 24, we get 2I(ρi, l; l)− 1 = E [ sign ( ĝ (M) i · gi )] = 1− Prob ( M∑ m=1 Xmi ≥ 0 ) ≥ 1− exp ( −(2ρi − 1)2l ) ,\nwhich provides the following estimate for ρM -norm:( 1− exp ( −(2ρ(x)− 1)2l )) ‖g(x)‖1 ≤ ‖g(x)‖ρM ≤ ‖g(x)‖1,\nwhere ρ(x) = min1≤i≤d ρi(x) > 1/2." }, { "heading": "C CONVERGENCE RESULT FOR STANDARD SGD", "text": "For comparison, here we state and prove non-convex convergence rates of standard SGD with the same step sizes. Theorem 6 (Non-convex convergence of SGD). Let ĝ be an unbiased estimator of the gradient∇f and assume that E‖ĝ‖22 ≤ C for some C > 0. Then SGD with step sizes γk = γ0/ √ k + 1 converges as follows\nmin 0≤k<K E‖∇f(xk)‖22 ≤ 1√ K\n[ f(x0)− f∗\nγ0 + γ0CLmax\n] + γ0CLmax\n2\nlogK√ K . (29)\nIn the case of constant step size γk ≡ γ > 0\n1\nK K−1∑ k=0 E‖∇f(xk)‖22 ≤ f(x0)− f∗ γK + CLmax 2 γ. (30)\nProof. 
From L-smoothness assumption we have\nE[f(xk+1)|xk] = E[f(xk − γkĝk)|xk]\n≤ f(xk)− E[〈gk, γkĝk〉] + Lmax\n2 γ2kE[‖ĝk‖22]\n= f(xk)− γk‖gk‖22 + Lmax\n2 γ2k E[‖ĝk‖22].\nTaking full expectation, using variance bound assumption, we have\nE[f(xk+1)]− E[f(xk)] ≤ −γk E‖gk‖22 + Lmax\n2 γ2kC\nTherefore γkE‖gk‖22 ≤ E[f(xk)]− E[f(xk+1)] +\nCLmax 2 γ2k\nSumming k = 0, 1, . . . ,K − 1 gives K−1∑ k=0 γkE‖gk‖22 ≤ (f(x0)− f∗) + CLmax 2 K−1∑ k=0 γ2k.\nNow, in case of decreasing step sizes γk = γ0/ √ k + 1\nmin 0≤k<K E‖gk‖22 ≤ K−1∑ k=0 γ0√ k + 1 E‖gk‖22 /K−1∑ k=0 γ0√ k + 1\n≤ 1√ K\n[ f(x0)− f∗\nγ0 + CLmax 2 γ0 K−1∑ k=0 1 k + 1\n]\n≤ 1√ K\n[ f(x0)− f∗\nγ0 + γ0CLmax + γ0CLmax 2 logK ] =\n1√ K\n[ f(x0)− f∗\nγ0 + γ0CLmax\n] + γ0CLmax\n2\nlogK√ K .\nwhere again we have used inequalities (22). In the case of constant step size γk = γ\n1\nK K−1∑ k=0 E‖gk‖22 ≤ 1 γK [ (f(x0)− f∗) + CLmax 2 γ2K ] = f(x0)− f∗ γK + CLmax 2 γ." }, { "heading": "D RECOVERING THEOREM 1 IN (BERNSTEIN ET AL., 2019) FROM", "text": "THEOREM 1\nTo recover Theorem 1 in (Bernstein et al., 2019), first note that choosing a particular step size γ in (7) yields\n1\nK K−1∑ k=0 E‖gk‖ρ ≤ √ 2dL̄(f(x0)− f∗) K , with γ = √ 2(f(x0)− f∗) dL̄K . (31)\nThen, due to Lemma 1, under unbiasedness and unimodal symmetric noise assumption, we can lower bound general ρ-norm by mixed l1,2 norm. Finally we further lower bound our l1,2 norm to obtain the mixed norm used in Theorem 1 of Bernstein et al. (2019): let Hk = {1 ≤ i ≤ d : σi < √ 3/2|gk,i|}\n5\n√ dL̄(f(x0)− f∗)\nK ≥ 5√\n2\n1\nK K−1∑ k=0 E‖gk‖ρ\n≥ 5√ 2 1 K K−1∑ k=0 E‖gk‖l1,2 = 5√ 2 1 K K−1∑ k=0 [ d∑ i=1 g2i |gi|+ √ 3σi ]\n≥ 5√ 2 1 K K−1∑ k=0 E 2 5 ∑ i∈Hk |gk,i|+ √ 3 5 ∑ i/∈Hk g2k,i σi ≥ 1 K K−1∑ k=0 E ∑ i∈Hk |gk,i|+ ∑ i/∈Hk g2k,i σi\n ." }, { "heading": "E STOCHASTIC SIGNSGD", "text": "Our experiments and the counterexample show that signSGD might fail to converge in general. What we proved is that SPB assumption is roughly a necessary and sufficient for general convergence. There are several ways to overcome SPB assumption and make signSGD to work in general, e.g.\nscaled version of signSGD with error feedback (Karimireddy et al., 2019). Here we to present a simple way of fixing this issue, which is more natural to signSGD. The issue with signSGD is that sign of stochastic gradient is biased, which also complicates the analysis.\nWe define stochastic sign operator s̃ign, which unlike the deterministic sign operator is unbiased with appropriate scaling factor.\nDefinition 3 (Stochastic Sign). Define the stochastic sign operator s̃ign : Rd → Rd as( s̃ign g ) i = { +1, with prob. 12 + 1 2 gi ‖g‖2\n−1, with prob. 12 − 1 2 gi ‖g‖2\n, 1 ≤ i ≤ d,\nand s̃ign0 = 0 with probability 1.\nFurthermore, we define stochastic compression operator C : Rd → Rd as C(x) = ‖x‖2 · s̃ignx, which compresses rd bits to r + d bits (r bits per one floating point number). Then for any unbiased estimator ĝ we get\nE [C(ĝ)] = E [E[C(ĝ) | ĝ]] = E [ ‖ĝ‖2 ( 1\n2 +\n1\n2\nĝ\n‖ĝ‖2\n) − ‖ĝ‖2 ( 1\n2 − 1 2 ĝ ‖ĝ‖2\n)] = E[ĝ] = g,\nVar [C(ĝ)] = E [ ‖C(ĝ)− ĝ‖22 ] = E [ ‖C(ĝ)‖22 ] − E [ ‖ĝ‖22 ] = (d− 1)E‖ĝ‖22.\nUsing this relations, any analysis for SGD can be repeated for stochastic signSGD giving the same convergence rate with less communication and with (d− 1) times worse coefficients. Another scaled version of signSGD investigated in Karimireddy et al. (2019) uses non-stochastic compression operator C′ : Rd → Rd defined as C′(x) = ‖x‖1d signx. It is shown (see Karimireddy et al. 
(2019), Theorem II) to converge as
$$\frac{1}{K}\sum_{k=0}^{K-1} \mathbb{E}\|\nabla f(x_k)\|_2^2 \le \frac{2(f(x_0) - f^*)}{\gamma K} + \frac{\gamma L_{\max} C}{2} + 4d(d-1)\gamma^2 L_{\max}^2 C,$$
where the error of the current gradient compression is stored to be used in the next step. On the other hand, adapting the analysis of Theorem 6 to the stochastic compression operator $\mathcal{C}$, we get the bound
$$\frac{1}{K}\sum_{k=0}^{K-1} \mathbb{E}\|\nabla f(x_k)\|_2^2 \le \frac{f(x_0) - f^*}{\gamma K} + \frac{\gamma L_{\max} C d}{2},$$
where no data needs to be stored. Furthermore, ignoring the factor 2 in the first term, the latter bound is better if $\gamma \ge 1/(8dL_{\max})$." } ]
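The stochastic sign compressor of Definition 3 above translates directly into code; a sketch with an empirical unbiasedness check (illustrative NumPy only; the Monte-Carlo check is an addition, not from the paper):

```python
# Stochastic sign compressor C(x) = ||x||_2 * stoch_sign(x), where coordinate i
# is +1 with probability 1/2 + x_i / (2 ||x||_2) and -1 otherwise.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_sign_compress(x):
    norm = np.linalg.norm(x)
    if norm == 0.0:
        return np.zeros_like(x)
    p_plus = 0.5 + 0.5 * x / norm              # valid since |x_i| <= ||x||_2
    signs = np.where(rng.random(x.shape) < p_plus, 1.0, -1.0)
    return norm * signs

x = np.array([0.3, -0.8, 0.1])
avg = np.mean([stochastic_sign_compress(x) for _ in range(100_000)], axis=0)
# avg ~ x, consistent with the unbiasedness E[C(x)] = x shown above
```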
2019
null
SP:c9affd2ef30c7b4e0873aeb5783105b8ea6c056b
[ "The authors propose a new pooling layer, LaPool, for hierarchical graph representation learning (Ying et al., 2019) by clustering nodes around centroids that are selected based on \"signal intensity variation\". The signal intensity variation of node x is defined as sum_{y in HOP(x, h)} ||x - y|| where HOP(x, h) is the set of nodes reachable from x within h hops. Once top k maximizers are selected as centroids (k can be predetermined or dynamically chosen), a sparse cluster assignment distribution is computed for each node using sparsemax (Laha et al., 2018), and the affinity matrix and the node embeddings are coarsened as in Ying et al. (2019). The authors show that LaPool can improve performance in various graph-related tasks over baselines and generate interpretable clusters. ", "The paper introduces a new pooling approach \"Laplacian pooling\" for graph neural networks, which the authors claim is able to better preserve information about the local structure, and to provide interpretability. Namely, the pooling approach is based on finding centroids (nodes having high signal variation compared to their neighbors, via graph-Laplacian) and assigning other nodes to be \"followers\" based on a soft-attention mechanism. The authors add these new pooling layers to existing GNN architectures and show improved performance on problems of classification and generative modeling of molecular graphs. The paper also extends CNN interpretability techniques (integrated gradients) to GNNs." ]
Recent work in graph neural networks (GNNs) has led to improvements in molecular activity and property prediction tasks. Unfortunately, GNNs often fail to capture the relative importance of interactions between molecular substructures, in part due to the absence of efficient intermediate pooling steps. To address these issues, we propose LaPool (Laplacian Pooling), a novel, data-driven, and interpretable hierarchical graph pooling method that takes into account both node features and graph structure to improve molecular understanding. We benchmark LaPool and show that it not only outperforms recent GNNs on molecular graph understanding and prediction tasks but also remains highly competitive on other graph types. We then demonstrate the improved interpretability achieved with LaPool using both qualitative and quantitative assessments, highlighting its potential applications in drug discovery.
[]
[ { "authors": [ "Rim Assouel", "Mohamed Ahmed", "Marwin H. Segler", "Amir Saffari", "Yoshua Bengio" ], "title": "DEFactor: Differentiable Edge Factorization-based Probabilistic Graph Generation", "venue": "[cs],", "year": 2018 }, { "authors": [ "Siheng Chen", "Rohan Varma", "Aliaksei Sandryhaila", "Jelena Kovačević" ], "title": "Discrete Signal Processing on Graphs: Sampling Theory", "venue": "IEEE Transactions on Signal Processing,", "year": 1941 }, { "authors": [ "Siheng Chen", "Rohan Varma", "Aarti Singh", "Jelena Kovačević" ], "title": "Signal representations on graphs: Tools and applications", "venue": "arXiv preprint arXiv:1512.05406,", "year": 2015 }, { "authors": [ "Siheng Chen", "Rohan Varma", "Aarti Singh", "Jelena Kovačević" ], "title": "Signal Representations on Graphs: Tools and Applications. arXiv:1512.05406 [cs, math], December 2015c", "venue": "URL http://arxiv.org/abs/1512", "year": 2015 }, { "authors": [ "National Research Council" ], "title": "Toxicity testing in the 21st century: a vision and a strategy", "venue": null, "year": 2007 }, { "authors": [ "Shahaf Dafna", "Carlos Guestrin" ], "title": "Learning Thin Junction Trees via Graph Cuts", "venue": "In Artificial Intelligence and Statistics,", "year": 2009 }, { "authors": [ "Nicola De Cao", "Thomas Kipf" ], "title": "MolGAN: An implicit generative model for small molecular graphs", "venue": "ICML 2018 workshop on Theoretical Foundations and Applications of Deep Generative Models,", "year": 2018 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Hongyang Gao", "Shuiwang Ji. Graph U-Net. under review", "September" ], "title": "URL https://openreview", "venue": "net/forum?id=HJePRoAct7.", "year": 2018 }, { "authors": [ "Justin Gilmer", "Samuel S. Schoenholz", "Patrick F. Riley", "Oriol Vinyals", "George E. Dahl" ], "title": "Neural Message Passing for Quantum Chemistry", "venue": "[cs],", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Mark S Gordon", "Dmitri G Fedorov", "Spencer R Pruitt", "Lyudmila V Slipchenko" ], "title": "Fragmentation methods: A route to accurate calculations on large systems", "venue": "Chemical reviews,", "year": 2011 }, { "authors": [ "Alex Graves", "Greg Wayne", "Ivo Danihelka" ], "title": "Neural turing machines", "venue": "arXiv preprint arXiv:1410.5401,", "year": 2014 }, { "authors": [ "Mikael Henaff", "Joan Bruna", "Yann LeCun" ], "title": "Deep Convolutional Networks on Graph-Structured Data", "venue": "[cs],", "year": 2015 }, { "authors": [ "Wengong Jin", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Junction Tree Variational Autoencoder for Molecular Graph Generation. 
arXiv:1802.04364 [cs, stat], February 2018", "venue": "URL http://arxiv.org/abs/1802.04364", "year": 2018 }, { "authors": [ "Artur Kadurin", "Sergey Nikolenko", "Kuzma Khrabrov", "Alex Aliper", "Alex Zhavoronkov" ], "title": "drugan: an advanced generative adversarial autoencoder model for de novo generation of new molecules with desired molecular properties in silico", "venue": "Molecular pharmaceutics,", "year": 2017 }, { "authors": [ "Steven Kearnes", "Kevin McCloskey", "Marc Berndl", "Vijay Pande", "Patrick Riley" ], "title": "Molecular graph convolutions: moving beyond fingerprints", "venue": "Journal of computer-aided molecular design,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Matt J Kusner", "Brooks Paige", "José Miguel Hernández-Lobato" ], "title": "Grammar variational autoencoder", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Anirban Laha", "Saneem Ahmed Chemmengath", "Priyanka Agrawal", "Mitesh Khapra", "Karthik Sankaranarayanan", "Harish G Ramaswamy" ], "title": "On Controllable Sparse Alternatives to Softmax", "venue": null, "year": 2016 }, { "authors": [ "Qimai Li", "Zhichao Han", "Xiao-Ming Wu" ], "title": "Deeper Insights into Graph Convolutional Networks for SemiSupervised Learning. arXiv:1801.07606 [cs, stat], January 2018a", "venue": "URL http://arxiv.org/abs/ 1801.07606", "year": 2018 }, { "authors": [ "Yibo Li", "Liangren Zhang", "Zhenming Liu" ], "title": "Multi-objective de novo drug design with conditional graph generative model", "venue": "Journal of Cheminformatics,", "year": 2018 }, { "authors": [ "Yibo Li", "Liangren Zhang", "Zhenming Liu" ], "title": "Multi-objective de novo drug design with conditional graph generative model", "venue": "Journal of cheminformatics,", "year": 2018 }, { "authors": [ "Yujia Li", "Daniel Tarlow", "Marc Brockschmidt", "Richard Zemel" ], "title": "Gated graph sequence neural networks", "venue": "arXiv preprint arXiv:1511.05493,", "year": 2015 }, { "authors": [ "Yujia Li", "Oriol Vinyals", "Chris Dyer", "Razvan Pascanu", "Peter Battaglia. Learning Deep Generative Models of Graphs." ], "title": "ISSN 2326-8298", "venue": "doi: 10.1146/annurev-statistics-010814-020120. URL http: //arxiv.org/abs/1803.03324.", "year": 2018 }, { "authors": [ "Yao Ma", "Suhang Wang", "Charu C. Aggarwal", "Jiliang Tang" ], "title": "Graph Convolutional Networks with EigenPooling. arXiv:1904.13107 [cs, stat], April 2019", "venue": "URL http://arxiv.org/abs/1904.13107", "year": 1904 }, { "authors": [ "André F.T. Martins", "Ramón Fernandez Astudillo" ], "title": "From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label Classification. 
arXiv:1602.02068 [cs, stat], February 2016", "venue": "URL http://arxiv.org/ abs/1602.02068", "year": 2068 }, { "authors": [ "Tim Miller" ], "title": "Explanation in artificial intelligence: Insights from the social sciences", "venue": "Artificial Intelligence,", "year": 2018 }, { "authors": [ "Marcus Olivecrona", "Thomas Blaschke", "Ola Engkvist", "Hongming Chen" ], "title": "Molecular de-novo design through deep reinforcement learning", "venue": "Journal of Cheminformatics,", "year": 2017 }, { "authors": [ "Phillip E Pope", "Soheil Kolouri", "Mohammad Rostami", "Charles E Martin", "Heiko Hoffmann" ], "title": "Explainability methods for graph convolutional neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Raghunathan Ramakrishnan", "Pavlo O Dral", "Matthias Rupp", "O Anatole Von Lilienfeld" ], "title": "Quantum chemistry structures and properties of 134 kilo molecules", "venue": "Scientific data,", "year": 2014 }, { "authors": [ "David Rogers", "Mathew Hahn" ], "title": "Extended-connectivity fingerprints", "venue": "Journal of chemical information and modeling,", "year": 2010 }, { "authors": [ "Nadine Schneider", "Roger A Sayle", "Gregory A Landrum" ], "title": "Get your atoms in order - an open-source implementation of a novel and robust molecular canonicalization algorithm", "venue": "Journal of chemical information and modeling,", "year": 2015 }, { "authors": [ "David I Shuman", "Sunil K Narang", "Pascal Frossard", "Antonio Ortega", "Pierre Vandergheynst" ], "title": "The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains", "venue": "arXiv preprint arXiv:1211.0053,", "year": 2012 }, { "authors": [ "Sangeetha Subramaniam", "Monica Mehrotra", "Dinesh Gupta" ], "title": "Virtual high throughput screening", "venue": "(vhts)-a perspective. Bioinformation,", "year": 2008 }, { "authors": [ "Mukund Sundararajan", "Ankur Taly", "Qiqi Yan" ], "title": "Axiomatic attribution for deep networks", "venue": "CoRR, abs/1703.01365,", "year": 2017 }, { "authors": [ "Ulrike von Luxburg" ], "title": "A Tutorial on Spectral Clustering", "venue": "[cs],", "year": 2007 }, { "authors": [ "Zhenqin Wu", "Bharath Ramsundar", "Evan N Feinberg", "Joseph Gomes", "Caleb Geniesse", "Aneesh S Pappu", "Karl Leswing", "Vijay Pande" ], "title": "Moleculenet: a benchmark for molecular machine learning", "venue": "Chemical science,", "year": 2018 }, { "authors": [ "Zonghan Wu", "Shirui Pan", "Fengwen Chen", "Guodong Long", "Chengqi Zhang", "Philip S Yu" ], "title": "A comprehensive survey on graph neural networks", "venue": null, "year": 1901 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How Powerful are Graph Neural Networks? arXiv:1810.00826 [cs, stat], October 2018", "venue": "URL http://arxiv.org/abs/1810.00826", "year": 2018 }, { "authors": [ "Rex Ying", "Jiaxuan You", "Christopher Morris", "Xiang Ren", "William L. 
Hamilton", "Jure Leskovec" ], "title": "Hierarchical Graph Representation Learning with Differentiable Pooling", "venue": "In Proceedings of the 32Nd International Conference on Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jiaxuan You", "Bowen Liu", "Zhitao Ying", "Vijay Pande", "Jure Leskovec" ], "title": "Graph convolutional policy network for goal-directed molecular graph generation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jiaxuan You", "Rex Ying", "Xiang Ren", "William L Hamilton", "Jure Leskovec" ], "title": "Graphrnn: Generating realistic graphs with deep auto-regressive models", "venue": "arXiv preprint arXiv:1802.08773,", "year": 2018 }, { "authors": [ "Muhan Zhang", "Zhicheng Cui", "Marion Neumann", "Yixin Chen" ], "title": "An end-to-end deep learning architecture for graph classification", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Li" ], "title": "2018d) to capture the interdependency between them, we used a simple MLP that takes the latent code z as input an pass it through two fully connected layers (128, 64). The output of those layers is used as shared embedding for two networks: one predicting the upper triangular entries of the edge tensor, and the second predicting the node features tensor. This results in faster convergence", "venue": "(You et al., 2018b; Assouel et al.,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Following the recent rise of deep learning for image and speech processing, there has been great interest in generalizing convolutional neural networks to arbitrary graph-structured data (Gilmer et al., 2017; Henaff et al., 2015; Xu et al., 2018). To this end, graph neural networks (GNN) falling into either spectral-based or spatial-based approaches have been proposed. Spectral methods define the graph convolution (GC) as a filtering operator of the graph signal (Defferrard et al., 2016), while spatial methods define the GC as a message passing and aggregation across nodes (Henaff et al., 2015; Xu et al., 2018; Jin et al., 2018). In drug discovery, GNNs have been very successful across several molecular graph classification and generation tasks. In particular, they outperform predetermined molecular fingerprints and string-based approaches for molecular property prediction and the de novo generation of drug-like compounds (Jin et al., 2018; Li et al., 2018b).\nHowever, the node feature update performed by GNNs introduces important limitations. For instance, experimental results indicate a performance decrease for deeper GNNs due to the signal smoothing effect of each GC layer (Li et al., 2018a). This limits the network’s depth and restricts the receptive field of the vertices in the graph to a few-hop neighborhood, which is insufficient to properly capture local structures, relationships between nodes, and subgraph importance in sparse graphs such as molecules. For example, at least three consecutive GC layers are needed for atoms at the opposite side of a benzene ring to exchange information. This issue is exacerbated by the single global pooling step performed at the end of most GNNs that ignores any hierarchical structure within the graph.\nTo cope with these limitations, graph coarsening (pooling) methods have been proposed to reduce graph size and enable long-distance interaction between nodes. The earliest proposed methods relied solely on deterministic clustering of the graphs, making them non-differentiable and task-independent (Jin et al., 2018; Dafna and Guestrin, 2009; von Luxburg, 2007; Ma et al., 2019). In contrast, more recent methods use node features but, as we will show, are unable to preserve the graph structures after pooling (Ying et al., 2018; Gao and Ji, 2018), limiting their interpretability.\nBorrowing from theory in graph signal processing, we propose LaPool (Laplacian Pooling), a differentiable pooling method that takes into account both the graph structure and its node features. LaPool performs a dynamic and hierarchical segmentation of graphs by selecting a set of centroid nodes as cluster representatives (centroids) using the graph Laplacian, then learns a sparse assignment of the remaining nodes (followers) into these clusters using an attention mechanism. The primary contributions of this paper are summarized below:\n• We propose a novel and differentiable pooling module (LaPool) that can be incorporated into existing GNNs to yield more expressive networks for molecular data. • We propose a graph structure understanding dataset for benchmarking GNNs that is based\non molecular substructure prediction. • We show that LaPool outperforms recently proposed graph pooling layers on both discrimi-\nnative and generative molecular graph benchmarks, while also remaining competitive on other graph benchmarks. 
• We highlight the improved interpretability achieved by LaPool using both qualitative and\nquantitative assessments.\nWe argue that the enhanced performance and interpretability achieved by LaPool can improve our understanding of molecular structure-activity relationships, and therefore has important applications in drug discovery." }, { "heading": "2 RELATED WORK", "text": "In this section, we first introduce related work on graph pooling, then provide an overview of techniques used in computational drug discovery to put our work into context. As our focus herein is on graph pooling, we refer readers to (Wu et al., 2019) for an overview of recent progress in GNNs.\nIn traditional GNN architectures, global sum/average/max-pooling layers have been used to aggregate node embeddings into a graph-level representation. Recently, more sophisticated methods have been proposed. For example, Li et al. (2015) uses a gated mechanism, Zhang et al. (2018) proposed SortPool which sorts node features before feeding them into a 1D convolution, while in (Gilmer et al., 2017) node feature averaging was substituted by a Set2Set architecture. Although these new global aggregation methods have been shown to outperform standard global pooling, they completely overlook the rich structural information on graphs which has been shown as necessary for building effective GNN models (Ying et al., 2018; Ma et al., 2019).\nConsequently, hierarchical graph pooling methods have been proposed. They act by reducing graph size and increasing node receptive fields without increasing network depth. However, in contrast to the regular structure of images, graphs are irregular and complex, making it challenging to properly pool nodes together. Certain hierarchical graph pooling methods therefore rely on deterministic and non-differentiable clustering to segment the graph (Defferrard et al., 2016; Jin et al., 2018). More recently, differentiable hierarchical graph pooling layers have been proposed. Ying et al. (2018) proposed DiffPool, a pooling layer that performs a similarity-based node clustering using a soft affinity matrix learned by a GNN. Likewise, Graph U-Net was proposed in (Gao and Ji, 2018) as a sampling method that retains and propagates only the top-k nodes at each pooling step based on a learned importance value.\nIn computer-aided drug discovery, methods such as virtual screening and de novo drug design serve as efficient complements to physical high-throughput screening of large molecular libraries. For example, virtual screening, which aims to accurately predict molecular properties directly from molecular structure, can play an important role in rapidly triaging promising compounds early in drug discovery (Subramaniam et al., 2008). Importantly, data-driven virtual screening approaches that leverage advances in deep learning, rather than pre-determined features such as molecular fingerprints (Rogers and Hahn, 2010) and SMILES string representations, have been shown to dramatically improve prediction accuracy (Kearnes et al., 2016; Wu et al., 2018). Similarly, advances in generative models have enabled the application of deep generative techniques such as VAE (Kingma and Welling, 2013) and GAN (Goodfellow et al., 2014) to the de novo design of drug-like molecules. The first molecular generative models (e.g. Grammar-VAE (Kusner et al., 2017)) resorted to generating string representations of molecules (via SMILES), which resulted in a high proportion of invalid structures due to the complex syntax of SMILES. 
Graph generative models have since been developed (e.g. JT-VAE (Jin et al., 2018), GraphVAE (Simonovsky and Komodakis), MolGAN (De Cao and Kipf, 2018), MolMP (Li et al., 2018b), etc.) and have been shown to improve the validity and novelty of generated molecules. In addition, these methods allow conditional molecule generation via Bayesian optimization or reinforcement learning (Jin et al., 2018; Olivecrona et al., 2017; Assouel et al., 2018; Li et al., 2018d; You et al., 2018a). In this work, we are mainly interested in the impact of molecular representation on generative performance as opposed to the optimization procedure itself." }, { "heading": "3 GRAPH LAPLACIAN POOLING", "text": "A reliable pooling operator should maintain the overall structure and connectivity of a graph. LaPool achieves this by taking into account the local structure defined by the neighborhood of each node. As shown in Figure 1, the method uses a standard GC layer with a centroid selection and a follower selection step. First, the centroids of the graph are selected based on the local signal variation (see Section 3.2). Next, LaPool learns an affinity matrix C using a distance normalized attention mechanism to assign all nodes of the graph to the centroids (see Section 3.3). Finally, the affinity matrix is used to coarsen the graph. These steps are detailed below." }, { "heading": "3.1 PRELIMINARIES", "text": "Notation Let G = 〈V,A,X〉 be an undirected graph, where V = { v1, . . . vn } is its vertex set,\nA ∈ {0, 1}n×n denotes its adjacency matrix, and X = [ x1, . . .xn ]ᵀ ∈ Rn×d is the node feature matrix with each node vi having d-dimensional feature xi. X can also be viewed as a d-dimensional signal on G (Shuman et al., 2012). Without loss of generality we may assume a fixed ordering of the nodes that is respected in V , A, and X . The neighborhood of radius h (or h-hop) neighborhood of a node vi ∈ V is the set of nodes separated from vi by a path of length at most h and is denoted by N h(vi). For simplicity, we will use N (vi) to refer to the set of nodes adjacent to vi. Graph Signal For any graph G, its graph Laplacian matrix L is defined as L = D −A, where D is a diagonal matrix with Di,i being the degree of node vi. The graph Laplacian is a difference operator and can be used to define the smoothness s(X) (the extent at which the signal changes between connected nodes) of a signal X on G. For 1-dimensional signal f = [f1, . . . , fn] :\ns(f) = (fᵀLf) = 1\n2 n∑ i,j Ai,j(fi − fj)2 (1)\nGraph Neural Networks We consider GNNs that act in the graph spatial domain as message passing (Gilmer et al., 2017). We focus on the Graph Isomorphism Network (GIN) (Xu et al., 2018), which uses a SUM-aggregator on messages received by each node to achieve a better understanding of the graph structure:\nxli = M l Θ x(l−1)i + ∑ vj∈N (vi) A (l−1) i,j x (l−1) j (2) where MlΘ is a neural network with trainable parameters Θ, xi is the feature vector for node vi, vj ∈ N (vi) are the neighbors of vi and l is the layer number. Notice the term Ai,j that takes into account the edge weight between nodes vi and vj when A is not binary.\nIn this work, we focus mainly on molecular graphs in supervised settings where, given a molecule m and its corresponding molecular graph Gm, we aim to predict some properties of m. Molecular graphs present two particularities: (1) they are often sparse and (2) there is no regularity in the graph signal (non-smooth variation) as adjacent nodes tend not to have similar features." 
}, { "heading": "3.2 GRAPH DOWNSAMPLING", "text": "This section details how LaPool downsamples the original graph by selecting a set VC of nodes as centroids after l consecutive GC layers.\nCentroid Selection For any given vertex vi, we can define a local measure si of intensity of signal variation around vi. As si measures how different the signal residing at a node vi is from its neighbors, we are interested in the set VC of nodes that have an intensity of signal variation si greater than their neighborhood. In this work, we use a definition of local signal variation similar to the local normalized neighboring signal variation described in (Chen et al., 2015b), with the only difference being the absence of degree normalization:\nsi = ∥∥∑ j∈N (vi) Ai,j(xi − xj) ∥∥ 2 , S = [ s1, . . . sn ]ᵀ = ‖LX‖2,Rd VC = topV (LS, k) (3)\nwhere topV (L · S, k) corresponds to the top k nodes with the greatest intensity of signal variation, and where || · ||2,Rd corresponds to taking the vector norms over the d-dimensional rows of LX . Instead of using the direct neighbors, one can also generalize the computation of S in equation 3 to an h−hop neighborhood by taking powers of the Laplacian. Observe that the GC layers preceding each pooling step perform a smoothing of the graph signal and thus act as a low-pass filter. Eq. (3) emphasizes instead the high variation regions, resulting overall in a filtering of X that attenuates low and high-frequency noise, yet retains the important signal information. The intuition of using the Laplacian maxima for selecting the centroids is that a smooth signal can be very well approximated using a linear interpolation between its local maxima and minima. This is in contrast with most approaches in GSP that use the lower frequencies for signal conservation but requires the signal to be k-bandlimited (Ma et al., 2019; Chen et al., 2015c;a). For a 1D signal, LaPool selects points, usually near the maxima/minima, where the derivative changes the most and is hardest to interpolate linearly. For molecular graphs, this often corresponds to sampling a subset of nodes critical for reconstructing the original molecule.\nDynamic Selection of the Centroids The method presented in Eq. (3) implies the selection of k centroids. Because the optimal value of k can be graph-dependent and might result in densely located centroids, we explore alternative in which we dynamically select the nodes with signal variation si greater than its neighbors sj :\nVC = {vi ∈ V | ∀ vj , si −Aijsj > 0} (4)" }, { "heading": "3.3 LEARNING THE NODE-TO-CLUSTER ASSIGNMENT MATRIX", "text": "Once the set VC of centroid nodes is determined, we compute a mapping of the remaining “follower” nodes VF = V \\ VC into the new clusters formed by the nodes in VC . This mapping gives the cluster assignment C = [ c1, ...cn ]ᵀ ∈ [0, 1]n×m s.t. ∀i : 1cᵀi = 1, where each row ci corresponds to the affinity of node vi towards each of the m clusters in VC .\nLetX(l) be the node embedding matrix at an arbitrary layer andX(l)C the embedding of the “centroids”. 
We compute C using a soft-attention mechanism (Graves et al., 2014) measured by the cosine similarity between X(l) and X(l)C :\nci = δi,j if vi ∈ VCsparsemax(βi xi(l)·X(l)C‖xi(l)‖‖X(l)C ‖ ) otherwise (5)\nwhere δi,j is the Kronecker delta and sparsemax (Laha et al., 2016; Martins and Astudillo, 2016), is an alternative to the softmax operator defined by:\nsparsemax(z) = arg min p∈∆K−1 ‖p − z‖2 which corresponds to the euclidean projection of z onto the probability simplex ∆K−1 = { p ∈ RK |1ᵀp = 1,p ≥ 0 } . The sparsemax operator ensures the sparsity of the attention coefficients and encourages the assignment of each node to a single centroid. It further alleviates the need for entropy minimization as done in DiffPool.\nEq. (5) also prevents the selected centroid nodes from being assigned to other clusters. Moreover, notice the term βi that regularizes the value of the attention for each node. We can define βi = 1di,VC\n, where di,VC is the shortest path distance between each node vi ∈ VF and centroids in VC . Although this regularization incurs a cost O(|V |2|VC |), it will strengthen the affinity to closer centroids and ensure the connectivity of the resulting pooled graph. Note that this regularization is considered as a hyperparameter of the layer and can be turned off, or alternatively, the mapping can be restricted to centroids within a fixed h−hop neighborhood of each node.\nFinally, after Cl is computed at layer l, the coarsened graph G(l+1) = 〈V (l+1), A(l+1), X(l+1)〉 is computed using Eq. (6), as in (Ying et al., 2018). In these equations, MΨ is a neural network with trainable parameters Ψ that is used to update the embedding of nodes in G(l+1) after the mapping.\nA(l+1) = C(l) ᵀ A(l)C(l) ∈ R|V (l) C |×|V (l) C |, X(l+1) = MΨ ( C(l) ᵀ X(l) ) (6)\nThis process can be repeated by feeding the new graph G(l+1) into another GNN layer." }, { "heading": "3.4 PROPERTIES OF THE LAPOOL METHOD", "text": "Preservation of Structural Information In addition to identifying graph nodes as centroids for pooling in a data-driven way, LaPool retains the feature content of the other nodes in a graph via the soft-assignment of followers to their centroids.\nSubstructure Identification By construction, the soft assignment of nodes to centroids clusters existing substructures of the graph together, thus identifying important subgraphs according to the classification task. By controlling for differences in signal variations within neighborhoods, we encourage these clusters to be spread out across different areas of the graph.\nDynamic Cluster Dimension As discussed in section 3.2, LaPool offers the unique flexibility of determining the clustering dynamically, when training graphs sequentially, or statically when performing batch training.\nPermutation Invariance It is trivial to show that LaPool is permutation invariant as long as the GNN used as its basis is permutation invariant, since both the graph downsampling (Eq. 3,4) and the node mapping (Eq. 5,6) are not affected by any permutation on the vertex set.\nEmphasizing the Strong Features Similar to how most CNNs implement a max-pooling layer to emphasize the strong features, LaPool does so by selecting the nodes with high signal as centroids. For molecular graphs, the centroids are biased towards high degree nodes and atoms different than their neighbors (e.g. a Nitrogen in a Carbon ring)." 
}, { "heading": "4 RESULTS AND DISCUSSION", "text": "A fundamental objective of LaPool is to learn an interpretable representation of sparse graphs, notably molecular substructures. We argue that this is an essential step towards shedding light upon the decision process within neural networks and ultimately increasing their utility in the design of new drug-like molecules. This implies that GNNs should be able to identify semantically important substructure components from molecular graphs, and eventually reconstruct these graphs from such components. This stems from the intuition that molecular validity and functional properties derive more from chemical fragments than individual atoms.\nOur experimental results thus aim to empirically demonstrate the following properties of LaPool, as benchmarked against current state-of-the-art pooling models and the Graph Isomorphism Network:\n• LaPool’s consideration of semantically important information such as node distance translates to improved performance on molecular understanding and molecular activity prediction tasks. • Visualization of LaPool’s behaviour at the pooling layer demonstrates its ability to identify\ncoherent and meaningful molecular substructures. • The hierarchical representation enforced by LaPool, which preserves the original graph\nstructure improves model interpretability.\n• Learning meaningful substructures can be leveraged to construct a generative model which leads to more realistic and feasible molecules.\nThroughout our experiments, we use the same architecture for all models to ensure an even comparison across all pooling layers: 2 layer GNNs with 128 filters each before the optional pooling layer, followed by 2 GNNs with 64 filters each and two fully connected layers including the output layer. Detailed information on architectural tuning and pooling-specific hyper-parameter search are provided in Supplemental Section A." }, { "heading": "4.1 BENCHMARK ON MOLECULAR GRAPH UNDERSTANDING", "text": "DiffPool and Graph U-Net models have been shown to outperform standard graph convolution networks on several graph benchmark datasets (Ying et al., 2018; Gao and Ji, 2018). Although not explicitly stated, both methods are most effective when the graph signal is smooth. In such cases where adjacent nodes tend to be similar, the DiffPool procedure will cluster together nodes in the same neighborhood, maintaining the overall graph structure, while Graph U-net will select nodes in the same neighborhood and will not create isolated components that no longer exchange information. On molecular graphs, however, the graph signal is rarely smooth. Therefore, we expect these two methods to be less effective at identifying the important molecular substructures,given that they do not explicitly consider structural relationships. We demonstrate this empirically by extracting known molecular substructure information from publicly available molecular datasets and evaluating performance in identifying these structures. We use a subset of approximately 17,000 molecules extracted from the ChEMBL database (Li et al., 2018c) and benchmark all methods on different types of substructures to verify the robustness of the comparison.\nAs shown in Table 1, the capture of structural relationships translates into superior performance of LaPool, as measured across standard metrics on various substructure prediction tasks. 
Indeed, we find that for predicting the presence of both 86 molecular fragments arising purely from structural information, as well as 55 structural alerts associated with molecule toxicity, LaPool globally outperforms other pooling models and GIN for the F1 and ROC-AUC metrics (micro-averaged to deal with high class imbalance). In particular, on the harder and extremely imbalanced molecular alerts prediction task, all models performed poorly compared to LaPool, suggesting that the hierarchical representation learned by LaPool helps to achieve a better understanding of the molecular graphs." }, { "heading": "4.2 EXPERIMENTS ON STANDARD GRAPH CLASSIFICATION BENCHMARKS", "text": "In addition to evaluating molecular structural understanding of the pooling models, we benchmark our model on molecular toxicity prediction using the TOX21 dataset (Council et al., 2007). We further conduct experiments on non-molecular benchmark graph datasets (DD, PROTEINS, FRANKENSTEIN), which usually contain larger and often denser graphs compared to molecular graphs (see Supplemental section C for dataset statistics). For TOX21, we report the test ROC-AUC averaged over 5-folds (following the 80-10-10 split proportion used in Wu et al. (2018)), while we follow prior work (Ying et al., 2018) on the remaining datasets by reporting the best accuracy on a 10-fold cross-validation.\nAs shown in Table 2, LaPool outperforms all other approaches on the well known TOX21 dataset and on the PROTEINS and FRANKENSTEIN, both of which contain non-molecular graphs with size similar to the TOX21 molecules. In particular, on the PROTEINS dataset LaPool achieved an\naccuracy of 83.83, representing a significant gap relative to DiffPool, its closest competitor, at 77.25. This suggests that the LaPool method is not restricted to molecular data but has broad applicability, especially in the context of sparse graph classification." }, { "heading": "4.3 MOLECULAR GENERATION", "text": "We aim to showcase LaPool’s utility in drug discovery by demonstrating that it can be leveraged to generate molecules. In previous work, GANs and VAEs were used to generate either string or graph representations of molecules. Here, we use the GAN-based Wasserstein Auto-Encoder recently proposed in (Tolstikhin et al., 2017) to model the data distribution of molecules (see Figure 1 in Supplemental Material). For the encoder, we use a similar network architecture as in our supervised experiments. The decoder and discriminator are simple MLPs, with complete architectural details provided in Supplemental Section A.4. Although the encoder is permutation invariant, the decoding process may not be. To force the decoder to learn a single graph ordering, we use a canonicalization algorithm (Schneider et al., 2015) that reorders atoms to ensure a unique graph for each molecule. We further improve the robustness of our generative model to node permutations by computing the reconstruction loss using a permutation-invariant embedding, parameterized by a GIN, on both the input and reconstructed graphs (see Supplemental Section A.4.2). We find that such a formulation improves the reconstruction loss and increases the ratio of valid molecules generated.\nDataset and Baseline Models Following previous work on molecular generation, we evaluate our generative model with an encoder enhanced by the LaPool layer (referred to as WAE-LaP) on the QM9 molecular dataset (Ramakrishnan et al., 2014). 
This dataset contains 133,885 small drug-like organic compounds with up to 9 heavy atoms (C, O, N, F). We compare WAE-LaP to alternatives within our WAE framework where either no pooling is used (WAE-GNN) or where DiffPool is used as the pooling layer (WAE-Diff). Our results are also compared to previous results on the same dataset, including Grammar-VAE, GraphVAE, and MolGAN.\nEvaluation Metrics We measure the performance of the generative model using metrics standard in the field: validity (proportion of valid molecules from generated samples), uniqueness (proportion of unique molecules generated), and novelty (proportion of generated samples not found in the training set). All metrics were computed on a set of 10,000 generated molecules.\nAs shown in Table 3, WAE-LaP generated the most valid and unique molecules compared to all WAEbased generative models with slightly lower but similar novelty. Although MolGAN performed best on the novelty metric, it has among the lowest percentage of unique molecules. We hypothesize that the decrease in novelty observed with LaPool might be a result of its pooling mechanism encouraging fragment novelty during sampling, thus limiting novelty resulting from rearrangement at the atom level. Nevertheless, as all WAE-based methods produced similar proportions of novel molecules, our results still suggest that combining LaPool with other generative approaches could improve the uniqueness and validity of generated compounds. We therefore conclude that the pooling performed by LaPool can improve molecular graph representation, which is crucial in a generative setting." }, { "heading": "4.4 IMPROVED INTERPRETABILITY", "text": "To better understand the insights provided by LaPool, we conduct model interpretability experiments on molecular fragment prediction. Here we refer to interpretability as the degree to which a human (in this context, a medicinal chemist) can understand the cause of the model’s decision (Miller, 2018). This explains our focus on fragment prediction since in that setting an “interpretable model” that achieves high performance would need to first understand the graph structure and the relationship between nodes." }, { "heading": "Graph U-net", "text": "We first investigate the behavior of LaPool and DiffPool by analyzing the clustering made at the pooling layer level. This comparison was limited to DiffPool since Graph U-net does not perform a node clustering. We argue that an interpretable pooling layer should preserve the overall graph structure after pooling and produce meaningful clusters that could provide insight into the contribution of each molecular subgraph from the perspective of an expert chemist. While defining what is meaningful is inherently subjective, we attempt to shed light on these models by observing their behavior in the drug discovery domain, using our understanding of chemical structure as reference.\nWe show in Figure 2 that LaPool is able to coarsen the molecular graphs into sparsely connected graphs, which can be interpreted as the skeleton of the molecules. Indeed, the data-driven dynamic segmentation it performed is akin to chemical fragmentation (Gordon et al., 2011). In contrast, DiffPool’s cluster assignment is more uniform across the graph, leading to densely connected coarsened graphs which are less interpretable from a chemical viewpoint. In particular, it fails in the presence of molecular symmetry, as it encourages the mapping of nodes with similar features to the same clusters. 
This is illustrated in both example (c) which shows how DiffPool creates a fully connected graph from an originally disconnected graph, and example (b) which shows how symmetric elements, despite being far from each other, are assigned identically. On the other hand, we observe that Graph U-net ignores the graph structure, typically disconnecting it. It also appears very biased toward selecting atoms in similar environment to ones already selected. Such failures are not present when using LaPool, since the dynamic centroid selection and the subsequent distance regularization enforce preservation of the molecular graph structure. A typical failure case for LaPool is seen in (e) and corresponds to a missing centroid node in a given region of the graph, which results in a soft assignment of the region to multiple clusters. However, this behavior is inherent to most DiffPool samples since the fixed number of clusters and the inability to consider node distance cannot account for the diversity of molecular datasets." }, { "heading": "C25", "text": "In addition to assessing the quality of clustering performed by LaPool and DiffPool, we attempt to directly target interpretability by computing an explanation of each model decision and comparing it to a ground truth. We design a simple experiment in which we predict the presence of either Epoxide, Thioepoxide, or Aziridine substructures (denoted by the molecular pattern “C1[O,S,N]C1”), that are indicative of molecular toxicity. Interpretability is therefore defined as the accuracy of the importance attributed by each model to relevant substructures of the input molecules, given the presence of the underlying ground truth fragment we wish to predict. Similar to (Pope et al., 2019), we adapt an existing explainability method for CNNs to GNNs. Specifically, we choose to compute the integrated gradient (Sundararajan et al., 2017) over the input node features due to its stability and robustness in the presence of zero-value features (see Supplemental section C for discussion and alternate approach). Next, we derive an importance score for each node using the L1-norm of the feature attribution map for the node. By both qualitatively observing samples from the data (Figure 3), and by measuring the PR-AUC over the computed importance values given the ground truth to assess the ability to distinguish between important and non-important nodes (Table 4), we find LaPool to more robustly identify the salient structure, resulting in improved overall interpretability. An interesting outcome of this experiment is the performance of Graph U-net, ranked second best. Such performance is a direct result of using a cluster size large enough to cover the toxic fragment size." }, { "heading": "5 CONCLUSION", "text": "In this work, we have proposed LaPool, a differentiable and robust pooling operator for molecular and sparse graphs that considers both node information and graph structure. In doing so, we have proposed a method which is able to identify important substructures of a graph by leveraging the graph Laplacian. 
In contrast with previous work, this method retains the connectivity structure and\nfeature information of the graph during the coarsening procedure, while encouraging nodes belonging to the same substructure to be mapped together in the coarsened graph.\nIncorporating the proposed pooling layer into existing graph neural networks, we have demonstrated that the enforced hierarchization allows for the capture of a richer and more relevant set of features at the graph-level representation. We discussed the performance of LaPool relative to existing graph pooling layers and demonstrated on both molecular graph classification and generation benchmarks that LaPool outperforms existing graph pooling modules and produces more interpretable results. In particular, we argue that the molecular graph segmentation performed by LaPool provides greater insight into molecular activity and that the associated properties can be leveraged in drug discovery. Finally, we show that although LaPool was designed for molecular graphs, it generalizes well to other graph types. In future work, we aim to investigate how additional sources of information such as edge features could be incorporated into the graph pooling process." }, { "heading": "A NETWORK ARCHITECTURE AND TRAINING PROCEDURE", "text": "Below, we describe the network architecture and the training procedure used for both supervised and generative experiments." }, { "heading": "A.1 EDGE ATTRIBUTES", "text": "Part of the work presented assumes the absence of edge attributes in the graphs. However, in molecular graphs, the nature of a bond between two atoms plays an important role regarding activity and property. As such, edge types should be considered, especially in generative models. To consider this, we add to our network an initial Edge-GC layer described in the following.\nLet G = 〈V,E,X〉 be an undirected molecular graph, such that E = [E1, . . . Ek] ∈ {0, 1}e×n×n where n is the number of nodes in the graph and e is the number of possible edge. We have that∑\n1≤i≤e\nE::i = A ∈ {0, 1}n×n (7)\nwhere A is the adjacency matrix of the graph. The Edge GC layer is simply defined as :\nY = MΘ1(E1, X)‖ . . . ‖MΘe(Ee, X) (8)\nwhere ‖ is the concatenation operator on the node feature dimension and MΘ1 are graph neural networks parameterized to learn different features for each edge type. A new graph defined as G′ = 〈V,A, Y 〉 can then be feed into the subsequent layers of the network." }, { "heading": "A.2 ATOM AND EDGE FEATURES", "text": "In our experiments, the initial node feature tensor is represented by a one-hot encoding of atoms (ignoring hydrogens) within the respective datasets and additional properties such as the atom implicit valence, its formal charge, number of radical electrons and whether it is in a molecular ring. For edge attributes, we consider the single, double and triple bond, which were enough to cover all molecules in the datasets, given that feature extraction was preceded by kekulization of molecules." }, { "heading": "A.3 SUPERVISED EXPERIMENTS", "text": "In all of our supervised experiments, we use a graph convolution module consisting of two graph convolutional layers of 128 channels each with ReLU activation; followed by an optional hierarchical graph pooling layer; then two additional graph convolution layers (64) with skip connection to introduce jumping knowledge and a gated global graph pooling layer (Li et al., 2015) to yield a graph-level representation. 
This is further followed by one fully connected layers (128) with batch normalization and ReLU activation, finalized by a linear output layer with appropriate activation for the task readouts. Notice that we used one pooling layer, since no noticeable improvement was observed when using more in our experimental setting.\nFor DiffPool, we performed a hyperparameter search to find the optimal number of clusters (12.5%, 25%, 50% of the maximum number of nodes in the batch (Ying et al., 2018)). A similar search is also performed for the Graph U-net pooling layer. For LaPool, we consider the same number of clusters and the dynamic node seelction. We also performed a grid search over the window size k used as regularization to prevent nodes from mapping to centroids that are more than k-hop away as an alternative to the distance-regularized version. The grid search was performed for k ∈ {1, 2, 3}. For the supervised experiments, we use a batch size of 64 and train the networks for 100 epochs, with early stopping." }, { "heading": "A.4 GENERATIVE MODELS", "text": "" }, { "heading": "A.4.1 WAE MODEL", "text": "We use a Wasserstein Auto-Encoder (WAE) as our generative model (see Figure S1. The WAE minimizes a penalized form of the Wasserstein distance between a model distribution and a target\nG NN\nL ay\ner\nEG\nNN L\nay er\nG ra\nph P\noo lin\ng\nG -S\num Po\nol\nFC L\nay er\nO O\nNH\nSE\nG O\nO\nNH\n0/1 D\nG NN\nL ay\ner\n... ...\nl1\nl2\na) Adversarial autoencoder (AAE) b) Encoder network\nFigure S1: Model architecture for the generative model. (a) We use a WAE, in which a generator (auto-encoder) progressively learns the true molecular data distribution. (b) Architecture used for the encoder network.\ndistribution, and has been shown to improve learning stability. As described in (Tolstikhin et al., 2017), we aim to minimize the following objective:\ninf Q(Z|X)∈Q EPXEQ(Z|X)[cost(X,G(Z)] + λDZ(QZ , PZ) (9)\nwhere Q is any nonparametric set of probabilistic encoders, DZ is the Jensen-Shannon divergence between the learned latent distribution QZ and prior PZ , and λ > 0 is a hyperparameter. DZ is estimated using adversarial training (discriminator).\nFor our generative model, the encoder follows a similar structure as the network used for our supervised experiments, with the exception being that the network now learns a continuous latent space qΨ(z|G) given a set of input molecular graphs G = {G1, · · · , Gn}. More precisely, it consists of one edge graph layer, followed by two GCs (32 channels each), an optional hierarchical graph pooling, two additional GC layers (64, 64), then one global sum pooling step (128) and two fully connected layers (128), meaning the molecular graphs are embedded into a latent space of dimension 128. Instead of modeling the node/edge decoding with an autoregressive framework as done in recent works (You et al., 2018b; Assouel et al., 2018; Li et al., 2018d) to capture the interdependency between them, we used a simple MLP that takes the latent code z as input an pass it through two fully connected layers (128, 64). The output of those layers is used as shared embedding for two networks: one predicting the upper triangular entries of the edge tensor, and the second predicting the node features tensor. This results in faster convergence.\nFor the discriminator, we use a simple MLP that predicts whether the latent code comes from a normal prior distribution z N (0, 1). 
This MLP is constituted by two stacked FCLs (64, 32) followed by an output layer with sigmoid activation. As in (Kadurin et al., 2017), we do not use batch-normalization, since it resulted in a mismatch between the discriminator and the generator.\nAll models use the same basic generative architecture, with the only difference being the presence of a pooling-layer and its associated parameters. For DiffPool, we fixed the number of cluster to three, while for LaPool, we use the distance-based regularization." }, { "heading": "A.4.2 RECONSTRUCTION LOSS", "text": "For each input molecular graph G = 〈V,E,X〉, the decoder reconstruct a graph G̃ = 〈Ṽ , Ẽ, X̃〉. Since we use a canonical ordering (available in RDKit) to construct G from the SMILES representation of molecules, the decoder is forced to learn how to generate a graph under this order. Therefore, the decoding process is not necessarily able to consider permutations on the vertices set, and generation of isomorphic graphs will be heavily penalized in the reconstruction loss. In (Simonovsky and Komodakis), the authors use an expensive graph matching procedure to overcome that limitation. We argue that it suffices to compute the reconstruction loss on γ(G) and γ(G̃), where γ is a permutation invariant embedding function. As a heuristic, we used a Graph Isomorphism Network (GIN), with weights fixed to 1, in order to approximate the Weisfeiler-Lehman graph isomorphism test (see (Xu\net al., 2018) for more details). In particular, we use an edge-aware GIN layer (see section A.1) to embed both G and G̃. The reconstruction loss is then defined as:\nLrec = 1 |V | ∑ i (γ(G)i − γ(G̃)i)2 (10)\nOur experiments show that this loss function was able to produce a higher number of valid molecules, although we speculate that such a heuristic might prove harder to optimize on datasets with larger graphs." }, { "heading": "A.4.3 TRAINING PROCEDURE", "text": "The QM9 dataset was split into a train (60%), valid (20%) and a hold-out test dataset (20%). Note that only 25% of the training set is sampled during each epoch (batch size 32). The generator network (encoder-decoder) and the discriminator network are trained independently, using the Adam optimizer Kingma and Ba (2014) with an initial learning rate of 1e− 4 for the generator and 1e− 3 for the discriminator. During training, we slowly reduce the learning rate by a factor of 0.5, for the generator, on plateau. To stabilize the learning process and prevent the discriminator from becoming \"too good\" at distinguishing the true data distribution from the prior, we train the generator two times more often." }, { "heading": "B GENERATED MOLECULES", "text": "Below, we highlight a few molecules generated by WAE-LaP on the QM9 dataset." }, { "heading": "OH NH2", "text": "" }, { "heading": "C DATASET STATISTICS", "text": "TOX21 DD PROTEINS FRANKENSTEIN\nAvg. nodes 18.51 284.32 39.05 16.83 Avg. edges 19.23 715.66 72.82 17.88 #Graphs 8014 1178 1113 4337 #Classes 12 2 2 2" }, { "heading": "D NODE IMPORTANCE INTERPRETABILITY SCORE", "text": "In addition to assessing the quality of clustering performed by LaPool and DiffPool, we attempt to measure the interpretability of their predictions. We consider a setting in which the goal is to predict the presence of either Epoxides, Thioepoxides or Aziridines substructures in molecular graphs. These\nthree fragments correspond to structural alerts that are often indicative of molecular toxicity. 
We attempt to identify the relational inductive bias used by each model during prediction. In our setting, we define the interpretability of a model as its ability to focus on nodes that are directly relevant to the structural alerts and leverage that information for its prediction. In other words, we expect the most important nodes for the model prediction to correspond to nodes that are part of the structural alerts. We measure the importance of each atom toward the model prediction using the Integrated gradient method. Briefly, we compute perturbation of node and edge attributes over a continuous spectrum, then integrate the gradient of each of the model loss with respect to both the perturbed adjacency matrix and node features. Similar to saliency maps, we then take the sum of the absolute integrated gradients over each node as an approximate attribution score for the nodes. Finally, we compute the interpretability score using the Precision-Recall AUC between measured importance and ground truth which is defined by a binary mask of nodes that are part of the structural alerts. The PR-AUC allows us to assess the node importance separation capacity of each model while taking imbalance into account. We only focus on positive predictions for each model. As an alternative to the Integrated Gradient, we also measure the interpretability score using Guided BackPropagation (see Table 6)" }, { "heading": "E SIGNAL PRESERVATION THROUGH LAPLACIAN MAXIMA", "text": "We illustrate here on a 1-d signal S, how using the Laplacian maxima serves to retain the most prominent regions of the graph signal, after smoothing (Figure S3). We measure the energy conservation after downsampling: δE(S) = E(S)− E(Sdown) of the 1-d signal energy to highlight why selecting the Laplacian maxima allow reconstructing the signal with a low error when compared to the minimum Laplacian (which focuses on low frequencies). The energy ES of a discrete signal yi is defined in (11), and is similar to the energy of a wave in a physical system (without the constants).\nES = ∑ i |yi|2 (11)\n0 5 10 15 20 25 Node number\n0\n2\n4\nSi gn\nal in\nte ns ity Random smooth 1D signal - 25 nodes, ES = 71.1Maximum Laplacian pooling - 8 leaders, ES = 72.5 Minimum Laplacian pooling - 7 leaders, ES = 76.3\nFigure S3: Comparison of maximum/minimum Laplacian pooling for a random and smoothed signal on a 1D graph with 25 nodes. The graph energy ES is indicated.\nTo mimic the molecular graph signal at the pooling stage, the given signal is built from an 8-terms random Fourier series with added Gaussian noise, then smoothed with 2 consecutive neighbor average smoothing. For the pooling methods, a linear interpolation is used to cover the same signal space before computing ES . As expected, the maxima Laplacian selection requires a fewer number of samples for signal reconstruction and energy preservaton. It also significantly outperforms minima selection." } ]
2,019
null
SP:3e64cbaffc0f2c9cf7bb2d7716b50795f03fe1fa
[ "This paper proposes a method called AugMix, which is intended to improve model robustness to data distribution shift. AugMix appears fairly simple to implement. Several new images are created by augmenting an original image through chains of sequentially applied transformations (the \"Aug\" part of AugMix), then the augmented images are combined together, along with the original image, via a weighted sum (the \"Mix\" part of AugMix). Additionally, a Jensen-Shannon Divergence consistency loss is applied during training to encourage the model to make similar predictions for all augmented variations of a single image. This technique is shown to achieve state-of-the-art performance on standard robustness benchmarks without loss of clean test accuracy, and is also shown to improve calibration of model confidence estimates.", "The paper discusses a new data augmentation method which improves the accuracy of the network for several specific shifted domain scenarios. The main goal of the paper is to increase the robustness of the deep model trained on the augmented data to generalize well beyond the data corruption like the rotation, translation, noise,.... For each input, they apply $k$ different operation of image shift and make the weighted combination of them. The weight vector is generated randomly from Dirichlet distribution with the parameter $\\alpha$. The weighted combined images would be added to the original image in convex combination. The convex weights are generated from distribution Beta with parameter $\\beta$. Later they train the network with adding the Jensen-Shannon divergence for the posterior distributions of augmented images as the consistency regularizer. They show this data augmentation will increase the accuracy of the model for shifted and non-shifted domains and also it leads to more calibrated model for domain shift problem." ]
Modern deep neural networks can achieve high accuracy when the training distribution and test distribution are identically distributed, but this assumption is frequently violated in practice. When the train and test distributions are mismatched, accuracy can plummet. Currently there are few techniques that improve robustness to unforeseen data shifts encountered during deployment. In this work, we propose a technique to improve the robustness and uncertainty estimates of image classifiers. We propose AUGMIX, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions. AUGMIX significantly improves robustness and uncertainty measures on challenging image classification benchmarks, closing the gap between previous methods and the best possible performance in some cases by more than half.
[ { "affiliations": [], "name": "Dan Hendrycks" }, { "affiliations": [], "name": "Norman Mu" }, { "affiliations": [], "name": "Ekin D. Cubuk" }, { "affiliations": [], "name": "Justin Gilmer" }, { "affiliations": [], "name": "Balaji Lakshminarayanan" } ]
[ { "authors": [ "Aharon Azulay", "Yair Weiss" ], "title": "Why do deep convolutional networks generalize so poorly to small image transformations", "venue": "arXiv preprint,", "year": 2018 }, { "authors": [ "Sanghyuk Chun", "Seong Joon Oh", "Sangdoo Yun", "Dongyoon Han", "Junsuk Choe", "Youngjoon Yoo" ], "title": "An empirical evaluation on robustness and uncertainty of regularization methods", "venue": "ICML Workshop on Uncertainty and Robustness in Deep Learning,", "year": 2019 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Jonathon Shlens", "Quoc V Le" ], "title": "Randaugment: Practical automated data augmentation with a reduced search space", "venue": null, "year": 1909 }, { "authors": [ "Ekin Dogus Cubuk", "Barret Zoph", "Dandelion Mané", "Vijay Vasudevan", "Quoc V. Le" ], "title": "AutoAugment: Learning augmentation policies from data", "venue": null, "year": 2018 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li jia Li", "Kai Li", "Li Fei-Fei" ], "title": "ImageNet: A large-scale hierarchical image database", "venue": null, "year": 2009 }, { "authors": [ "Terrance Devries", "Graham W. Taylor" ], "title": "Improved regularization of convolutional neural networks with Cutout", "venue": "arXiv preprint arXiv:1708.04552,", "year": 2017 }, { "authors": [ "Logan Engstrom", "Andrew Ilyas", "Anish Athalye" ], "title": "Evaluating and understanding the robustness of adversarial logit pairing", "venue": "arXiv preprint,", "year": 2018 }, { "authors": [ "Robert Geirhos", "Carlos R.M. Temme", "Jonas Rauber", "Heiko H. Schütt", "Matthias Bethge", "Felix A. Wichmann" ], "title": "Generalisation in humans and deep neural networks. NeurIPS, 2018", "venue": null, "year": 2018 }, { "authors": [ "Justin Gilmer", "Dan Hendrycks" ], "title": "A discussion of’adversarial examples are not bugs, they are features’: Adversarial example researchers need to expand what is meant by’robustness", "venue": "Distill,", "year": 2019 }, { "authors": [ "Justin Gilmer", "Ryan P. Adams", "Ian J. Goodfellow", "David Andersen", "George E. Dahl" ], "title": "Motivating the rules of the game for adversarial example research", "venue": null, "year": 2018 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross B. Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: Training ImageNet in 1 hour", "venue": null, "year": 2017 }, { "authors": [ "Keren Gu", "Brandon Yang", "Jiquan Ngiam", "Quoc Le", "Jonathon Shlens" ], "title": "Using videos to evaluate image model robustness, 2019", "venue": null, "year": 2019 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q. 
Weinberger" ], "title": "On calibration of modern neural networks", "venue": null, "year": 2017 }, { "authors": [ "Hongyu Guo", "Yongyi Mao", "Richong Zhang" ], "title": "Mixup as locally linear out-of-manifold regularization", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2015 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": null, "year": 2019 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "venue": null, "year": 2017 }, { "authors": [ "Dan Hendrycks", "Kimin Lee", "Mantas Mazeika" ], "title": "Using pre-training can improve model robustness and uncertainty", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Thomas Dietterich" ], "title": "Deep anomaly detection with outlier exposure", "venue": "ICLR,", "year": 2019 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens van der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": null, "year": 2017 }, { "authors": [ "Daniel Kang", "Yi Sun", "Dan Hendrycks", "Tom Brown", "Jacob Steinhardt" ], "title": "Testing robustness against unforeseen adversaries", "venue": "arXiv preprint,", "year": 2019 }, { "authors": [ "Harini Kannan", "Alexey Kurakin", "Ian Goodfellow" ], "title": "Adversarial logit pairing", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "ImageNet classification with deep convolutional neural networks", "venue": "NeurIPS,", "year": 2012 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Zachary Chase Lipton", "Yu-Xiang Wang", "Alexander J. Smola" ], "title": "Detecting and correcting for label shift with black box", "venue": "predictors. ArXiv,", "year": 2018 }, { "authors": [ "Raphael Gontijo Lopes", "Dong Yin", "Ben Poole", "Justin Gilmer", "Ekin Dogus Cubuk" ], "title": "Improving robustness without sacrificing accuracy with patch Gaussian augmentation", "venue": null, "year": 1906 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "SGDR: stochastic gradient descent with warm restarts", "venue": null, "year": 2016 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": null, "year": 2018 }, { "authors": [ "Khanh Nguyen", "Brendan O’Connor" ], "title": "Posterior calibration and exploratory analysis for natural language processing models", "venue": null, "year": 2015 }, { "authors": [ "Yaniv Ovadia", "Emily Fertig", "Jie Ren", "Zachary Nado", "D Sculley", "Sebastian Nowozin", "Joshua V Dillon", "Balaji Lakshminarayanan", "Jasper Snoek" ], "title": "Can you trust your model’s uncertainty? 
Evaluating predictive uncertainty under dataset shift", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Aditi Raghunathan", "Sang Michael Xie", "Fanny Yang", "John C. Duchi", "Percy Liang" ], "title": "Adversarial training can hurt generalization", "venue": null, "year": 1906 }, { "authors": [ "Tim Salimans", "Diederik Kingma" ], "title": "Weight normalization: A simple reparameterization to accelerate training of deep neural networks", "venue": "NeurIPS,", "year": 2016 }, { "authors": [ "Jost Tobias Springenberg", "Alexey Dosovitskiy", "Thomas Brox", "Martin A. Riedmiller" ], "title": "Striving for simplicity: The all convolutional net", "venue": "CoRR, abs/1412.6806,", "year": 2014 }, { "authors": [ "Yuji Tokozume", "Yoshitaka Ushiku", "Tatsuya Harada" ], "title": "Between-class learning for image classification", "venue": null, "year": 2018 }, { "authors": [ "Antonio Torralba", "Alexei A. Efros" ], "title": "Unbiased look at dataset bias", "venue": null, "year": 2011 }, { "authors": [ "Igor Vasiljevic", "Ayan Chakrabarti", "Gregory Shakhnarovich" ], "title": "Examining the impact of blur on recognition by convolutional networks, 2016", "venue": null, "year": 2016 }, { "authors": [ "Qizhe Xie", "Zihang Dai", "Eduard Hovy", "Minh-Thang Luong", "Quoc V Le" ], "title": "Unsupervised data augmentation", "venue": "arXiv preprint arXiv:1904.12848,", "year": 2019 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollr", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": null, "year": 2016 }, { "authors": [ "Dong Yin", "Raphael Gontijo Lopes", "Jonathon Shlens", "Ekin D Cubuk", "Justin Gilmer" ], "title": "A Fourier perspective on model robustness in computer vision", "venue": null, "year": 1906 }, { "authors": [ "Sangdoo Yun", "Dongyoon Han", "Seong Joon Oh", "Sanghyuk Chun", "Junsuk Choe", "Youngjoon Yoo" ], "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "venue": null, "year": 2019 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "In BMVC,", "year": 2016 }, { "authors": [ "Hongyi Zhang", "Moustapha Cissé", "Yann Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": null, "year": 2017 }, { "authors": [ "Richard Zhang" ], "title": "Making convolutional networks shift-invariant again", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Stephan Zheng", "Yang Song", "Thomas Leung", "Ian Goodfellow" ], "title": "Improving the robustness of deep neural networks via stability training", "venue": null, "year": 2016 }, { "authors": [ "Zhun Zhong", "Liang Zheng", "Guoliang Kang", "Shaozi Li", "Yi Yang" ], "title": "Random erasing data augmentation", "venue": "arXiv preprint arXiv:1708.04896,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Current machine learning models depend on the ability of training data to faithfully represent the data encountered during deployment. In practice, data distributions evolve (Lipton et al., 2018), models encounter new scenarios (Hendrycks & Gimpel, 2017), and data curation procedures may capture only a narrow slice of the underlying data distribution (Torralba & Efros, 2011). Mismatches between the train and test data are commonplace, yet the study of this problem is not. As it stands, models do not robustly generalize across shifts in the data distribution. If models could identify when they are likely to be mistaken, or estimate uncertainty accurately, then the impact of such fragility might be ameliorated. Unfortunately, modern models already produce overconfident predictions when the training examples are independent and identically distributed to the test distribution. This overconfidence and miscalibration is greatly exacerbated by mismatched training and testing distributions.\nSmall corruptions to the data distribution are enough to subvert existing classifiers, and techniques to improve corruption robustness remain few in number. Hendrycks & Dietterich (2019) show that classification error of modern models rises from 22% on the usual ImageNet test set to 64% on ImageNet-C, a test set consisting of various corruptions applied to ImageNet test images. Even methods which aim to explicitly quantify uncertainty, such as probabilistic and Bayesian neural networks, struggle under data shift, as recently demonstrated by Ovadia et al. (2019). Improving performance in this setting has been difficult. One reason is that training against corruptions only encourages networks to memorize the specific corruptions seen during training and leaves models unable to generalize to new corruptions (Vasiljevic et al., 2016; Geirhos et al., 2018). Further, networks trained on translation augmentations remain highly sensitive to images shifted by a single pixel (Gu et al., 2019; Hendrycks & Dietterich, 2019). Others have proposed aggressive data augmentation schemes (Cubuk et al., 2018), though at the cost of a computational increase. Chun et al. (2019) demonstrates that many techniques may improve clean accuracy at the cost of robustness while many techniques which improve robustness harm uncertainty, and contrariwise. In all, existing techniques have considerable trade-offs. ∗Equal Contribution. †Corresponding author.\nCutOut MixUp CutMix AugMix\nIn this work, we propose a technique to improve both the robustness and uncertainty estimates of classifiers under data shift. We propose AUGMIX, a method which simultaneously achieves new state-of-the-art results for robustness and uncertainty estimation while maintaining or improving accuracy on standard benchmark datasets. AUGMIX utilizes stochasticity and diverse augmentations, a Jensen-Shannon Divergence consistency loss, and a formulation to mix multiple augmented images to achieve state-of-the-art performance. On CIFAR-10 and CIFAR-100, our method roughly halves the corruption robustness error of standard training procedures from 28.4% to 12.4% and 54.3% to 37.8% error, respectively. On ImageNet, AUGMIX also achieves state-of-the-art corruption robustness and decreases perturbation instability from 57.2% to 37.4%. Code is available at https://github.com/google-research/augmix.\n2 RELATED WORK\nRobustness under Data Shift. Geirhos et al. 
(2018) show that training against distortions can often fail to generalize to unseen distortions, as networks have a tendency to memorize properties of the specific training distortion. Vasiljevic et al. (2016) show that training with various blur augmentations can fail to generalize to unseen blurs or blurs with different parameter settings. Hendrycks & Dietterich (2019) propose measuring generalization to unseen corruptions and provide benchmarks for doing so. Kang et al. (2019) construct an adversarial version of the aforementioned benchmark. Gilmer et al. (2018); Gilmer & Hendrycks (2019) argue that robustness to data shift is a pressing problem which greatly affects the reliability of real-world machine learning systems.\nCalibration under Data Shift. Guo et al. (2017); Nguyen & O'Connor (2015) propose metrics for determining the calibration of machine learning models. Lakshminarayanan et al. (2017) find that simply ensembling classifier predictions improves prediction calibration. Hendrycks et al. (2019a) show that pre-training can also improve calibration. Ovadia et al. (2019) demonstrate that model calibration substantially deteriorates under data shift.\nData Augmentation. Data augmentation can greatly improve generalization performance. For image data, random left-right flipping and cropping are commonly used (He et al., 2015). Random occlusion techniques such as Cutout can also improve accuracy on clean data (Devries & Taylor, 2017; Zhong et al., 2017). Rather than occluding a portion of an image, CutMix replaces a portion of an image with a portion of a different image (Yun et al., 2019). Mixup also uses information from two images. Rather than implanting one portion of an image inside another, Mixup produces an elementwise convex combination of two images (Zhang et al., 2017; Tokozume et al., 2018). Guo et al. (2019) show that Mixup can be improved with an adaptive mixing policy, so as to prevent manifold intrusion. Separate from these approaches are learned augmentation methods such as AutoAugment (Cubuk et al., 2018), where a group of augmentations is tuned to optimize performance on a downstream task. Patch Gaussian augments data with Gaussian noise applied to a randomly chosen portion of an image (Lopes et al., 2019). A popular way to make networks robust to ℓp adversarial examples is with adversarial training (Madry et al., 2018), which we use in this paper. However, this tends to increase training time by an order of magnitude and substantially degrades accuracy on non-adversarial images (Raghunathan et al., 2019)." }, { "heading": "3 AUGMIX", "text": "AUGMIX is a data augmentation technique which improves model robustness and uncertainty estimates, and slots easily into existing training pipelines. At a high level, AugMix is characterized by its utilization of simple augmentation operations in concert with a consistency loss. These augmentation operations are sampled stochastically and layered to produce a high diversity of augmented images. We then enforce a consistent embedding by the classifier across diverse augmentations of the same input image through the use of Jensen-Shannon divergence as a consistency loss.\nMixing augmentations allows us to generate diverse transformations, which are important for inducing robustness, as a common failure mode of deep models in the arena of corruption robustness is the memorization of fixed augmentations (Vasiljevic et al., 2016; Geirhos et al., 2018). 
Previous methods have attempted to increase diversity by directly composing augmentation primitives in a chain, but this can cause the image to quickly degrade and drift off the data manifold, as depicted in Figure 3. Such image degradation can be mitigated and the augmentation diversity can be maintained by mixing together the results of several augmentation chains in convex combinations. A concrete account of the algorithm is given in the pseudocode below.\nAlgorithm AUGMIX Pseudocode\n1: Input: Model p̂, Classification Loss L, Image xorig, Operations O = {rotate, . . . , posterize}\n2: function AugmentAndMix(xorig, k = 3, α = 1)\n3:   Fill xaug with zeros\n4:   Sample mixing weights (w1, w2, . . . , wk) ∼ Dirichlet(α, α, . . . , α)\n5:   for i = 1, . . . , k do\n6:     Sample operations op1, op2, op3 ∼ O\n7:     Compose operations with varying depth op12 = op2 ◦ op1 and op123 = op3 ◦ op2 ◦ op1\n8:     Sample uniformly from one of these operations chain ∼ {op1, op12, op123}\n9:     xaug += wi · chain(xorig)  ▷ Addition is elementwise\n10:   end for\n11:   Sample weight m ∼ Beta(α, α)\n12:   Interpolate with rule xaugmix = m · xorig + (1 − m) · xaug\n13:   return xaugmix\n14: end function\n15: xaugmix1 = AugmentAndMix(xorig)  ▷ xaugmix1 is stochastically generated\n16: xaugmix2 = AugmentAndMix(xorig)  ▷ xaugmix1 ≠ xaugmix2\n17: Loss Output: L(p̂(y | xorig), y) + λ Jensen-Shannon(p̂(y | xorig); p̂(y | xaugmix1); p̂(y | xaugmix2))\nAugmentations. Our method consists of mixing the results from augmentation chains or compositions of augmentation operations. We use operations from AutoAugment. Each operation is visualized in Appendix C. Crucially, we exclude operations which overlap with ImageNet-C corruptions. In particular, we remove the contrast, color, brightness, sharpness, and Cutout operations so that our set of operations and the ImageNet-C corruptions are disjoint. In turn, we do not use any image noising or image blurring operations so that ImageNet-C corruptions are encountered only at test time. Operations such as rotate can be realized with varying severities, like 2◦ or −15◦. For operations with varying severities, we uniformly sample the severity upon each application. Next, we randomly sample k augmentation chains, where k = 3 by default. Each augmentation chain is constructed by composing from one to three randomly selected augmentation operations.\nMixing. The resulting images from these augmentation chains are combined by mixing. While we considered mixing by alpha compositing, we chose to use elementwise convex combinations for simplicity. The k-dimensional vector of convex coefficients is randomly sampled from a Dirichlet(α, . . . , α) distribution. Once these images are mixed, we use a "skip connection" to combine the result of the augmentation chain and the original image through a second random convex combination sampled from a Beta(α, α) distribution. The final image incorporates several sources of randomness from the choice of operations, the severity of these operations, the lengths of the augmentation chains, and the mixing weights.\nJensen-Shannon Divergence Consistency Loss. We couple this augmentation scheme with a loss that enforces smoother neural network responses. Since the semantic content of an image is approximately preserved with AUGMIX, we should like the model to embed xorig, xaugmix1, xaugmix2 similarly. Toward this end, we minimize the Jensen-Shannon divergence among the posterior distributions of the original sample xorig and its augmented variants. 
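For concreteness, the following is a minimal, runnable Python sketch of the AugmentAndMix routine in the pseudocode above, together with the three-way Jensen-Shannon divergence used by the consistency loss. This is not the official implementation: the operation set here (rotate, posterize, solarize) is only an illustrative subset of the AutoAugment operations described above, and the severity ranges are assumptions.

import random
import numpy as np
from PIL import Image, ImageOps

# Illustrative subset of augmentation operations; the paper's full set comes
# from AutoAugment with contrast, color, brightness, sharpness, and Cutout removed.
OPS = [
    lambda im: im.rotate(random.uniform(-30, 30)),              # rotate, random severity
    lambda im: ImageOps.posterize(im, random.randint(4, 7)),    # posterize
    lambda im: ImageOps.solarize(im, random.randint(64, 255)),  # solarize
]

def augment_and_mix(image, k=3, alpha=1.0):
    """Mix k augmentation chains (depth 1-3), then apply the Beta 'skip connection'."""
    ws = np.random.dirichlet([alpha] * k)          # convex weights over chains
    m = np.random.beta(alpha, alpha)               # weight for the original image
    mix = np.zeros_like(np.asarray(image, dtype=np.float32))
    for i in range(k):
        img_aug = image.copy()
        for _ in range(random.randint(1, 3)):      # chain depth sampled in {1, 2, 3}
            img_aug = random.choice(OPS)(img_aug)
        mix += ws[i] * np.asarray(img_aug, dtype=np.float32)
    mixed = m * np.asarray(image, dtype=np.float32) + (1 - m) * mix
    return Image.fromarray(np.uint8(np.clip(mixed, 0, 255)))

def jensen_shannon(p_orig, p_aug1, p_aug2, eps=1e-7):
    """JS divergence among three predictive distributions, as in Eq. (2) below."""
    p = np.clip(np.stack([p_orig, p_aug1, p_aug2]), eps, 1.0)
    m = p.mean(axis=0)                             # mixture distribution M
    return float(np.mean((p * np.log(p / m)).sum(axis=-1)))  # (1/3) * sum of KL[p_i || M]

Calling augment_and_mix twice on the same input yields the two stochastic views xaugmix1 and xaugmix2 that enter the loss in line 17 of the pseudocode.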
Formally, for porig = p̂(y | xorig), paugmix1 = p̂(y | xaugmix1), paugmix2 = p̂(y | xaugmix2), we replace the original loss L with the loss\nL(porig, y) + λ JS(porig; paugmix1; paugmix2). (1)\nTo interpret this loss, imagine a sample from one of the three distributions porig, paugmix1, paugmix2. The Jensen-Shannon divergence can be understood to measure the average information that the sample reveals about the identity of the distribution from which it was sampled.\nThis loss can be computed by first obtaining M = (porig + paugmix1 + paugmix2)/3 and then computing\nJS(porig; paugmix1; paugmix2) = (1/3) (KL[porig ‖ M] + KL[paugmix1 ‖ M] + KL[paugmix2 ‖ M]). (2)\nUnlike an arbitrary KL divergence between porig and paugmix, the Jensen-Shannon divergence is upper bounded, in this case by the logarithm of the number of classes. Note that we could instead compute JS(porig; paugmix1), though this does not perform as well. The gain of training with JS(porig; paugmix1; paugmix2; paugmix3) is marginal. The Jensen-Shannon consistency loss impels the model to be stable, consistent, and insensitive to a diverse range of inputs (Zheng et al., 2016; Kannan et al., 2018; Xie et al., 2019). Ablations are in Section 4.3 and Appendix A." }, { "heading": "4 EXPERIMENTS", "text": "Datasets. The two CIFAR (Krizhevsky & Hinton, 2009) datasets contain small 32 × 32 × 3 color natural images, both with 50,000 training images and 10,000 testing images. CIFAR-10 has 10 categories, and CIFAR-100 has 100. The ImageNet (Deng et al., 2009) dataset contains 1,000 classes of approximately 1.2 million large-scale color images.\nIn order to measure a model's resilience to data shift, we evaluate on the CIFAR-10-C, CIFAR-100-C, and ImageNet-C datasets (Hendrycks & Dietterich, 2019). These datasets are constructed by corrupting the original CIFAR and ImageNet test sets. For each dataset, there are a total of 15 noise, blur, weather, and digital corruption types, each appearing at 5 severity levels or intensities. Since these datasets are used to measure network behavior under data shift, we take care not to introduce these 15 corruptions into the training procedure.\nThe CIFAR-10-P, CIFAR-100-P, and ImageNet-P datasets also modify the original CIFAR and ImageNet datasets. These datasets contain smaller perturbations than the -C datasets and are used to measure the classifier's prediction stability. Each example in these datasets is a video. For instance, a video with the brightness perturbation shows an image getting progressively brighter over time. We should like the network not to give inconsistent or volatile predictions between frames of the video as the brightness increases. Thus these datasets enable the measurement of the "jaggedness" (Azulay & Weiss, 2018) of a network's prediction stream.\nMetrics. The Clean Error is the usual classification error on the clean or uncorrupted test data. In our experiments, corrupted test data appears at five different intensities or severity levels 1 ≤ s ≤ 5. For a given corruption c, the error rate at corruption severity s is E_{c,s}. We can aggregate the error across these severities to create the unnormalized corruption error uCE_c = Σ_{s=1}^{5} E_{c,s}. On CIFAR-10-C and CIFAR-100-C we average these values over all 15 corruptions. Meanwhile, on ImageNet we follow the convention of normalizing the corruption error by the corruption error of AlexNet (Krizhevsky et al., 2012). We compute CE_c = Σ_{s=1}^{5} E_{c,s} / Σ_{s=1}^{5} E^{AlexNet}_{c,s}. The average of the 15 corruption errors CE_{Gaussian Noise}, CE_{Shot Noise}, . . . , CE_{Pixelate}, CE_{JPEG} gives us the Mean Corruption Error (mCE).\nPerturbation robustness is not measured by accuracy but by whether video frame predictions match. Consequently we compute what is called the flip probability. Concretely, for videos such as those with steadily increasing brightness, we determine the probability that two adjacent frames, or two frames with slightly different brightness levels, have "flipped" or mismatched predictions. There are 10 different perturbation types, and the mean across these is the mean Flip Probability (mFP). As with ImageNet-C, we can normalize by AlexNet's flip probabilities and obtain the mean Flip Rate (mFR).\nIn order to assess a model's uncertainty estimates, we measure its miscalibration. Classifiers capable of reliably forecasting their accuracy are considered "calibrated." For instance, a calibrated classifier should be correct 70% of the time on examples to which it assigns 70% confidence. Let the classifier's confidence that its prediction Ŷ is correct be written C. Then the idealized RMS Calibration Error is √(E_C[(P(Y = Ŷ | C = c) − c)²]), i.e., the root of the expected squared difference between the accuracy at a given confidence level and the actual confidence level. In Appendix E, we show how to empirically estimate this quantity and calculate the Brier Score." }, { "heading": "4.1 CIFAR-10 AND CIFAR-100", "text": "Training Setup. In the following experiments we show that AUGMIX confers robustness to various architectures including an All Convolutional Network (Springenberg et al., 2014; Salimans & Kingma, 2016), a DenseNet-BC (k = 12, d = 100) (Huang et al., 2017), a 40-2 Wide ResNet (Zagoruyko & Komodakis, 2016), and a ResNeXt-29 (32 × 4) (Xie et al., 2016). All networks use an initial learning rate of 0.1 which decays following a cosine learning rate schedule (Loshchilov & Hutter, 2016). All input images are pre-processed with standard random left-right flipping and cropping prior to any augmentations. We do not change AUGMIX parameters across CIFAR-10 and CIFAR-100 experiments for consistency. The All Convolutional Network and Wide ResNet train for 100 epochs, and the DenseNet and ResNeXt require 200 epochs for convergence. We optimize with stochastic gradient descent using Nesterov momentum. Following Zhang et al. (2017); Guo et al. (2019), we use a weight decay of 0.0001 for Mixup and 0.0005 otherwise.\nResults. Simply mixing random augmentations and using the Jensen-Shannon loss substantially improves robustness and uncertainty estimates. Compared to the "Standard" data augmentation baseline ResNeXt on CIFAR-10-C, AUGMIX achieves 16.6% lower absolute corruption error as shown in Figure 5. In addition to surpassing numerous other data augmentation techniques, Table 1 demonstrates that these gains directly transfer across architectures and to CIFAR-100-C with zero additional tuning. Crucially, the robustness gains do not only exist when measured in aggregate. Figure 12 shows that AUGMIX improves corruption robustness across every individual corruption and severity level. Our method additionally achieves the lowest mFP on CIFAR-10-P across three different models, all while maintaining accuracy on clean CIFAR-10, as shown in Figure 6 (left) and Table 6. Finally, we demonstrate that AUGMIX improves the RMS calibration error on CIFAR-10 and CIFAR-10-C, as shown in Figure 6 (right) and Table 5. Expanded CIFAR-10-P and calibration results are in Appendix D, and Fourier sensitivity analysis is in Appendix B." }, { "heading": "4.2 IMAGENET", "text": "Baselines. 
To demonstrate the utility of AUGMIX on ImageNet, we compare to many techniques designed for large-scale images. While techniques such as Cutout (Devries & Taylor, 2017) have not been demonstrated to help on the ImageNet scale, and while few have had success training adversarially robust models on ImageNet (Engstrom et al., 2018), other techniques such as Stylized ImageNet have been demonstrated to help on ImageNet-C. Patch Uniform (Lopes et al., 2019) is similar to Cutout except that randomly chosen regions of the image are injected with uniform noise; the original paper uses Gaussian noise, but that appears in the ImageNet-C test set, so we use uniform noise. We tune Patch Uniform over 30 hyperparameter settings. Next, AutoAugment (Cubuk et al., 2018) searches over data augmentation policies to find a high-performing data augmentation policy. We denote AutoAugment results with AutoAugment* since we remove augmentation operations that overlap with ImageNet-C corruptions, as with AUGMIX. We also test with Random AutoAugment*, an augmentation scheme where each image has a randomly sampled augmentation policy using AutoAugment* operations. In contrast to AutoAugment, Random AutoAugment* and AUGMIX require far less computation and provide more augmentation variety, which can offset their lack of optimization. Note that Random AutoAugment* is different from RandAugment, introduced recently by Cubuk et al. (2019): RandAugment uses AutoAugment operations and optimizes a single distortion magnitude hyperparameter for all operations, while Random AutoAugment* randomly samples magnitudes for each operation and uses the same operations as AUGMIX. MaxBlur Pooling (Zhang, 2019) is a recently proposed architectural modification which smooths the results of pooling. Finally, Stylized ImageNet (SIN) is a technique where models are trained with the original ImageNet images and also ImageNet images with style transfer applied. Whereas the original Stylized ImageNet technique pretrains on ImageNet-C and performs style transfer with a content loss coefficient of 0 and a style loss coefficient of 1, we find that using 0.5 content and style loss coefficients decreases the mCE by 0.6%. Later, we show that SIN and AUGMIX can be combined. All models are trained from scratch, except for the MaxBlur Pooling models, for which trained models are available.\nTraining Setup. Methods are trained with ResNet-50 and we follow the standard training scheme of Goyal et al. (2017), in which we linearly scale the learning rate with the batch size and use a learning rate warm-up for the first 5 epochs; AutoAugment and AUGMIX train for 180 epochs. All input images are first pre-processed with standard random cropping and horizontal mirroring.\nResults. Our method achieves 68.4% mCE as shown in Table 2, down from the baseline 80.6% mCE. Additionally, we note that AUGMIX allows straightforward stacking with other methods such as SIN to achieve an even lower corruption error of 64.1% mCE. Other techniques such as AutoAugment* require much tuning, while ours does not. Across increasing severities of corruptions, our method also produces much more calibrated predictions measured by both the Brier Score and RMS Calibration Error, as shown in Figure 7. As shown in Table 3, AUGMIX also achieves a state-of-the-art result on ImageNet-P with an mFR of 37.4%, down from 57.2%. We demonstrate that scaling up AUGMIX from CIFAR to ImageNet also leads to state-of-the-art results in robustness and uncertainty estimation." 
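As a concrete companion to the Metrics paragraph in Section 4, the mCE values reported above can be computed from a matrix of per-corruption, per-severity error rates. The sketch below assumes the error rates and the AlexNet reference errors are both available as (15, 5) arrays; the function name is our own.

import numpy as np

def mean_corruption_error(errors, alexnet_errors):
    """errors, alexnet_errors: shape (15, 5) arrays of error rates E_{c,s}
    over 15 corruption types and 5 severities. Returns the mCE."""
    ce = errors.sum(axis=1) / alexnet_errors.sum(axis=1)  # CE_c, normalized by AlexNet
    return float(ce.mean())                               # average over corruption types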
}, { "heading": "4.3 ABLATIONS", "text": "We locate the utility of AUGMIX in three factors: training set diversity, our Jensen-Shannon divergence consistency loss, and mixing. Improving training set diversity via increased variety of augmentations can greatly improve robustness. For instance, augmenting each example with a\nrandomly sampled augmentation chain decreases the error rate of Wide ResNet on CIFAR-10-C from 26.9% to 17.0% Table 4. Adding in the Jensen-Shannon divergence consistency loss drops error rate further to 14.7%. Mixing random augmentations without the Jenson-Shannon divergence loss gives us an error rate of 13.1%. Finally, re-introducing the Jensen-Shannon divergence gives us AUGMIX with an error rate of 11.2%. Note that adding even more mixing is not necessarily beneficial. For instance, applying AUGMIX on top of Mixup increases the error rate to 13.3%, possibly due to an increased chance of manifold intrusion (Guo et al., 2019). Hence AUGMIX’s careful combination of variety, consistency loss, and mixing explain its performance." }, { "heading": "5 CONCLUSION", "text": "AUGMIX is a data processing technique which mixes randomly generated augmentations and uses a Jensen-Shannon loss to enforce consistency. Our simple-to-implement technique obtains state-of-the-art performance on CIFAR-10/100-C, ImageNet-C, CIFAR-10/100-P, and ImageNet-P. AUGMIX models achieve state-of-the-art calibration and can maintain calibration even as the distribution shifts. We hope that AUGMIX will enable more reliable models, a necessity for models deployed in safety-critical environments." }, { "heading": "A HYPERPARAMETER ABLATIONS", "text": "In this section we demonstrate that AUGMIX’s hyperparameters are not highly sensitive, so that AUGMIX performs reliably without careful tuning. For this set of experiments, the baseline AUGMIX model trains for 90 epochs, has a mixing coefficient of α = 0.5, has 3 examples per Jensen-Shannon Divergence (1 clean image, 2 augmented images), has a chain depth stochastically varying from 1 to 3, and has k = 3 augmentation chains. Figure 8 shows that the performance of various AUGMIX models with different hyperparameters. Under these hyperparameter changes, the mCE does not change substantially." }, { "heading": "B FOURIER ANALYSIS", "text": "A commonly mentioned hypothesis (Gilmer & Hendrycks, 2019) for the lack of robustness of deep neural networks is that they readily latch onto spurious high-frequency correlations that exist in the data. In order to better understand the reliance of models to such correlations, we measure model sensitivity to additive noise at differing frequencies. We create a 32× 32 sensitivity heatmap. That is,\nwe add a total of 32× 32 Fourier basis vectors to the CIFAR-10 test set, one at a time, and record the resulting error rate after adding each Fourier basis vector. Each point in the heatmap shows the error rate on the CIFAR-10 test set after it has been perturbed by a single Fourier basis vector. Points corresponding to low frequency vectors are shown in the center of the heatmap, whereas high frequency vectors are farther from the center. For further details on Fourier sensitivity analysis, we refer the reader to Section 2 of Yin et al. (2019). In Figure 9 we observe that the baseline model is robust to low frequency perturbations but severely lacks robustness to high frequency perturbations, where error rates exceed 80%. The model trained with Cutout shows a similar lack of robustness. 
In contrast to the baseline and Cutout models, the model trained with AUGMIX maintains robustness to low frequency perturbations, and on the mid and high frequencies AUGMIX is conspicuously more robust." }, { "heading": "C AUGMENTATION OPERATIONS", "text": "The augmentation operations we use for AUGMIX are shown in Figure 10.\nWe do not use augmentations such as contrast, color, brightness, sharpness, and Cutout as they may overlap with ImageNet-C test set corruptions. We should note that augmentation choice requires additional care. Guo et al. (2019) show that blithely applying augmentations can potentially cause augmented images to take on different classes. Figure 11 shows how a histogram color swapping augmentation may change a bird's class, leading to a manifold intrusion." }, { "heading": "D ADDITIONAL RESULTS", "text": "We include various additional results for CIFAR-10, CIFAR-10-C and CIFAR-10-P below. Figure 12 reports accuracy for each corruption, Table 5 reports calibration results for various architectures, and Table 6 reports clean error and mFR. We refer to Section 4.1 for details about the architecture and training setup." }, { "heading": "E CALIBRATION METRICS", "text": "Due to the finite size of empirical test sets, the RMS Calibration Error must be estimated by partitioning all n test set examples into b contiguous bins {B1, B2, . . . , Bb} ordered by prediction confidence. In this work we use bins which contain 100 predictions, so that we adaptively partition confidence scores on the interval [0, 1] (Nguyen & O'Connor, 2015; Hendrycks et al., 2019b). Other works partition the interval [0, 1] with 15 bins of uniform length (Guo et al., 2017). With these b bins, we estimate the RMS Calibration Error empirically with the formula\n√( Σ_{i=1}^{b} (|B_i|/n) ( (1/|B_i|) Σ_{k∈B_i} 1(y_k = ŷ_k) − (1/|B_i|) Σ_{k∈B_i} c_k )² ). (3)\nThis is separate from classification error because a random classifier with an approximately uniform posterior distribution is approximately calibrated. Also note that adding the "refinement" E_C[P(Y = Ŷ | C = c)(1 − P(Y = Ŷ | C = c))] to the square of the RMS Calibration Error gives us the Brier Score (Nguyen & O'Connor, 2015)." } ]
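As a concrete companion to Appendix E above, the adaptive-binning estimator in Eq. (3) can be sketched as follows. The inputs are per-example confidences and 0/1 correctness indicators; the bin size of 100 matches the text, and the function name is our own.

import numpy as np

def rms_calibration_error(confidences, correct, bin_size=100):
    """Empirical RMS Calibration Error with adaptive bins of ~100 predictions (Eq. 3)."""
    order = np.argsort(confidences)
    conf = np.asarray(confidences, dtype=float)[order]
    acc = np.asarray(correct, dtype=float)[order]    # 1(y_k = yhat_k), sorted by confidence
    n = len(conf)
    total = 0.0
    for start in range(0, n, bin_size):
        c = conf[start:start + bin_size]
        a = acc[start:start + bin_size]
        total += (len(c) / n) * (a.mean() - c.mean()) ** 2  # (|B_i|/n) * squared accuracy-confidence gap
    return float(np.sqrt(total))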
2019
AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty
SP:9c4bfe5e2bd7e16ad54d8b37b67f9d86192f9124
[ "The paper addresses the challenge of hard exploration tasks. The approach taken is to apply self-imitation to a diverse selection of trajectories from past experience -- practice re-doing the strangest things you've ever done. This is claimed to drive more efficient exploration in sparse-reward problems, leading to SOTA results for Montezuma's Revenge without certain common aides.", "The authors identify and address the problem of sub-optimal and myopic behaviors of self-imitation learning in environments with sparse rewards. The authors propose DTSIL to learn a trajectory-conditioned policy to imitate diverse trajectories from the agent’s own past experience. Unlike other self-imitation learning methods, the proposed method not only leverages sub-trajectories with high rewards, but lower-reward trajectories to encourage agent exploration diversity. The authors claim the proposed method to be more likely to find a global optimal solution. " ]
Imitation learning from human-expert demonstrations has been shown to be greatly helpful for challenging reinforcement learning problems with sparse environment rewards. However, it is very difficult to achieve similar success without relying on expert demonstrations. Recent works on self-imitation learning showed that imitating the agent's own past good experience could indirectly drive exploration in some environments, but these methods often lead to sub-optimal and myopic behavior. To address this issue, we argue that exploration in diverse directions by imitating diverse trajectories, instead of focusing on limited good trajectories, is more desirable for hard-exploration tasks. We propose a new method of learning a trajectory-conditioned policy to imitate diverse trajectories from the agent's own past experiences and show that such self-imitation helps avoid myopic behavior and increases the chance of finding a globally optimal solution for hard-exploration tasks, especially when there are misleading rewards. Our method significantly outperforms existing self-imitation learning and count-based exploration methods on various hard-exploration tasks with local optima. In particular, we report a state-of-the-art score of more than 20,000 points on Montezuma's Revenge without using expert demonstrations or resetting to arbitrary states.
[]
[ { "authors": [ "Marcin Andrychowicz", "Filip Wolski", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "OpenAI Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight experience replay", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Peter Auer" ], "title": "Using confidence bounds for exploitation-exploration trade-offs", "venue": "Journal of Machine Learning Research,", "year": 2002 }, { "authors": [ "Yusuf Aytar", "Tobias Pfaff", "David Budden", "Thomas Paine", "Ziyu Wang", "Nando de Freitas" ], "title": "Playing hard exploration games by watching youtube", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "Marc Bellemare", "Sriram Srinivasan", "Georg Ostrovski", "Tom Schaul", "David Saxton", "Remi Munos" ], "title": "Unifying count-based exploration and intrinsic motivation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": "arXiv preprint arXiv:1810.12894,", "year": 2018 }, { "authors": [ "Nuttapong Chentanez", "Andrew G Barto", "Satinder P Singh" ], "title": "Intrinsically motivated reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2005 }, { "authors": [ "Jongwook Choi", "Yijie Guo", "Marcin Moczulski", "Junhyuk Oh", "Neal Wu", "Mohammad Norouzi", "Honglak Lee" ], "title": "Contingency-aware exploration in reinforcement learning", "venue": "arXiv preprint arXiv:1811.01483,", "year": 2018 }, { "authors": [ "Honghua Dong", "Jiayuan Mao", "Xinyue Cui", "Lihong Li" ], "title": "Explicit recall for efficient exploration, 2019", "venue": "URL https://openreview.net/forum?id=B1GIB3A9YX", "year": 2019 }, { "authors": [ "Yan Duan", "Xi Chen", "Rein Houthooft", "John Schulman", "Pieter Abbeel" ], "title": "Benchmarking deep reinforcement learning for continuous control", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Yan Duan", "Marcin Andrychowicz", "Bradly Stadie", "OpenAI Jonathan Ho", "Jonas Schneider", "Ilya Sutskever", "Pieter Abbeel", "Wojciech Zaremba" ], "title": "One-shot imitation learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Adrien Ecoffet", "Joost Huizinga", "Joel Lehman", "Kenneth O Stanley", "Jeff Clune" ], "title": "Go-explore: a new approach for hard-exploration problems", "venue": null, "year": 1901 }, { "authors": [ "Benjamin Eysenbach", "Abhishek Gupta", "Julian Ibarz", "Sergey Levine" ], "title": "Diversity is all you need: Learning skills without a reward function", "venue": "arXiv preprint arXiv:1802.06070,", "year": 2018 }, { "authors": [ "Carlos Florensa", "David Held", "Xinyang Geng", "Pieter Abbeel" ], "title": "Automatic goal generation for reinforcement learning agents", "venue": "arXiv preprint arXiv:1705.06366,", "year": 2017 }, { 
"authors": [ "Tanmay Gangwani", "Qiang Liu", "Jian Peng" ], "title": "Learning self-imitating diverse policies", "venue": "arXiv preprint arXiv:1805.10309,", "year": 2018 }, { "authors": [ "Dibya Ghosh", "Avi Singh", "Aravind Rajeswaran", "Vikash Kumar", "Sergey Levine" ], "title": "Divide-andconquer reinforcement learning", "venue": "arXiv preprint arXiv:1711.09874,", "year": 2017 }, { "authors": [ "Karol Gregor", "Danilo Jimenez Rezende", "Daan Wierstra" ], "title": "Variational intrinsic control", "venue": "arXiv preprint arXiv:1611.07507,", "year": 2016 }, { "authors": [ "Todd Hester", "Matej Vecerik", "Olivier Pietquin", "Marc Lanctot", "Tom Schaul", "Bilal Piot", "Dan Horgan", "John Quan", "Andrew Sendonaris", "Ian Osband" ], "title": "Deep q-learning from demonstrations", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Evan Zheran Liu", "Ramtin Keramati", "Sudarshan Seshadri", "Kelvin Guu", "Panupong Pasupat", "Emma Brunskill", "Percy Liang" ], "title": "Learning abstract models for long-horizon exploration, 2019", "venue": "URL https://openreview.net/forum?id=ryxLG2RcYX", "year": 2019 }, { "authors": [ "Marlos C Machado", "Marc G Bellemare", "Erik Talvitie", "Joel Veness", "Matthew Hausknecht", "Michael Bowling" ], "title": "Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2017 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K. Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature,", "year": 2015 }, { "authors": [ "Ashvin Nair", "Dian Chen", "Pulkit Agrawal", "Phillip Isola", "Pieter Abbeel", "Jitendra Malik", "Sergey Levine" ], "title": "Combining self-supervised learning and imitation for vision-based rope manipulation", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2017 }, { "authors": [ "Ian Osband", "Yotam Doron", "Matteo Hessel", "John Aslanides", "Eren Sezener", "Andre Saraiva", "Katrina McKinney", "Tor Lattimore", "Csaba Szepesvári", "Satinder Singh", "Benjamin Van Roy", "Richard Sutton", "David Silver", "Hado van Hasselt" ], "title": "Behaviour suite for reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Georg Ostrovski", "Marc G Bellemare", "Aäron van den Oord", "Rémi Munos" ], "title": "Count-based exploration with neural density models", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Tom Le Paine", "Caglar Gulcehre", "Bobak Shahriari", "Misha Denil", "Matt Hoffman", "Hubert Soyer", "Richard Tanburn", "Steven Kapturowski", "Neil Rabinowitz", "Duncan Williams" ], "title": "Making efficient use of demonstrations to solve hard exploration problems", "venue": "arXiv preprint arXiv:1909.01387,", "year": 2019 }, { "authors": [ "Deepak Pathak", "Parsa Mahmoudieh", "Guanghao Luo", "Pulkit Agrawal", "Dian Chen", "Yide Shentu", "Evan Shelhamer", "Jitendra Malik", "Alexei A Efros", "Trevor Darrell" ], "title": "Zero-shot visual imitation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 
Workshops,", "year": 2018 }, { "authors": [ "Matthias Plappert", "Marcin Andrychowicz", "Alex Ray", "Bob McGrew", "Bowen Baker", "Glenn Powell", "Jonas Schneider", "Josh Tobin", "Maciek Chociej", "Peter Welinder" ], "title": "Multi-goal reinforcement learning: Challenging robotics environments and request for research", "venue": "arXiv preprint arXiv:1802.09464,", "year": 2018 }, { "authors": [ "Tobias Pohlen", "Bilal Piot", "Todd Hester", "Mohammad Gheshlaghi Azar", "Dan Horgan", "David Budden", "Gabriel Barth-Maron", "Hado van Hasselt", "John Quan", "Mel Večerı́k" ], "title": "Observe and look further: Achieving consistent performance on atari", "venue": "arXiv preprint arXiv:1805.11593,", "year": 2018 }, { "authors": [ "Vitchyr H Pong", "Murtaza Dalal", "Steven Lin", "Ashvin Nair", "Shikhar Bahl", "Sergey Levine" ], "title": "Skew-fit: State-covering self-supervised reinforcement learning", "venue": null, "year": 1903 }, { "authors": [ "Melrose Roderick", "Christopher Grimm", "Stefanie Tellex" ], "title": "Deep abstract q-networks", "venue": "In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pp. 131–138. International Foundation for Autonomous Agents and Multiagent Systems,", "year": 2018 }, { "authors": [ "Tim Salimans", "Richard Chen" ], "title": "Learning montezuma’s revenge from a single demonstration", "venue": "arXiv preprint arXiv:1812.03381,", "year": 2018 }, { "authors": [ "Tom Schaul", "Daniel Horgan", "Karol Gregor", "David Silver" ], "title": "Universal value function approximators", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Tom Schaul", "John Quan", "Ioannis Antonoglou", "David Silver" ], "title": "Prioritized experience replay", "venue": "arXiv preprint arXiv:1511.05952,", "year": 2015 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Adaptive confidence and adaptive curiosity", "venue": "Technical report, Citeseer,", "year": 1991 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Christopher Stanton", "Jeff Clune" ], "title": "Deep curiosity search: Intra-life exploration improves performance on challenging deep reinforcement learning problems", "venue": "arXiv preprint arXiv:1806.00553,", "year": 2018 }, { "authors": [ "Alexander L Strehl", "Michael L Littman" ], "title": "An analysis of model-based interval estimation for markov decision processes", "venue": "Journal of Computer and System Sciences,", "year": 2008 }, { "authors": [ "Kaushik Subramanian", "Charles L Isbell Jr.", "Andrea L Thomaz" ], "title": "Exploration from demonstration for interactive reinforcement learning", "venue": "In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems,", "year": 2016 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V Le" ], "title": "Sequence to sequence learning with neural networks. 
In Advances in neural information processing", "venue": null, "year": 2014 }, { "authors": [ "Richard S Sutton", "David A McAllester", "Satinder P Singh", "Yishay Mansour" ], "title": "Policy gradient methods for reinforcement learning with function approximation", "venue": "In Advances in neural information processing systems,", "year": 2000 }, { "authors": [ "Haoran Tang", "Rein Houthooft", "Davis Foote", "Adam Stooke", "OpenAI Xi Chen", "Yan Duan", "John Schulman", "Filip DeTurck", "Pieter Abbeel" ], "title": "exploration: A study of count-based exploration for deep reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Choi" ], "title": "ri), the state embedding (levelt, roomt, xt, yt, kt) makes the size of embedding space smaller so that the exploration could be more efficient. Such state representation conflates similar states while not conflating states that are meaningfully different. Therefore, our method could reach a higher average score around 29,817", "venue": null, "year": 2018 }, { "authors": [ "Oh" ], "title": "such good trajectories as demonstrations to guide the agent, while PPO+EXP might forget the good trajectories occasionally found or fails to exploit them before the exploration bonus vanishes. The importance of exploitation of the good experience to help the agent reproduce high-reward trajectories", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Hard-exploration tasks, particularly characterized by sparse environment rewards, are traditionally challenging in reinforcement learning (RL), because the agent must carefully balance the exploration and exploitation when taking a long sequence of actions to receive infrequent non-zero rewards. Demonstration data has been shown to be helpful for tackling hard-exploration problems (Subramanian et al., 2016); many existing methods (Hester et al., 2018; Pohlen et al., 2018; Aytar et al., 2018; Salimans & Chen, 2018) provide the guidance for exploration based on imitation learning of expert demonstrations and achieve strong performances on hard-exploration tasks. However, the reliance on human demonstrations largely limits the general applicability of such approaches.\nThe agent’s own past good trajectories with high total rewards are easily accessible (though imperfect) alternatives for the human-expert demonstrations. Recent works (Oh et al., 2018; Gangwani et al., 2018) verify that imitation learning from the agent’s previous good trajectories could indirectly drive exploration in certain environments. However, imitation of good experiences within limited directions might hurt exploration in some cases. Specifically, in environments with misleading rewards which may trap the agent in local optima, simply imitating ‘good’ trajectories that would accumulate misleading positive rewards may guide the agent to a myopic behavior and hinder it from reaching a higher return in the longer term. Therefore, imitating diverse trajectories would\nbe more desirable to help encourage exploration in diverse directions and avoid being distracted by the misleading rewards. For example, as illustrated in Figure 1, the agent starts in the bottom left corner where it can easily collect apples near its initial location by random exploration and achieve a small positive reward. If the agent imitates the trajectories around the orange path, it would receive the nearby positive rewards quickly but it is unlikely to collect the gold within a given time limit. Therefore, in order to find the optimal path (purple), it is better to exploit the past experiences in diverse directions (gray paths), instead of focusing only on the trajectories with the myopic behavior.\nThis paper investigates how imitation of diverse past trajectories leads a further exploration and helps avoid getting stuck at a sub-optimal behavior. Our main contributions are summarized as follows: (1) We propose a novel architecture for a trajectory-conditioned policy that can imitate diverse demonstrations. (2) We show the importance of imitating diverse past experiences to indirectly drive exploration to different regions of the environment, by comparing to existing approaches on various sparse-reward reinforcement learning tasks with discrete and continuous action space. (3) We achieve a performance comparable with the state-of-the-art on hard-exploration Atari game of Montezuma’s Revenge and Pitfall without using expert demonstrations or resetting to arbitrary states." }, { "heading": "2 RELATED WORK", "text": "Imitation Learning The goal of imitation learning is to train a policy to mimic a given demonstration. Many previous works achieve good results on hard-exploration Atari games by imitating human demonstrations. DQfD (Hester et al., 2018) combines the temporal difference updates in Q-learning with the supervised classification of the demonstrators’ actions. 
Ape-X DQfD (Pohlen et al., 2018) extends DQfD with the transformed Bellman operator and a temporal consistency loss to improve the efficiency of exploration. Aytar et al. (2018) learn embeddings from a variety of demonstration videos and propose a one-shot imitation learning reward, which inspires the design of the reward in our method. All these successful attempts rely on the availability of human demonstrations. In contrast, our method treats the agent's past trajectories as demonstrations. Imitation learning is more difficult when the environment becomes more stochastic because the demonstrations cannot account for all possible situations. Our method allows for some flexibility to follow the demonstrations in a soft order and thus could perform well in environments with a moderate degree of stochasticity. As discussed in Appendix C.1, we can easily extend our method to handle a more challenging scenario (e.g., where the location of objects could be random). Yet, imitation learning in extremely stochastic environments is still an open problem (Ghosh et al., 2017; Paine et al., 2019). Self-Imitation. Learning a good policy by imitating past experiences has been discussed in prior work, where the agent is trained to imitate only the high-reward trajectories with the SIL (Oh et al., 2018) or GAIL objective (Gangwani et al., 2018). In contrast, we store past trajectories ending with diverse states in the buffer, because trajectories with low reward in the short term might lead to high reward in the long term, and thus following a diverse set of trajectories could be beneficial for discovering optimal solutions. Furthermore, our method focuses on explicit trajectory-level imitation while existing methods use sampled state-action pairs from the buffer to update the policy. Gangwani et al. (2018) proposed to learn multiple diverse policies in a SIL framework using the Stein Variational Policy Gradient. Empirically, their exploration can be limited by the number of policies learned simultaneously and the exploration performance of every single policy, as shown in Appendix F. Exploration. Many exploration methods (Schmidhuber, 1991; Auer, 2002; Chentanez et al., 2005; Strehl & Littman, 2008) in RL tend to award a bonus to encourage an agent to visit novel states. Recently this idea was scaled up to large state spaces (Tang et al., 2017; Bellemare et al., 2016; Ostrovski et al., 2017; Burda et al., 2018). We propose that instead of directly taking a quantification of novelty as an intrinsic reward, one can encourage exploration by rewarding the agent when it successfully imitates demonstrations that would lead to novel states, and gain the advantages in exploitation, as discussed in Appendix I. Go-Explore (Ecoffet et al., 2019) also shows the benefit of exploration by returning to promising states. Our method can be viewed in general as an extension of Go-Explore, though we do not need to explicitly divide learning into two phases of exploration and robustification. Go-Explore relies on the assumption that the environment is resettable. Resetting to an arbitrary state is often infeasible in real environments and gives an unfair advantage. When using a perfect goal-conditioned policy instead of a direct 'reset' function, this variant of Go-Explore may not explore as efficiently as our method, as discussed in Appendix H. Previous works attempted to reach a goal state by learning a set of sub-policies (Liu et al., 2019) or a goal-conditioned policy in pixel observation space (Dong et al., 2019). Gregor et al. 
(2016); Eysenbach et al. (2018); Pong et al. (2019) seek diversity of exploration by maximizing the entropy of mixture skill policies or generated goal states. However, these methods have not shown strong experimental results on sparse-reward environments with a rich observation space like Atari games. Goal-Conditioned Policy. Andrychowicz et al. (2017); Nair et al. (2017); Schaul et al. (2015a); Pathak et al. (2018) studied learning a goal-conditioned policy. Our trajectory-conditioned policy could be viewed as a goal-conditioned policy. Similarly to hindsight experience replay (Andrychowicz et al., 2017), our approach samples goal states from past experiences. Compared to conditioning on a single final goal state, the state trajectory includes rich intermediate information leading the agent to follow a demonstration and reach the goal state even far away from the current state. Our method shares the same motivation as Duan et al. (2017), which uses an attention model over the demonstration but mainly focuses on the block stacking task. However, our architecture is simpler since it does not use an attention model over the current observation, and our method is evaluated on various environments." }, { "heading": "3 METHOD", "text": "The main idea of our method is to maintain a buffer of diverse trajectories collected during training and to train a trajectory-conditioned policy by leveraging reinforcement learning and supervised learning to roughly follow demonstrations sampled from the trajectory buffer. The demonstration trajectories cover diverse possible directions in the environment. Therefore, the agent is encouraged to explore beyond various visited states in the environment and gradually push its exploration frontier further. In the meantime, we can train the policy to imitate the best trajectories collected to exploit the past good experiences. We put more weight on exploration in the early stage of training, and then increase the probability of imitating the best trajectories (i.e., exploitation) as training goes on. We name our method Diverse Trajectory-conditioned Self-Imitation Learning (DTSIL). The overall procedure is summarized in Algorithm 1.\nAlgorithm 1 Diverse Self-Imitation Learning with Trajectory-Conditioned Policy\nInitialize parameter θ for the trajectory-conditioned policy πθ(at | e≤t, ot, g)\nInitialize the trajectory buffer D ← ∅  # Store diverse past trajectories\nInitialize the set of transitions in the current episode E ← ∅  # Store current episode trajectory\nInitialize the set of on-policy samples F ← ∅  # Store data for on-policy PPO update\nInitialize demonstration trajectory g ← ∅\nfor each iteration i from 1 to I do\n  for each step t do\n    Observe st = {ot, et} and choose an action at ∼ πθ(at | e≤t, ot, g)\n    Execute action at in the environment to get rt, ot+1, et+1\n    Store transition E ← E ∪ {(ot, et, at, rt)}\n    # Positive reward if agent follows demonstration g\n    # No reward after agent completes g and then takes random exploration\n    Determine r^{DTSIL}_t by comparing e≤t+1 with g (Eq. 1)\n    Store on-policy sample F ← F ∪ {(ot, et, at, g, r^{DTSIL}_t)}\n  end for\n  if st+1 is terminal then\n    D ← UpdateBuffer(D, E) (Alg. 2)\n    Clear current episode trajectory E ← ∅\n    g ← SampleDemo(D, i, I) (Alg. 3)\n  end if\n  θ ← θ − η∇θ L^{RL}  # Perform PPO update using on-policy samples (Eq. 2)\n  Clear on-policy samples F ← ∅\n  θ ← θ − η∇θ L^{SL}  # Perform supervised learning updates using samples from D for J times (Eq. 3)\nend for"
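To make Algorithm 1 above concrete, here is a compact Python-style sketch of the training loop. It is only a sketch: the environment interface and the helpers update_buffer, sample_demo, imitation_reward, ppo_update, and supervised_update are hypothetical placeholders for the components detailed in Sections 3.2-3.4 and Appendices A.1-A.2.

def dtsil_training_loop(env, policy, num_iterations):
    """Sketch of Algorithm 1; `policy` is the trajectory-conditioned policy pi_theta."""
    buffer = []                    # diverse past trajectories (Section 3.2)
    episode, onpolicy = [], []
    demo, u = [], -1               # sampled demonstration; index of last matched embedding
    obs, emb = env.reset()
    for _ in range(num_iterations):
        action = policy.act(episode, obs, demo)        # a_t ~ pi_theta(.|e_<=t, o_t, g)
        (obs, emb), env_reward, done = env.step(action)
        r, u = imitation_reward(emb, demo, u)          # r^DTSIL_t for following g (Section 3.4)
        episode.append((obs, emb, action, env_reward))
        onpolicy.append((obs, emb, action, demo, r))
        if done:
            buffer = update_buffer(buffer, episode)    # cluster and keep best trajectories
            demo, u = sample_demo(buffer), -1          # exploration or imitation mode (Section 3.3)
            episode = []
            obs, emb = env.reset()
        policy.ppo_update(onpolicy)                    # RL update on the imitation reward
        onpolicy = []
        policy.supervised_update(buffer)               # imitate stored demonstrations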
}, { "heading": "3.1 BACKGROUND AND NOTATION", "text": "In the standard reinforcement learning setting, at each time step t, an agent observes a state st, selects an action at ∈ A, and receives a reward rt when transitioning to a next state st+1 ∈ S , where S is a set of all states and A is a set of all actions. The goal is to find a policy πθ(a|s) parameterized by θ that maximizes the expected return Eπθ [ ∑T t=0 γ\ntrt], where γ ∈ (0, 1] is a discount factor. In our work, we assume a state st includes the agent’s observation ot (e.g., raw pixel image) and a high-level abstract state embedding et (e.g., the agent’s location in the abstract space). The embedding et may be learnable from o≤t (e.g., ADM (Choi et al., 2018) could localize the agent in Atari games), but in this work, we consider a setting where high-level embedding is provided as a part of st 1. A trajectory-conditioned policy πθ(at|e≤t, ot, g) (which we refer to as πθ(·|g) in shorthand notation) takes a sequence of state embeddings g = {eg1, e g 2, · · · , e g |g|} as input for a demonstration, where |g| is\n1In many important application domains (e.g. the robotics domain), such handcrafted representation is available. Also, learning a good state representation itself is an important open question and extremely challenging especially for hard-exploration and sparse-reward environments, which is not the main focus of this work. Therefore, we assume the availability of the high-level representations as many previous works (Florensa et al., 2017; Liu et al., 2019; Ecoffet et al., 2019; Plappert et al., 2018)\nthe length of the trajectory g. A sequence of the agent’s past state embeddings e≤t = {e1, e2, · · · , et} is provided to determine which part of the demonstration has been followed by the agent. Together with the current observation ot, it helps to determine the correct action at to accurately imitate the demonstration. Our goal here is to find a set of optimal state embedding sequence(s) g∗ and the policy π∗θ(·|g) to maximize the return: g∗, θ∗ , arg maxg,θ Eπθ(·|g)[ ∑T t=0 γ\ntrt]. For robustness we may want to find multiple near-optimal embedding sequences with similar returns and a trajectoryconditioned policy for executing them. In our implementation, we train the trajectory-conditioned policy to imitate the best trajectories. Alternatively, an unconditional stochastic policy could also be trained to imitate the best trajectories, which may further improve generalization and robustness (see Appendix C.1 for more discussion and experiments)." }, { "heading": "3.2 ORGANIZING TRAJECTORY BUFFER", "text": "We maintain a trajectory buffer D = {(e(1), τ (1), n(1)), (e(2), τ (2), n(2)), · · · } of diverse past trajectories. For each embedding-trajectory-count tuple (e(i), τ (i), n(i)), τ (i) is the best trajectory ending with a state with the high-level representation e(i), and n(i) is the number of times the cluster represented by this state embedding e(i) has been visited during training. To maintain a compact buffer, similar state embeddings within the tolerance threshold th can be clustered together, and the existing entry is replaced if an improved trajectory τ (i) ending with a near-identical state is found.\nWhen given a new episode E = {(o0, e0, a0, r0), · · · , (oT , eT , aT , rT )}, all of the state embeddings et(1 ≤ t ≤ T ) in this episode E are considered as follows (similarly to Ecoffet et al. 
(2019)), because the buffer should maintain all of the possible paths available for future exploration, to avoid missing any possibility of finding an optimal solution. If the Euclidean distance between e_t and every state embedding e^(i) in the buffer is larger than th (i.e., e_t does not belong to any existing cluster in the buffer), then (e_t, τ_{≤t}, 1) is directly pushed into the buffer, where τ_{≤t} = {(o_0, e_0, a_0, r_0), · · · , (o_t, e_t, a_t, r_t)} is the agent's partial episode ending with e_t. If there exists e^(k) similar to e_t (i.e., e^(k) and e_t belong to the same cluster within threshold th) and the partial episode τ_{≤t} is better (i.e., higher return or shorter trajectory) than the stored trajectory τ^(k), then τ^(k) is replaced by the current trajectory τ_{≤t}, and e^(k) is replaced by e_t to represent this cluster of state embeddings. The full algorithm in pseudo-code is described in Appendix A.1." }, { "heading": "3.3 SAMPLING DEMONSTRATIONS", "text": "When learning a trajectory-conditioned policy π, demonstration trajectories are sampled from the buffer D. We record the count n^(i) of how many times the cluster represented by the state embedding e^(i) has been visited. In the exploration mode, we set the probability of sampling each trajectory proportional to 1/√n^(i). This is inspired by the count-based exploration bonus (Strehl & Littman, 2008; Bellemare et al., 2016) and the idea of rank-based prioritization (Schaul et al., 2015b; Ecoffet et al., 2019): we prioritize a trajectory that ends with a less frequently visited state, because this leads the agent to reach rarely visited regions in the state space and is more promising for discovering novel states.

On the other hand, in the imitation mode, we sample the best trajectories stored in the buffer for imitation learning. These trajectories are used to train the policy to converge to a high-reward behavior (Aytar et al., 2018; Ecoffet et al., 2019). To balance between exploration and exploitation, we decrease the probability of taking the exploration mode and exploit the best experiences more as training goes on. The algorithm is described in Appendix A.2." }, { "heading": "3.4 LEARNING TRAJECTORY-CONDITIONED POLICY", "text": "Imitation Reward. Given a demonstration trajectory g = {e^g_0, e^g_1, · · · , e^g_{|g|}}, we provide reward signals for imitating g. At the beginning of an episode, the index u of the last visited state embedding in the demonstration is initialized as u = −1. At each step t, if the agent's new state s_{t+1} has an embedding e_{t+1} and it is similar enough to any of the next Δt state embeddings starting from the last visited state embedding e^g_u in the demonstration (i.e., ‖e_{t+1} − e^g_{u′}‖ < th, where u < u′ ≤ u + Δt), then it receives a positive imitation reward r^im, and the index of the last visited state embedding in the demonstration is updated as u ← u′. This encourages the agent to visit the state embeddings in the demonstration in a soft order, so that the agent can explore around the demonstration while the demonstration guides the agent to the region of interest in the state embedding space.
To summarize, the agent receives a reward r^DTSIL_t defined as

r^DTSIL_t = f(r_t) + r^im if ∃u′ with u < u′ ≤ u + Δt such that ‖e^g_{u′} − e_{t+1}‖ < th, and r^DTSIL_t = 0 otherwise, (1)

where f(·) is a monotonically increasing function (e.g., reward clipping (Mnih et al., 2015)). Figure 2 illustrates the updates of u during an episode when the agent visits a state whose embedding is close to state embeddings in the demonstration g.

Policy Architecture. For imitation learning with diverse demonstrations, we design a trajectory-conditioned policy π_θ(a_t|e_{≤t}, o_t, g) that should imitate any given trajectory g. Inspired by neural machine translation methods (Sutskever et al., 2014; Bahdanau et al., 2014), one can view the demonstration as the source sequence and the incomplete trajectory of the agent's state representations as the target sequence. We apply a recurrent neural network (RNN) and an attention mechanism to the sequence data to predict actions that would make the agent follow the demonstration.

As illustrated in Figure 3, the RNN computes the hidden features h^g_i for each state embedding e^g_i (0 ≤ i ≤ |g|) in the demonstration and derives the hidden features h_t for the agent's state representation e_t. Then the attention weight α_t is computed by comparing the current agent's hidden features h_t with the demonstration's hidden features h^g_i (0 ≤ i ≤ |g|). The attention readout c_t is computed as an attention-weighted summation of the demonstration's hidden features, to capture the relevant information in the demonstration trajectory and to predict the action a_t. More details of the policy architecture are described in Appendix B.

Reinforcement Learning Objective. With the reward defined as r^DTSIL_t (Equation 1), the trajectory-conditioned policy π_θ can be trained with a policy gradient algorithm (Sutton et al., 2000):

L^RL = E_{π_θ}[− log π_θ(a_t|e_{≤t}, o_t, g) Â_t], where Â_t = ∑_{d=0}^{n−1} γ^d r^DTSIL_{t+d} + γ^n V_θ(e_{≤t+n}, o_{t+n}, g) − V_θ(e_{≤t}, o_t, g), (2)

where the expectation E_{π_θ} indicates the empirical average over a finite batch of on-policy samples, and n denotes the number of rollout steps taken in each iteration. We use Proximal Policy Optimization (PPO) (Schulman et al., 2017) as the actor-critic policy gradient algorithm for our experiments.

Supervised Learning Objective. To improve trajectory-conditioned imitation learning and to better leverage the past trajectories, we propose a supervised learning objective. We sample a trajectory τ = {(o_0, e_0, a_0, r_0), (o_1, e_1, a_1, r_1), · · · } ∈ D, formulate the demonstration g = {e_0, e_1, · · · , e_{|g|}}, and assume the agent's incomplete trajectory is the partial trajectory g_{≤t} = e_{≤t} = {e_0, e_1, · · · , e_t} for any 1 ≤ t ≤ |g|. Then a_t is the 'correct' action at step t for the agent to imitate the demonstration. Our supervised learning objective is to maximize the log probability of taking such actions:

L^SL = − log π_θ(a_t|e_{≤t}, o_t, g), where g = {e_0, e_1, · · · , e_{|g|}}. (3)
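To make Eq. (1) concrete, the following is a minimal Python sketch of one step of the imitation reward computation. It is our own illustration, not the authors' code: the default values Δt = 8 and r^im = 0.1 follow Appendices J and B, while th = 1.0 and the array layout are assumptions.

```python
import numpy as np

def f(r):
    # Monotonically increasing transform of the environment reward; identity here
    # (Apple-Gold/Deep Sea/MuJoCo use f(r) = r, Atari uses 2*clip(r, 0, 1); Appendix B).
    return r

def imitation_reward_step(e_next, demo, u, env_reward,
                          delta_t=8, th=1.0, r_im=0.1):
    """One step of Eq. (1). Returns (r_dtsil, updated index u).

    e_next     : embedding of the new state s_{t+1}
    demo       : list of demonstration embeddings e^g_0 .. e^g_{|g|}
    u          : index of the last matched demonstration embedding (init -1)
    env_reward : environment reward r_t
    """
    last = min(u + delta_t, len(demo) - 1)
    for u_prime in range(u + 1, last + 1):
        if np.linalg.norm(np.asarray(demo[u_prime]) - np.asarray(e_next)) < th:
            # Soft-order match found: reward the agent and advance the pointer.
            return f(env_reward) + r_im, u_prime
    return 0.0, u   # no match: zero reward, pointer unchanged
```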
(3)" }, { "heading": "4 EXPERIMENTS", "text": "In the experiments, we aim to answer the following questions: (1) How well does the trajectoryconditioned policy imitate the diverse demonstration trajectories? (2) Does imitation of the past diverse experience enable the agent to further explore more diverse directions and guide the exploration to find the trajectory with a near-optimal total reward? (3) Can our proposed method aid in avoiding myopic behaviors and converge to near-optimal solutions?\n(a) Curve of the average episode reward\n0M 5M 10M 15M 20M 25M Steps\n2\n4\n6\n8\nBe st\nR ew\nar d\n(b) Curve of the best episode reward found during training\n0M 5M 10M 15M 20M 25M Steps\n100\n150\n200\n250\n300\n350\nNu m\nbe r o\nf F ou\nnd S\nta te\n(c) Curve of the number of state embedding clusters found during training\n0M 5M 10M 15M 20M 25M Steps\n0.2\n0.4\n0.6\n0.8\n1.0\nAv er\nag e\nIm ita\ntio n\nSu cc\nes s R\nat io\n(d) Curve of the average imitation success ratio\nFigure 4: Learning curves on Apple-Gold domain averaged over 5 runs, where the curves in dark colors are average over 5 curves in light colors. The x-axis and y-axis correspond to the number of steps and statistics about the performance, respectively. The average reward and average imitation success ratio are the mean values over 40 recent episodes.\nPP O\n+S IL\nD T\nSI L\n0M steps 3.2M steps 6.4M steps 9.6M steps\nFigure 5: Visualization of trajectories stored in the buffer for PPO+SIL and DTSIL (ours) over time. The agent (gray), apple (red) and gold (yellow) are shown as squares for simplicity. The rocky region is in light blue.\nWe compare our method with the following baselines: (1) PPO: Proximal Policy Optimization (Schulman et al., 2017); (2) PPO+EXP: PPO with reward f(rt) + λ/ √ N(et), where λ/ √ N(et) is the count-based exploration bonus, N(e) is the number of times the cluster which the state representation e belongs to was visited during training and λ is the hyper-parameter controlling the weight of exploration term; (3) PPO+SIL: PPO with Self-Imitation Learning (Oh et al., 2018). More details about the implementation can be found in the Appendix." }, { "heading": "4.1 APPLE-GOLD DOMAIN", "text": "The Apple-Gold domain (shown in Figure 1) is a simple grid-world environment with misleading rewards that can lead the agent to local optima. An observation consists of the agent’s location (xt, yt) and binary variables showing whether the agent has gotten the apple or the gold. A state is represented as the agent’s location and the cumulative positive reward: et = (xt, yt, ∑t i=1 max(ri, 0)), indicating the location of the agent and the collected objects.\nAs shown in Figure 4a, PPO, PPO+SIL, and PPO+EXP agents are stuck with the sub-optimal policy of collecting the two apples. In Figure 4b, PPO+EXP agent could occasionally explore further and gather the gold with total reward 8.5. However, the agent does not replicate the good trajectory due to the negative reward along the optimal path and network forgetting about the good experiences. DTSIL marches forward on the right side of the maze and achieves the highest total reward 8.5 within the time limit. Figure 4c shows the number of different state embeddings found during training.\nIn Figure 4d, we show the average success ratio of the imitation during training. 
It is defined as follows: for a given demonstration g = {e^g_0, e^g_1, · · · , e^g_{|g|}}, let u be the index of the last visited state embedding in g when the agent's current episode terminates; then the success ratio of imitating g is u/|g| (i.e., the portion of the trajectory imitated). Ideally, we want the success ratio to be 1.0, which indicates that the trajectory-conditioned policy can successfully follow any given demonstration from the buffer. At 5M steps, the trajectories with the optimal total reward 8.5 are found, and our trajectory-conditioned policy eventually imitates them well, with a success ratio around 1.0.

Figure 5 visualizes the learning process. PPO+SIL fails on this task because the agent quickly exploits the good experiences of collecting the apples and the buffer is filled with trajectories in the nearby region. On the contrary, DTSIL maintains a buffer of diverse trajectories which are used as demonstrations to guide the agent to explore different regions and discover an optimal behavior." }, { "heading": "4.2 ATARI MONTEZUMA'S REVENGE AND PITFALL", "text": "We evaluate our method on the hard-exploration games Montezuma's Revenge and Pitfall in the Arcade Learning Environment (ALE) (Bellemare et al., 2013; Machado et al., 2017). The environment setting is the same as in Mnih et al. (2015). There is a random initial delay resulting in stochasticity in the environment. The observation is a frame of raw pixel images, and the state representation e_t = (room_t, x_t, y_t, ∑_{i=1}^{t} max(r_i, 0)) consists of the agent's ground-truth location (obtained from RAM) and the accumulated positive environment reward, which implicitly indicates the objects the agent has collected.² It is worth noting that, even with the ground-truth location of the agent, on these two infamously difficult games it is highly non-trivial to explore efficiently and avoid local optima without relying on expert demonstrations or being able to reset to arbitrary states. In addition to the agent's location information, many complicated elements such as moving entities, traps, and the agent's inventory are included in the state. Therefore, these Atari games with the agent's location information are still much more challenging than the grid-world environments. Empirically, as summarized in Table 1, the previous SOTA baselines using the agent's ground-truth location information even fail to achieve high scores.

Using the state representation e_t, we introduce a variant 'DTSIL+EXP' that adds a count-based exploration bonus r^+_t = 1/√N(e_t) to Eq. 1 for faster exploration.³ As shown in Figures 6a and 6e, in the early stage the average episode reward of DTSIL+EXP is worse than that of PPO+EXP, because our policy is trained to imitate diverse demonstrations rather than directly maximize the environment reward. Contrary to PPO+EXP, the DTSIL+EXP agent is not eager to myopically follow the high-reward path, since a path with a relatively low score in the short term might lead to higher rewards in the long term. On Montezuma's Revenge, for example, with two keys in hand, the PPO+EXP agent often opens a nearby door and loses the chance of opening the last two doors of the first level.⁴ As training continues, DTSIL+EXP successfully discovers trajectories to pass the first level with a total reward of more than 20,000, as shown in Figure 6b.
While gradually increasing the probability of imitating the best trajectories in the buffer by sampling them as demonstrations, the average episode reward could increase to surpass 20,000 in Figure 6a. On Pitfall, the positive reward is much sparser, and most of the actions yield small negative rewards that would discourage getting a high total reward in the long term. However, our method still stores the trajectories with negative rewards, encourages the agent to visit these novel regions, and then discovers good paths with positive rewards, as illustrated in Figure 6f. Therefore we are able to eventually reach an average episode reward over 0 in Figure 6e, without expert demonstrations.

Table 1 compares our proposed method with previous works that do not use any expert demonstration or reset to an arbitrary state, where our approach significantly outperforms the other approaches which make use of the same information from RAM about the agent's location. In Appendix C.3, we present more experimental results on other interesting environments with discrete action spaces, such as Deep Sea (Osband et al., 2019).

²We can also use the number of keys as an element in the state embedding, as in (Ecoffet et al., 2019), to reduce the size of the state embedding space and improve the performance, as shown in Appendix H.

³Note that the existing exploration methods listed in Table 1 already take advantage of a count-based exploration bonus (e.g., A2C+CoEX+RAM, SmartHash, DeepCS, and Abstract-HRL). Therefore, the combination of DTSIL and the count-based exploration bonus does not introduce unfair advantages over other baselines.

⁴Demo videos of the learned policies for both PPO+EXP and DTSIL+EXP are available at https://sites.google.com/view/diverse-sil. In comparison to DTSIL+EXP, we can see the PPO+EXP agent does not explore enough to make the best use of the tools (e.g. sword, key) collected in the game. A map of this level is shown in Figure 16 in the Appendix." }, { "heading": "4.3 MUJOCO", "text": "We evaluate DTSIL on continuous control tasks. We adapt the maze environment introduced in Duan et al. (2016) to construct a set of challenging tasks, which require the point mass agent to collect the key, open the door with the same color, and finally reach the treasure to get a high score. A key cannot be re-used once it has been used to open a door with the same color, which makes the agent easily trapped. A visualization of these environments is shown in Figure 7. The agent's initial location is randomly sampled from a Gaussian distribution, as in standard MuJoCo tasks (Brockman et al., 2016). The observation is the agent's location and range sensor readings about nearby objects. The state representation is e_t = (x_t, y_t, ∑_{i=1}^{t} r_i).

As shown in the first maze of Figure 7, the agent can easily get the blue key near its initial location and open the blue door in the upper part. However, the optimal path is to bring the key to open the blue door at the bottom and obtain the treasure, reaching an episode reward of 9. In the second maze, the agent should bring the blue key and pick up the green key while avoiding opening the blue door in the upper part. Then, the green and blue keys can open the two doors at the bottom of the maze, which results in a total reward of 12. The learning curves in Figure 7 show that PPO, PPO+EXP, and PPO+SIL may get stuck at a sub-optimal behavior, whereas our policy eventually converges to the behavior achieving a high episode reward."
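The trajectory buffer that drives these results is maintained and sampled by the routines summarized as Alg. 2 and Alg. 3 in Appendix A below. A compact Python sketch of both is given here; the data layout (each buffer entry stores an embedding, its best trajectory of (o, e, a, r) tuples, and a visit count), the threshold value, and the top-1 imitation sampling are our own simplifying assumptions.

```python
import random
import numpy as np

def episode_return(traj):
    return sum(step[3] for step in traj)   # step = (o, e, a, r)

def update_buffer(buffer, episode, th=1.0):
    """Alg. 2 sketch: per embedding cluster, keep the best trajectory ending there."""
    for t in range(len(episode)):
        emb = np.asarray(episode[t][1])
        partial = episode[:t + 1]
        for entry in buffer:
            if np.linalg.norm(entry["emb"] - emb) < th:        # existing cluster
                better = (episode_return(partial) > episode_return(entry["traj"])
                          or (episode_return(partial) == episode_return(entry["traj"])
                              and len(partial) < len(entry["traj"])))
                if better:
                    entry["emb"], entry["traj"] = emb, partial
                entry["count"] += 1
                break
        else:                                                  # new cluster
            buffer.append({"emb": emb, "traj": partial, "count": 1})
    return buffer

def sample_demo(buffer, it, num_iterations):
    """Alg. 3 sketch: imitation mode with probability it/num_iterations,
    exploration mode otherwise (top-1 here; the paper samples among the top-K)."""
    if random.random() < it / num_iterations:                  # imitation mode
        entry = max(buffer, key=lambda e: episode_return(e["traj"]))
    else:                                                      # exploration mode
        w = np.array([1.0 / np.sqrt(e["count"]) for e in buffer])
        entry = buffer[np.random.choice(len(buffer), p=w / w.sum())]
    return [step[1] for step in entry["traj"]]                 # embedding sequence
```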
}, { "heading": "5 CONCLUSION", "text": "This paper proposes to learn diverse policies by imitating diverse trajectory-level demonstrations through count-based exploration over these trajectories. Imitation of diverse past trajectories can guide the agent to rarely visited states and encourages further exploration of novel states. We show that in a variety of environments with local optima, our method significantly improves self-imitation learning (SIL). It avoids prematurely converging to a myopic solution and learns a near-optimal behavior to achieve a high total reward." }, { "heading": "A DETAILED DESCRIPTION OF ALGORITHMS", "text": "A.1 ALGORITHM OF TRAJECTORY BUFFER UPDATE\nIn Algorithm 2, we summarize how to process the collected episode and store the diverse trajectories in the trajectory buffer.\nAlgorithm 2 Update Trajectory Buffer\nInput: the trajectory buffer D = {(e(1), τ (1), n(1)), (e(2), τ (2), n(2)), · · · } Input: the current episode E = {(o0, e0, a0, r0), (o1, e1, a1, r1), · · · , (oT , eT , aT , rT )} Input: the threshold th for high level state embedding\n# Consider all the states in E for each step t do\n# Consider state st and partial episode τ≤t = {(o0, e0, a0, r0), · · · , (ot, et, at, rt)} if there exists (e(k), τ (k), n(k)) ∈ D where ‖e(k) − et‖ < th then\n# Compare partial episode τ≤t with stored trajectory τ (k) if τ≤t has higher total reward or reaches the same total reward with less steps then τ (k) ← τ≤t = {(o0, e0, a0, r0), (o1, e1, a1, r1), · · · , (ot, et, at, rt)} e(k) ← et end if n(k) ← n(k) + 1\nelse D ← D ∪ (et, τ≤t, 1) where τ≤t = {(o0, e0, a0, r0), (o1, e1, a1, r1), · · · , (ot, et, at, rt)}\nend if end for return D\nA.2 ALGORITHM OF SAMPLING DEMONSTRATIONS\nIn Algorithm 3, we summarize how to sample the demonstrations from the trajectory buffer for exploration or imitation. Considering the current iteration i and the total number of iterations I , the probability of sampling demonstration for imitation to learn good behavior is iI and the probability of sampling demonstration from exploration is 1− iI .\nAlgorithm 3 Sample Demonstration Trajectories\nInput: the trajectory buffer D = {e(1), τ (1), n(1)), (e(2), τ (2), n(2)), · · · } Input: current iteration i, total number of iterations I .\n# With probability iI , run the imitation mode; with probability 1− i I , run the exploration mode if random number ∼ U [0, 1] is smaller than iI then # sample the top-K trajectories reaching near-optimal score in the buffer g ← {e0, e1, · · · , e|g|} for all (ot, et, at, rt) ∈ τ best else Calculate probability distribution p← [ 1√\nn(1) , 1√ n(2) , · · · ]\np← p∑ j pj Sample (e, τ, n) ∼ Categorical(D, p) g ← {e0, e1, · · · , e|g|} for all (ot, et, at, rt) ∈ τ\nend if return g" }, { "heading": "B DETAILS OF NETWORK ARCHITECTURE AND TRAINING PROCESS", "text": "In the trajectory-conditioned policy (Figure 3), we first embed the input state et (or e g i ) with a fully-connected layer with 64 units. Next, a RNN with gated recurrent units (GRU) computes the feature ht (or h g i ) with 128 units. The attention weight αt is calculated based on the Bahdanau attention mechanism (Bahdanau et al., 2014). The concatenation of the attention readout ct, the hidden feature of agent’s current state ht, and convolutional features from the observation are used to predict π(at|e≤t, ot, g) with a linear layer. For experiments on the Apple-Key domain, Toy Montezuma’s Revenge, and Mujoco, the features from ot are not required for the policy. 
However, on Atari games such as Montezuma's Revenge, it is necessary to take the raw observation o_t as input to the policy, because the location information in e_{≤t} alone does not let the agent take the temporal context into account (e.g. avoiding moving skulls and passing laser gates). With the raw observation o_t of shape 84 × 84 × 4 as input, three convolutional layers are used to encode o_t, and then the convolutional feature is flattened.

During training, our algorithm begins with an empty buffer D. We initialize the demonstration as a list of zero vectors. With such an input demonstration, the agent performs random exploration to collect trajectories to fill the buffer D. In practice, the sampled demonstration trajectory g = {e^g_0, e^g_1, · · · , e^g_{|g|}} can be lengthy. We present a part of the demonstration as the input to the policy, similarly to translating a paragraph sentence by sentence. Specifically, we first input {e^g_0, e^g_1, · · · , e^g_m} (m ≤ |g|) into the policy. When the index u of the agent's last visited state embedding in the demonstration belongs to {m − Δt, · · · , m}, we consider that the agent has accomplished this part of the demonstration, and switch to the next part {e^g_u, e^g_{u+1}, · · · , e^g_{u+m}}. We repeat this process until the last part of the demonstration. If the last part {e^g_u, e^g_{u+1}, · · · , e^g_{|g|}} is less than m + 1 steps long, we pad the sequence with zero vectors.

A reward function f(r_t) = r_t is used on the Apple-Gold, Deep Sea and MuJoCo domains, and f(r_t) = 2 · clip(r_t, 0, 1) on the other environments. r^im = 0.1 is the reward to encourage imitation. More details about the hyperparameters and the environments can be found in Appendix D." }, { "heading": "C ADDITIONAL EXPERIMENTS", "text": "C.1 GENERALIZATION AND ROBUSTNESS IN STOCHASTIC ENVIRONMENTS

We evaluate our method on environments with different levels of stochasticity. For the Apple-Gold domain, in the environments with a random initial location of the agent (Figure 9) or with sticky actions (Figure 10), our DTSIL still outperforms the baselines and achieves a near-optimal total episode reward.

Figure 9: Learning curves on the Apple-Gold domain with random initial location of the agent in the lower left corner, comparing PPO, PPO+SIL, PPO+EXP, and DTSIL, with panels for (a) the average episode reward, (b) the best episode reward found during training, (c) the number of state embedding clusters found during training, and (d) the average imitation success ratio. The curves in dark colors are averages over 5 curves in light colors. The x-axis and y-axis correspond to the number of steps and statistics about the performance, respectively.
The average reward and average imitation success ratio are the mean values over 40 recent episodes.

Figure 10: Learning curves on the Apple-Gold domain with sticky actions, comparing the same methods with the same four panels as Figure 9 (average episode reward, best episode reward, number of state embedding clusters found, and average imitation success ratio). The curves in dark colors are averages over 5 curves in light colors; the average reward and average imitation success ratio are the mean values over 40 recent episodes.

In a more challenging scenario where the location of the objects can be random, previous works using expert demonstrations (e.g. Aytar et al. (2018)) would also struggle. Our method can be easily extended to handle this difficulty by distilling the behavior of the good trajectories we collected to train an unconditional policy robust to the stochasticity. For example, on the Apple-Gold domain with pixel observations (Figure 11), the location of the gold can be random in the upper middle part of the maze. We first explore for a sufficiently large number of timesteps (e.g. 10M timesteps) with the trajectory-conditioned policy to collect good trajectories and then train an unconditional policy by distilling the behavior, as shown in Figure 11, to always collect the gold. We can see that DTSIL with unconditional policy training is able to generalize in the stochastic environment.

Figure 11: Learning curves on the Apple-Gold domain with a stochastic location of the gold in the upper middle part of the maze, comparing the same methods with the same four panels as Figure 9. The curves in dark colors are averages over 5 curves in light colors; the average reward and average imitation success ratio are the mean values over 40 recent episodes.

C.2 EXPERIMENTS ON TOY MONTEZUMA'S REVENGE

We evaluate our method on a more challenging domain, Toy Montezuma's Revenge (Roderick et al., 2018), which requires a more sophisticated strategy to explore the environment. As shown in Figure 12, there are 24 rooms similar to the layout of the first level of Atari Montezuma's Revenge, with a discrete grid for each room. The agent should navigate the labyrinth to locate the keys, unlock the doors, and reach the goal (the treasure room). The observation is represented by the agent's location and cumulative episode reward.
The state representation e_t = (room_t, x_t, y_t, ∑_{i=1}^{t} r_i) is the same as the observation.

The learning curve of the averaged episode reward in Figure 13 shows that PPO, PPO+SIL, and PPO+EXP could not learn a policy to reach the goal. The PPO+EXP agent occasionally finds a trajectory with the total reward of 11,200 reaching the treasure room, but fails to exploit this experience. On the other hand, our method learns a good behavior of not only reaching the goal room, but also collecting all of the keys to achieve an optimal total reward of 11,600.

C.3 EXPERIMENTS ON DEEP SEA

As introduced in Osband et al. (2019), the deep sea problem is implemented as an N × N grid with a one-hot encoding for the state. The agent begins each episode in the top left corner of the grid and descends one row per timestep. Each episode terminates after N steps, when the agent reaches the bottom row. In each state there is a random but fixed mapping between the actions A = {0, 1} and the transitions 'left' and 'right'. At each timestep there is a small cost r = −0.01/N of moving right, and r = 0 for moving left. However, should the agent transition right at every timestep of the episode, it will be rewarded with an additional reward of +1. This presents a particularly challenging exploration problem for two reasons. First, following the 'gradient' of small intermediate rewards leads the agent away from the optimal policy. Second, a policy that explores with actions uniformly at random has probability 2^{−N} of reaching the rewarding state in any episode.

We compare DTSIL and the baselines on deep sea environments with a 10 × 10 grid and a 30 × 30 grid. The state embedding we use here is exactly the observation. The results are shown in Figure 14. On the first environment, it is easy for all of the methods to converge to the optimal behavior. On the second one, it is much more challenging to find the optimal trajectory maximizing the total reward. Therefore, PPO and PPO+SIL fail on this environment due to the hard exploration. PPO+EXP could not always explore to find the good behavior and exploit it efficiently within 12M timesteps. DTSIL successfully discovers the right way and imitates it to converge to the optimal behavior." }, { "heading": "D HYPERPARAMETERS", "text": "The hyper-parameters for our proposed method used in each experiment are listed in Table 2. On the MuJoCo environments, the RL loss alone worked well, so we did not include the SL loss for behavior cloning. On the other environments, when the action prediction in behavior cloning is poor, we set a large J for quickly learning to imitate demonstrations. When the action prediction is accurate enough, we de-emphasize behavior cloning to enhance exploration around the demonstration. Δt influences how flexibly the demonstration should be followed. In our experiments, we have Δt < m due to the limit of the length m of the input demonstration part. When the demonstration is longer and harder to follow, we want a larger Δt to provide the imitation reward more generously (a more detailed ablation study of the hyper-parameter Δt is in Appendix J). On Atari games, there are many more different trajectories stored in the buffer, so we sample the top-100 trajectories as demonstrations for imitation of the best experiences. On the other environments, the total number of trajectories is much smaller, so we only take the top-10 or top-1." }, { "heading": "E ENVIRONMENT SETTING", "text": "For each experiment we conducted, we list the detailed environment setting in Table 3.
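Returning to the Deep Sea problem described in Appendix C.3 above, a minimal Python implementation sketch is given here; it is our own illustration, and details such as the terminal-step observation handling are simplifying assumptions.

```python
import numpy as np

class DeepSea:
    """Sketch of the Deep Sea problem (Osband et al., 2019) as described above."""
    def __init__(self, n, seed=0):
        self.n = n
        rng = np.random.RandomState(seed)
        # In each cell, a random but fixed mapping between actions {0, 1}
        # and the transitions 'left'/'right'.
        self.flip = rng.randint(2, size=(n, n))
        self.reset()

    def reset(self):
        self.row, self.col, self.rights = 0, 0, 0
        return self._obs()

    def _obs(self):
        one_hot = np.zeros((self.n, self.n), dtype=np.float32)
        one_hot[self.row, self.col] = 1.0   # one-hot state encoding
        return one_hot.ravel()

    def step(self, action):
        go_right = action != self.flip[self.row, self.col]
        reward = -0.01 / self.n if go_right else 0.0   # small cost of moving right
        self.rights += int(go_right)
        self.col = min(self.col + 1, self.n - 1) if go_right else max(self.col - 1, 0)
        self.row += 1                                  # descend one row per step
        done = self.row == self.n
        if done and self.rights == self.n:             # moved right at every step
            reward += 1.0
        obs = np.zeros(self.n * self.n, dtype=np.float32) if done else self._obs()
        return obs, reward, done
```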
There is stochasticity in the environments of the Apple-Gold domain, Toy Montezuma's Revenge, Atari, and MuJoCo. On Atari games, we use the setting of the random initial delay introduced in Mnih et al. (2015). On the MuJoCo domain, the agent's initial location is randomly sampled from a Gaussian distribution, as in the standard MuJoCo tasks in OpenAI Gym (Brockman et al., 2016)." }, { "heading": "F COMPARISON WITH LEARNING DIVERSE POLICIES BY SVPG", "text": "While the code for the Stein variational policy gradient (SVPG) in Gangwani et al. (2018) has not yet been released, we replicate the method in Gangwani et al. (2018) to learn diverse policies. Their experiments focus on continuous control tasks with relatively simple observation spaces and limited local-optimum branches in the state space. We learn 8 diverse policies in parallel following their method on our Apple-Gold domain with a discrete action space. Figure 15 shows a visualization of the learning progress: the 8 policies learn to cover different regions of the environment. The method explores better than PPO+SIL, but the exploration of each individual agent is not strong enough to find the optimal path to achieve the highest episode reward." }, { "heading": "G MAP OF ATARI MONTEZUMA'S REVENGE AT THE FIRST LEVEL", "text": "On Montezuma's Revenge, there are multiple levels and each level consists of 24 rooms. A map of Atari Montezuma's Revenge at the first level is shown in Figure 16. It is challenging to bring two keys to open the two doors in room 17 behind the treasure in room 15, where the agent can pass to the next level." }, { "heading": "H STUDY OF EXPLORATION EFFICIENCY", "text": "To evaluate the efficiency of exploration, we compare our method with the 'exploration phase' in the Go-Explore algorithm (Ecoffet et al., 2019). The idea behind Go-Explore is to reset the agent to any interesting state sampled from the buffer of state embeddings, and then explore further using random actions. To study the exploration efficiency of our method, we modify the Go-Explore code such that it cannot reset to arbitrary states in the environment. Similarly to Ecoffet et al. (2019), we use the state representation (level_t, room_t, x_t, y_t, k_t), where k_t is the number of keys the agent holds and (x_t, y_t) is in a 9 × 9 grid division of the frame, and the sampling weight 1/√n^(i) to sample goal states from the buffer (it is worth noting that the state representation and goal-state sampling function recommended in the Go-Explore paper are more complicated than this setting).

In the Go-Explore method without using the direct 'reset' function and with a perfect goal-conditioned policy to visit any state sampled from the buffer, the agent could precisely advance to the goal state by following the stored trajectory. The total steps taken in the environment are counted by summing the number of steps taken to follow the stored trajectories and the number of steps taken to explore.

In Figure 18, we show the average number of rooms found and the number of different state representations found during training. Even if we assume that there is a perfect goal-conditioned policy in Go-Explore to guide the agent to follow the stored trajectory exactly and visit the goal state, the learning curves demonstrate that our method is more efficient at exploring diverse state representations and consequently visits more rooms.
This is because our method uses the count-based exploration bonus to encourage exploration around and beyond the stored trajectories, and the imitation reward allows the agent to follow the demonstrations in a soft order.

In addition, we notice that, compared with the state embedding (room_t, x_t, y_t, ∑_{i=1}^{t} r_i), the state embedding (level_t, room_t, x_t, y_t, k_t) makes the size of the embedding space smaller, so that the exploration can be more efficient. Such a state representation conflates similar states while not conflating states that are meaningfully different. Therefore, our method can reach a higher average score around 29,817. Here, the baseline PPO+EXP is essentially the CoEX method introduced in Choi et al. (2018) with the state embedding extracted from RAM; therefore DTSIL performs better than PPO+CoEX+RAM regardless of whether the state embedding is (room_t, x_t, y_t, ∑_{i=1}^{t} r_i) or (level_t, room_t, x_t, y_t, k_t)." }, { "heading": "I STUDY OF ADVANTAGE IN EXPLOITATION", "text": "We compared our method DTSIL with PPO+EXP in the main text. PPO+EXP encourages exploration of novel states by providing auxiliary rewards to the agent, while our method rewards the agent when it successfully follows demonstrations which lead to novel states. In order to understand more about the difference between these two mechanisms, we propose a variant of our method denoted as 'DTSIL-combine'. In this variant, we do not separate the exploration mode and the imitation mode. Instead, we always sample the top-K best trajectories with the highest total reward ∑_{t=0}^{|τ|} (f(r_t) + λ/√N(e_t)). The PPO+EXP baseline directly optimizes the objective ∑_{t=0}^{|τ|} (f(r_t) + λ/√N(e_t)) via the reinforcement learning algorithm, while this variant indirectly optimizes the same objective by imitating the best trajectories with the highest value of ∑_{t=0}^{|τ|} (f(r_t) + λ/√N(e_t)). We investigate the different performance of these two methods on the Apple-Gold domain.

With λ = 10, which is the best hyper-parameter we found after searching over λ = 5, 10, 20, 50 for PPO+EXP on the Apple-Gold domain, we notice in Figure 19c that both the PPO+EXP agent and the DTSIL-combine agent have 3 out of 5 runs finding the optimal trajectory with episode reward 8.5, and the other 2 runs get stuck at the sub-optimal behavior. However, in Figure 19b, DTSIL-combine is better at optimizing the objective ∑_{t=0}^{|τ|} (f(r_t) + 10/√N(e_t)) averaged over the agent's recent episodes, and therefore it achieves a higher environment reward as training goes on. As shown in Figure 19a, the DTSIL-combine agent reproduces the good trajectories of collecting the gold in 3 out of 5 runs, while the PPO+EXP agent is trapped in the behavior of collecting the apples. The main reason might be that the DTSIL-combine agent never forgets the good experience of collecting the gold, and always selects such good trajectories as demonstrations to guide the agent, while PPO+EXP might forget the good trajectories occasionally found or fail to exploit them before the exploration bonus vanishes. The importance of exploiting the good experience to help the agent reproduce high-reward trajectories is also discussed in Oh et al. (2018).

J ABLATION STUDY OF HYPER-PARAMETER Δt

We study the effect of the hyper-parameter Δt in the various environments. At each step, we provide m = 10 steps of state embeddings from the demonstration trajectory as input to the trajectory-conditioned policy (considering the computational burden, we selected the value m = 10 for all experiments).
Then we evaluate whether the agent has visited any of the next Δt state embeddings after the last visited state embedding in the demonstration. It is worth noting that the last visited state embedding must be included in the part of the demonstration given as policy input. Thus, we should only compare the agent's current state embedding with the state embeddings in the demonstration segment provided to the policy when computing the imitation reward. Therefore, the proper value of Δt should be less than m = 10, and we consider Δt = 2, 4, 8.

On simple environments such as Apple-Gold and Deep Sea, we can see in Figures 20a and 20b that the different values of Δt do not influence the performance much.

As shown in Figures 20c and 20d, on more challenging environments where the demonstration trajectory is longer and it is harder to learn to imitate the demonstration, it is better to set a larger value of Δt, so that the imitation reward is provided more generously and imitation learning is encouraged. In general, allowing the agent some flexibility of imitation by setting Δt close to m works well.

In summary, Δt = 8 is a proper value for all of our primary experiment environments." }, { "heading": "K STUDY OF THE STOCHASTICITY IN THE ENVIRONMENT", "text": "As listed in Table 3, we consider stochastic environments for all the primary experiments, including the Apple-Gold domain, Montezuma's Revenge, Pitfall, and the MuJoCo maze. We set the initial state of the agent to be random. On the Apple-Gold domain, the agent takes 3 random steps in the left bottom part of the maze before the episode starts. On Montezuma's Revenge, the mechanism of random initial no-ops is one of the standard ways to introduce stochasticity in the environment (Machado et al., 2017), as in previous work (Mnih et al., 2015).

To show the difficulty in policy learning introduced by the stochastic environments, we tried to memorize and repeat the action sequence in the demonstration trajectory and check whether the agent could successfully visit the state of interest by following the demonstration. In Figures 21c and 22c, it is clear that the success ratio in imitation is much lower than for DTSIL, because the agent cannot successfully follow the demonstration by just repeating the action sequence step by step. Especially on Montezuma's Revenge, with random initial no-ops, the state of the enemy and the electricity beam when the agent starts moving is randomized. Thus the agent could not successfully avoid death by just repeating the stored action sequence.

Obviously, when the environment is completely deterministic, repeating the action sequence would perfectly guide the agent to the state of interest. However, with the stochasticity of the random initial state, just memorizing the action sequence is not sufficient to make the agent revisit novel regions. Thus, the agent could not revisit the novel regions as efficiently as DTSIL to discover better trajectories and converge to a better total episode reward, as shown in Figures 21a, 21b, 22a, and 22b." }, { "heading": "L RANDOM EXPLORATION WITH EPSILON-GREEDY POLICY", "text": "In this section, we investigate an additional baseline method, random exploration with an epsilon-greedy policy, which is a traditional exploration method in reinforcement learning. We consider combining the epsilon-greedy policy with the PPO or DQN framework and run experiments on Apple-Gold, Deep Sea, and Montezuma's Revenge.
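As an illustration of this baseline, a minimal sketch of linearly annealed epsilon-greedy action selection follows; the schedule endpoints are placeholder values, not the exact settings searched in Figures 23-25.

```python
import random

def linear_epsilon(step, eps_start=1.0, eps_final=0.01, anneal_steps=1_000_000):
    """Linearly annealed epsilon; the endpoint values here are placeholders."""
    frac = min(step / anneal_steps, 1.0)
    return eps_start + frac * (eps_final - eps_start)

def epsilon_greedy_action(q_values, step, num_actions):
    if random.random() < linear_epsilon(step):
        return random.randrange(num_actions)                       # random exploratory action
    return max(range(num_actions), key=lambda a: q_values[a])      # greedy action
```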
Our implementation is based on OpenAI Baselines (Dhariwal et al., 2017). As shown in Figures 23, 24, and 25, we searched over different values of the hyper-parameters for the epsilon schedule, but the performance is not better than that of DTSIL. In particular, random exploration performs poorly on Montezuma's Revenge: the average score is less than 100, which is consistent with the experimental results from previous works (Mnih et al., 2015; Schulman et al., 2017)." } ]
2019
null
SP:6b29ca414857bbf1cb0dbf01e67520b37155f3a4
[ "This paper works on neural architecture search for object detection. Two search directions are proposed: 1) searching the number of conv blocks at each resolution (or \"stage\"). 2) searching the dilations for each conv block. A greedy neighbor-based search algorithm is adopted. The results show healthy improvements among different network architectures. And the searched architecture also performs well on other tasks or datasets. ", "The paper attempts to apply neural architecture search (NAS) to re-arrange, or re-allocate the network backbone blocks and the convolution filters for object detection. The search space is two-fold: 1) the network is allowed to search over allocation of different number of blocks in the backbone (e.g. ResNet, MobileNet); 2) the network is allowed to choose the dilation of each of the block. A one-shot NAS method is adopted for efficient search. After search, the model is shown to have 1) better AP results; and 2) more balanced effective receptive field (ERF). " ]
The allocation of computation resources in the backbone is a crucial issue in object detection. However, the classification allocation pattern is usually adopted directly for object detectors, which is proved to be sub-optimal. In order to reallocate the engaged computation resources in a more efficient way, we present CR-NAS (Computation Reallocation Neural Architecture Search) that can learn computation reallocation strategies across different feature resolutions and spatial positions directly on the target detection dataset. A two-level reallocation space is proposed for both stage and spatial reallocation. A novel hierarchical search procedure is adopted to cope with the complex search space. We apply CR-NAS to multiple backbones and achieve consistent improvements. Our CR-ResNet50 and CR-MobileNetV2 outperform the baselines by 1.9% and 1.7% COCO AP respectively, without any additional computation budget. The models discovered by CR-NAS can be equipped with other powerful detection necks/heads and be easily transferred to other datasets, e.g. PASCAL VOC, and other vision tasks, e.g. instance segmentation. Our CR-NAS can be used as a plugin to improve the performance of various networks, which is in high demand.
[ { "affiliations": [], "name": "Feng Liang" }, { "affiliations": [], "name": "Chen" }, { "affiliations": [], "name": "Ronghao Guo" }, { "affiliations": [], "name": "Ming Sun" }, { "affiliations": [], "name": "Wei Wu" }, { "affiliations": [], "name": "Junjie Yan" }, { "affiliations": [], "name": "Wanli Ouyang" } ]
[ { "authors": [ "Gabriel Bender", "Pieter-Jan Kindermans", "Barret Zoph", "Vijay Vasudevan", "Quoc Le" ], "title": "Understanding and simplifying one-shot architecture search", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Andrew Brock", "Theodore Lim", "James M Ritchie", "Nick Weston" ], "title": "Smash: one-shot model architecture search through hypernetworks", "venue": "arXiv preprint arXiv:1708.05344,", "year": 2017 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "Proxylessnas: Direct neural architecture search on target task and hardware", "venue": "arXiv preprint arXiv:1812.00332,", "year": 2018 }, { "authors": [ "Zhaowei Cai", "Nuno Vasconcelos" ], "title": "Cascade r-cnn: Delving into high quality object detection", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Yukang Chen", "Tong Yang", "Xiangyu Zhang", "Gaofeng Meng", "Chunhong Pan", "Jian Sun" ], "title": "Detnas: Neural architecture search on object detection", "venue": null, "year": 1903 }, { "authors": [ "Jifeng Dai", "Haozhi Qi", "Yuwen Xiong", "Yi Li", "Guodong Zhang", "Han Hu", "Yichen Wei" ], "title": "Deformable convolutional networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Mark Everingham", "SM Ali Eslami", "Luc Van Gool", "Christopher KI Williams", "John Winn", "Andrew Zisserman" ], "title": "The pascal visual object classes challenge: A retrospective", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Golnaz Ghiasi", "Tsung-Yi Lin", "Quoc V Le" ], "title": "Nas-fpn: Learning scalable feature pyramid architecture for object detection", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ross Girshick" ], "title": "Fast r-cnn", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Zichao Guo", "Xiangyu Zhang", "Haoyuan Mu", "Wen Heng", "Zechun Liu", "Yichen Wei", "Jian Sun" ], "title": "Single path one-shot neural architecture search with uniform sampling", "venue": null, "year": 1904 }, { "authors": [ "Ryuhei Hamaguchi", "Aito Fujita", "Keisuke Nemoto", "Tomoyuki Imaizumi", "Shuhei Hikosaka" ], "title": "Effective use of dilated convolutions for segmenting small object instances in remote sensing imagery", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Georgia Gkioxari", "Piotr Dollár", "Ross Girshick" ], "title": "Mask r-cnn", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Yanghao Li", "Yuntao Chen", "Naiyan Wang", "Zhaoxiang Zhang" ], "title": "Scale-aware trident networks for object detection", "venue": "arXiv preprint arXiv:1901.01892,", "year": 2019 }, { "authors": [ "Zeming Li", "Chao Peng", "Gang Yu", "Xiangyu Zhang", "Yangdong Deng", "Jian Sun" ], "title": "Detnet: A backbone network for object detection", "venue": "arXiv preprint arXiv:1804.06215,", "year": 2018 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James 
Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Tsung-Yi Lin", "Piotr Dollár", "Ross Girshick", "Kaiming He", "Bharath Hariharan", "Serge Belongie" ], "title": "Feature pyramid networks for object detection", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "arXiv preprint arXiv:1608.03983,", "year": 2016 }, { "authors": [ "Xin Lu", "Buyu Li", "Yuxin Yue", "Quanquan Li", "Junjie Yan" ], "title": "Grid r-cnn", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Wenjie Luo", "Yujia Li", "Raquel Urtasun", "Richard Zemel" ], "title": "Understanding the effective receptive field in deep convolutional neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Yang Gao", "Chunhua Shen" ], "title": "Nas-fcos: Fast neural architecture search for object detection", "venue": null, "year": 1906 }, { "authors": [ "Chao Peng", "Tete Xiao", "Zeming Li", "Yuning Jiang", "Xiangyu Zhang", "Kai Jia", "Gang Yu", "Jian Sun" ], "title": "Megdet: A large mini-batch object detector", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Junran Peng", "Ming Sun", "Zhaoxiang Zhang", "Tieniu Tan", "Junjie Yan" ], "title": "Efficient neural architecture transformation searchin channel-level for object detection", "venue": null, "year": 1909 }, { "authors": [ "Hieu Pham", "Melody Y Guan", "Barret Zoph", "Quoc V Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "arXiv preprint arXiv:1802.03268,", "year": 2018 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks. 
In Advances in neural information processing", "venue": null, "year": 2015 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Panqu Wang", "Pengfei Chen", "Ye Yuan", "Ding Liu", "Zehua Huang", "Xiaodi Hou", "Garrison Cottrell" ], "title": "Understanding convolution for semantic segmentation", "venue": "IEEE winter conference on applications of computer vision (WACV),", "year": 2018 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "arXiv preprint arXiv:1611.01578,", "year": 2016 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Object detection is one of the fundamental tasks in computer vision. The backbone feature extractor is usually taken directly from classification literature (Girshick, 2015; Ren et al., 2015; Lin et al., 2017a; Lu et al., 2019). However, comparing with classification, object detection aims to know not only what but also where the object is. Directly taking the backbone of classification network for object detectors is sub-optimal, which has been observed in Li et al. (2018). To address this issue, there are many approaches either manually or automatically modify the backbone network. Chen et al. (2019) proposes a neural architecture search (NAS) framework for detection backbone to avoid expert efforts and design trails. However, previous works rely on the prior knowledge for classification task, either inheriting the backbone for classification, or designing search space similar to NAS on classification. This raises a natural question: How to design an effective backbone dedicated to detection tasks?\nTo answer this question, we first draw a link between the Effective Receptive Field (ERF) and the computation allocation of backbone. The ERF is only small Gaussian-like factor of theoretical receptive field (TRF), but it dominates the output (Luo et al., 2016). The ERF of image classification task can be easily fulfilled, e.g. the input size is 224×224 for the ImageNet data, while the ERF of object detection task need more capacities to handle scale variance across the instances, e.g. the input size is 800×1333 and the sizes of objects vary from 32 to 800 for the COCO dataset. Lin et al. (2017a) allocates objects of different scales into different feature resolutions to capture the appropriate ERF in each stage. Here we conduct an experiment to study the differences between the ERF of several FPN features. As shown in Figure 1, we notice the allocation of computation across different resolutions has a great impact on the ERF. Furthermore, appropriate computation allocation across spacial position (Dai et al., 2017) boost the performance of detector by affecting the ERF.\nBased on the above observation, in this paper, we aim to automatically design the computation allocation of backbone for object detectors. Different from existing detection NAS works (Ghiasi et al., 2019; Ning Wang & Shen, 2019) which achieve accuracy improvement by introducing higher computation complexity, we reallocate the engaged computation cost in a more efficient way. We propose computation reallocation NAS (CR-NAS) to search the allocation strategy directly on the detection task. A two-level reallocation space is conducted to reallocate the computation across different resolution and spatial position. In stage level, we search for the best strategy to distribute the computation among different resolution. In operation level, we reallocate the computation by introducing a powerful search space designed specially for object detection. The details about search space can be found in Sec. 3.2. We propose a hierarchical search algorithm to cope with the complex search space. Typically in stage reallocation, we exploit a reusable search space to reduce stage-level searching cost and adapt different computational requirements.\nExtensive experiments show the effectiveness of our approach. Our CR-NAS offers improvements for both fast mobile model and accurate model, such as ResNet (He et al., 2016), MobileNetV2 (Sandler et al., 2018), ResNeXt (Xie et al., 2017). 
On the COCO dataset, our CR-ResNet50 and CR-MobileNetV2 can achieve 38.3% and 33.9% AP, outperforming the baseline by 1.9% and 1.7% respectively without any additional computation budget. Furthermore, we transfer our CR-ResNet and CR-MobileNetV2 into the another ERF-sensitive task, instance segmentation, by using the Mask RCNN (He et al., 2017) framework. Our CR-ResNet50 and CR-MobileNetV2 yields 1.3% and 1.2% COCO segmentation AP improvement over baseline.\nTo summarize, the contributions of our paper are three-fold:\n• We propose computation reallocation NAS(CR-NAS) to reallocate engaged computation resources. To our knowledge, we are the first to dig inside the computation allocation across different resolution.\n• We develop a two-level reallocation space and hierarchical search paradigm to cope with the complex search space. Typically in stage reallocation, we exploit a reusable model to reduce stage-level searching cost and adapt different computational requirements.\n• Our CR-NAS offers significant improvements for various types of networks. The discovered models show great transferablity over other detection neck/head, e.g. NAS-FPN (Cai & Vasconcelos, 2018), other dataset, e.g. PASCAL VOC (Everingham et al., 2015) and other vision tasks, e.g. instance segmentation (He et al., 2017)." }, { "heading": "2 RELATED WORK", "text": "Neural Architecture Search(NAS) Neural architecture search focus on automating the network architecture design which requires great expert knowledge and tremendous trails. Early NAS approaches (Zoph & Le, 2016; Zoph et al., 2018) are computational expensive due to the evaluating of each candidate. Recently, weight sharing strategy (Pham et al., 2018; Liu et al., 2018; Cai et al., 2018; Guo et al., 2019) is proposed to reduce searing cost. One-shot NAS method (Brock et al., 2017; Bender et al., 2018; Guo et al., 2019) build a directed acyclic graph G (a.k.a. supernet) to subsume all architectures in the search space and decouple the weights training with architectures searching. NAS works only search for operation in the certain layer. our work is different from them by searching for the computation allocation across different resolution. Computation allocation across feature resolutions is an obvious issue that has not been studied by NAS. We carefully design a search space that facilitates the use of existing search for finding good solution.\nNAS on object detection. There are some work use NAS methods on object detection task (Chen et al., 2019; Ning Wang & Shen, 2019; Ghiasi et al., 2019). Ghiasi et al. (2019) search for scalable feature pyramid architectures and Ning Wang & Shen (2019) search for feature pyramid network and the prediction heads together by fixing the architecture of backbone CNN. These two work both introduce additional computation budget. The search space of Chen et al. (2019) is directly inherited from the classification task which is suboptimal for object detection task. Peng et al. (2019) search for dilated rate on channel level in the CNN backbone. These two approaches assume the fixed number of blocks in each resolution, while we search the number of blocks in each stage that is important for object detection and complementary to these approaches." }, { "heading": "3 METHOD", "text": "" }, { "heading": "3.1 BASIC SETTINGS", "text": "Our search method is based on the Faster RCNN (Ren et al., 2015) with FPN (Lin et al., 2017a) for its excellent performance. 
We only reallocate the computation within the backbone, while fixing the other components for a fair comparison.

For more efficient search, we adopt the idea of one-shot NAS methods (Brock et al., 2017; Bender et al., 2018; Guo et al., 2019). In one-shot NAS, a directed acyclic graph G (a.k.a. supernet) is built to subsume all architectures in the search space and is trained only once. Each architecture g is a subgraph of G and can inherit weights from the trained supernet. For a specific subgraph g ∈ G, its corresponding network can be denoted as N (g, w) with network weights w." }, { "heading": "3.2 TWO-LEVEL ARCHITECTURE SEARCH SPACE", "text": "We propose Computation Reallocation NAS (CR-NAS) to distribute the computation resources in two dimensions: stage allocation across different resolutions, and convolution allocation across spatial positions." }, { "heading": "3.2.1 STAGE REALLOCATION SPACE", "text": "The backbone aims to generate intermediate-level features C with increasing downsampling rates 4×, 8×, 16×, and 32×, which can be regarded as 4 stages. The blocks in the same stage share the same spatial resolution. Note that the FLOPs of a single block in two adjacent spatial resolutions remain the same, because a downsampling/pooling layer doubles the number of channels. So given the total number of blocks N of a backbone, we can reallocate the number of blocks for each stage while keeping the total FLOPs the same. Figure 2 shows our stage reallocation space. In this search space, each stage contains several branches, and each branch has a certain number of blocks. The numbers of blocks in the branches differ, corresponding to different computational budgets for the stage. For example, there are 5 branches for stage 1 in Figure 2, and the numbers of blocks for these 5 branches are, respectively, 1, 2, 3, 4, and 5. We consider the whole network as a supernet T = {T1, T2, T3, T4}, where Ti at the ith stage has Ki branches, i.e. Ti = {t_i^k | k = 1...Ki}. An allocation strategy can then be represented as τ = [τ1, τ2, τ3, τ4], where τi denotes the number of blocks in the ith stage. All blocks in the same stage have the same structure, and ∑_{i=1}^{4} τi = N for a network with N blocks. For example, the original ResNet101 has τ = [3, 4, 23, 3] and N = 33 residual blocks. We impose the constraint that each stage has at least one convolutional block. The best allocation strategy for ResNet101 therefore lies among the $\binom{32}{3} = 4960$ possible choices. Since validating a single detection architecture requires hundreds of GPU-hours, it is not realistic to find the optimal architecture by human trials.

On the other hand, we would like to learn stage reallocation strategies for different computation budgets simultaneously. Different applications require CNNs with different numbers of layers to meet different latency requirements; this is why we have ResNet18, ResNet50, ResNet101, etc. We build a search space to cover all the candidate instances in a certain series, e.g. the ResNet series. After considering the trade-off between granularity and range, we set the numbers of blocks for T1 and T2 as {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, for T3 as {2, 3, 5, 6, 9, 11, 14, 17, 20, 23}, and for T4 as {2, 3, 4, 6, 7, 9, 11, 13, 15, 17} for the ResNet series. The stage reallocation spaces of MobileNetV2 (Sandler et al., 2018) and ResNeXt (Xie et al., 2017) can be found in Appendix A.2."
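To make the size of this space concrete, the constrained enumeration over τ can be written in a few lines of Python. This is a sketch under our reading of Sec. 3.2.1, not the authors' code; evaluate_on_supernet is a hypothetical callback standing in for validating the sub-network N(τ, w) with inherited supernet weights (cf. Sec. 3.3.1 below):

from itertools import product
from math import comb

# Enumerate stage allocations tau = [tau_1, ..., tau_4] for a ResNet101-sized
# backbone: N = 33 residual blocks, each of the 4 stages gets at least 1 block.
N = 33
strategies = [tau for tau in product(range(1, N), repeat=4) if sum(tau) == N]
assert len(strategies) == comb(N - 1, 3)  # C(32, 3) = 4960 candidate strategies

def search_best_allocation(strategies, evaluate_on_supernet):
    # Pick the tau maximizing detection AP; evaluate_on_supernet(tau) is a
    # hypothetical scorer running the sub-network N(tau, w) on the val split.
    return max(strategies, key=evaluate_on_supernet)

Brute-force enumeration is feasible here precisely because every candidate inherits supernet weights instead of being trained from scratch.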
}, { "heading": "3.2.2 CONVOLUTION REALLOCATION SPACE", "text": "To reallocate the computation across spatial position, we utilize dilated convolution Li et al. (2019), Li et al. (2018). Dilated convolution effects the ERF by performing convolution at sparsely sampled locations. Another good feature of dilated convolution is that dilation introduce no extra parameter and computation. We define a choice block to be a basic unit which consists of multiple dilations and search for the best computation allocation. For ResNet Bottleneck, we modify the center 3 × 3 convolution. For ResNet BasicBlock, we only modify the second 3 × 3 convolution to reduce search space and searching time. We have three candidates in our operation set O: {dilated convolution 3 × 3 with dilation rate i|i = 1, 2, 3}. Across the entire ResNet50 search space, there are therefore 316 ≈ 4× 107 possible architectures." }, { "heading": "3.3 HIERARCHICAL SEARCH FOR OBJECT DETECTION", "text": "We propose a hierarchical search procedure to cope with the complex reallocation space. Firstly, the stage space is explored to find the best computation allocation for different resolution. Then, the operation space is explored to further improve the architecture with better spatial allocation." }, { "heading": "3.3.1 STAGE REALLOCATION SEARCH", "text": "To reduce the side effect of weights coupling, we adopt the uniform sampling in supernet training(a.k.a single-path one-shot) (Guo et al., 2019). After the supernet training, we can validate the allocation strategies τ ∈ T directly on the task detection task. Model accuracy(COCO AP) is defined as APval(N (τ, w)). We set the block number constraint N . We can find the best allocation strategy in the following equation:\nτ∗ = argmax∑4 i=1 τi=N APval(N (τ, w)). (1)" }, { "heading": "3.3.2 BLOCK OPERATION SEARCH", "text": "Algorithm 1: Greedy operation search algorithm Input: Number of blocks B; Possible operations set of each blocks O = {Oi | i = 1, 2, ..., B}; Supernet with trained weights N (O,W ∗); Dataset for validation Dval; Evaluation metric APval;. Output: Best architecture o∗ Initialize top K partial architecture p = Ø for i = 1, 2, ..., B do\npextend = p×Oi . × denotes Cartesian product result = {(arch,AP ) | arch ∈ pextend, AP = evaluate(arch)} p = choose topK(result)\nend Output: Best architecture o∗ = choose top1(p).\nBy introducing the operation allocation space as in Sec. 3.2.2, we can reallocate the computation across spatial position. Same as stage reallocation search, we train an operation supernet adopting random sampling in each choice block (Guo et al., 2019). For architecture search process, previous one-shot works use random search (Brock et al., 2017; Bender et al., 2018) or evolutionary search (Guo et al., 2019). In our approach, We propose a greedy algorithm to make sequential decisions to obtain the final result. We decode network architecture o as a sequential of choices [o1, o2, ..., oB ]. In each choice step, the top K partial architectures are maintained to shrink the search space. We evaluate each candidate operation from the first choice block to the last. The greedy operation search algorithm is shown in Algorithm 1.\nThe hyper-parameter K is set equal to 3 in our experiment. We first extend the partial architecture in the first block choice which contains three partial architectures in pextend. Then we expand the top 3 partial architectures into the whole length B, which means that there are 3 × 3 = 9 partial architectures in other block choice. 
For a specific partial architecture arch, we sample the operations of the unselected blocks uniformly for c architectures, where c denotes the number of mini-batches in Dval. We validate each architecture on a mini-batch and combine the results to obtain evaluate(arch). We finally choose the best architecture to obtain o*." }, { "heading": "4 EXPERIMENTS AND RESULTS", "text": "" }, { "heading": "4.1 DATASET AND IMPLEMENTATION DETAILS", "text": "Dataset We evaluate our method on the challenging MS COCO benchmark (Lin et al., 2014). We split the 135K training images trainval135 into 130K images archtrain and 5K images archval. First, we train the supernet using archtrain and evaluate the architectures using archval. After the architecture is obtained, we follow other standard detectors (Ren et al., 2015; Lin et al., 2017a) in using ImageNet (Russakovsky et al., 2015) for pre-training the weights of this architecture. The final model is fine-tuned on the whole COCO trainval135 and validated on COCO minival. Another detection dataset, VOC (Everingham et al., 2015), is also used. We use VOC trainval2007+trainval2012 as our training dataset and VOC test2007 as our validation dataset.

Implementation details The supernet training details can be found in Appendix A.1. For the training of our searched models, the input images are resized to have a short side of 800 pixels or a long side of 1333 pixels. We use stochastic gradient descent (SGD) as the optimizer with 0.9 momentum and 0.0001 weight decay. For a fair comparison, all our models are trained for 13 epochs, known as the 1× schedule (Girshick et al., 2018). We use multi-GPU training over 8 1080TI GPUs with a total batch size of 16. The initial learning rate is 0.00125 per image and is divided by 10 at 8 and 11 epochs. Warm-up and synchronized BatchNorm (SyncBN) (Peng et al., 2018) are adopted for both the baselines and our searched models." }, { "heading": "4.2 MAIN RESULTS", "text": "" }, { "heading": "4.2.1 COMPUTATION REALLOCATION PERFORMANCE", "text": "We denote an architecture using our computation reallocation by the prefix 'CR-', e.g. CR-ResNet50. Our final architectures have almost the same FLOPs as the original networks (the negligible difference in FLOPs comes from the BatchNorm and activation layers). As shown in Table 1, our CR-ResNet50 and CR-ResNet101 outperform the baselines by 1.9% and 1.6% respectively. It is worth mentioning that many milestone backbone improvements also bring only around a 1.5% gain. For example, the gain is 1.5% from ResNet50 to ResNeXt50-32x4d, as indicated in Table 4. In addition, we run the baselines and searched models under the longer 2× setting (results shown in Appendix A.4). It can be concluded that the improvement from our approach is consistent.

Our CR-ResNet50 and CR-ResNet101 are especially effective for large objects (3.5% and 4.8% improvement in APl). To understand these improvements, we depict the architecture sketches in Figure 4. At the stage level, our Stage CR-ResNet50 reallocates more capacity to the deep stages. This reveals that the budget in the shallow stages is redundant while the resources in the deep stages are limited. This pattern is consistent with the ERF in Figure 1. At the operation level, dilated convolutions with large rates tend to appear in the deep stages. We explain this as follows: the shallow stages need denser sampling to gather exact information, while the deep stages aim to recognize large objects by sparser sampling.
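The role of dilation in these operation choices is easy to verify in isolation: a dilated 3 × 3 convolution changes only where the kernel samples, not its parameter count or FLOPs (cf. Sec. 3.2.2). A short check using the standard PyTorch API (not the authors' code):

import torch
import torch.nn as nn

x = torch.randn(1, 64, 56, 56)
for rate in (1, 2, 3):  # the three candidates in the operation set O
    conv = nn.Conv2d(64, 64, kernel_size=3, padding=rate, dilation=rate)
    n_params = sum(p.numel() for p in conv.parameters())
    print(rate, tuple(conv(x).shape), n_params)
# Output spatial size and parameter count (36928) are identical for all three
# rates; only the sampling span changes: a 3x3 kernel with dilation r covers a
# (2r + 1) x (2r + 1) window, so r = 1, 2, 3 span 3, 5 and 7 pixels.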
The dilated convolutions in the deep stages further explore the network's potential to detect large objects; they provide an adaptive way to balance the ERF. For light backbones, our CR-ResNet18 and CR-MobileNetV2 both improve AP by 1.7% over the baselines, with all-round improvements from APs to APl. For a light network, it is more efficient to allocate the limited capacity to the deep stages, since the discriminative features captured there can also benefit shallow-stage small objects through the FPN top-down pathway." }, { "heading": "4.2.2 TRANSFERABILITY VERIFICATION", "text": "Different dataset We transfer our searched models to another object detection dataset, VOC (Everingham et al., 2015). Training details can be found in Appendix A.3. We denote the VOC metric mAP@0.5 as AP50 for consistency. As shown in Table 2, our CR-ResNet50 and CR-ResNet101 achieve AP50 improvements of 1.0% and 0.7% over the already strong baselines.

Different task Segmentation is another task that is highly sensitive to the ERF (Hamaguchi et al., 2018; Wang et al., 2018). Therefore, we transfer our computation reallocation networks to the instance segmentation task by using the Mask RCNN (He et al., 2017) framework. The experimental results on COCO are shown in Table 3. The instance segmentation AP of our CR-MobileNetV2, CR-ResNet50 and CR-ResNet101 outperforms the baselines by 1.2%, 1.3% and 1.1% absolute AP, respectively. We also achieve bounding box AP improvements of 1.5%, 1.5% and 1.8%, respectively.

Different head/neck Our work is orthogonal to other improvements on object detection. We exploit the SOTA detector Cascade Mask RCNN (Cai & Vasconcelos, 2018) for further verification. The detector equipped with our CR-Res101 achieves 44.5% AP, better than the regular Res101 baseline of 43.3% by a significant 1.2% gain. Additionally, we evaluate replacing the original FPN with a searched NAS-FPN (Ghiasi et al., 2019) neck to strengthen our results. Res50 with the NAS-FPN neck achieves 39.6% AP, while our CR-Res50 with NAS-FPN achieves 41.0% AP under the same 1× setting. More detailed results can be found in Appendix A.4.

Table 4: COCO minival AP (%) evaluating stage reallocation performance for different networks. Res50 denotes ResNet50, similarly for Res101. ReX50 denotes ResNeXt50, similarly for ReX101.
MobileNetV2 Res18 Res50 Res101 ReX50-32×4d ReX101-32×4d
Baseline AP 32.2 32.1 36.4 38.6 37.9 40.6
Stage-CR AP 33.5 33.4 37.4 39.5 38.9 41.5

Figure 5: Detector FLOPs(G) versus AP on COCO minival. The bold lines and dotted lines are the baselines and our stage computation reallocation models (SCR-), respectively.

Figure 6: Top1 accuracy on ImageNet validation set versus AP on COCO minival. Each dot is a model which has equivalent FLOPs as the baseline." }, { "heading": "4.3 ANALYSIS", "text": "" }, { "heading": "4.3.1 EFFECT OF STAGE REALLOCATION", "text": "Our design includes two parts, stage reallocation search and block operation search. In this section, we analyze the effectiveness of the stage reallocation search alone. Table 4 shows the performance comparison between the baseline and the baseline with our stage reallocation search.
From the light MobileNetV2 model to the heavy ResNeXt101, our stage reallocation brings a solid average 1.0% AP improvement. Figure 5 shows that our Stage-CR network series yields overall improvements over the baselines with a negligible difference in computation. The stage reallocation results for more models are shown in Appendix A.2. There is a trend of reallocating computation from the shallow stages to the deep stages. The intuitive explanation is that reallocating more capacity to the deep stages results in a balanced ERF, as Figure 1 shows, and enhances the ability to detect medium and large objects." }, { "heading": "4.3.2 CORRELATIONS BETWEEN CLS. AND DET. PERFORMANCE", "text": "Often, a large AP increase can be obtained by simply replacing the backbone with a stronger network, e.g. from ResNet50 to ResNet101 and then to ResNeXt101. The underlying assumption is that a strong network performs well on both classification and detection tasks. We further explore the performance correlation between these two tasks through extensive experiments. We plot ImageNet top-1 accuracy versus COCO AP in Figure 6 for different architectures of the same FLOPs. Each dot is a single network architecture. We find that although the performance correlation between the two tasks is basically positive, better classification accuracy does not always lead to better detection accuracy. This study further shows the gap between these two tasks." }, { "heading": "5 CONCLUSION", "text": "In this paper, we present CR-NAS (Computation Reallocation Neural Architecture Search), which learns computation reallocation strategies across different resolutions and spatial positions. We design a two-level reallocation space and a novel hierarchical search procedure to cope with the complex search space. Extensive experiments show the effectiveness of our approach. The discovered models have great transferability to other detection necks/heads, other datasets and other vision tasks. Our CR-NAS can be used as a plugin for other detection backbones to further boost the performance under given computation resources." }, { "heading": "A APPENDIX", "text": "A.1 SUPERNET TRAINING

Both the stage and operation supernets use exactly the same settings. The supernet training process adopts the 'pre-training and fine-tuning' paradigm. For ResNet and ResNeXt, the supernet channel distribution is [32, 64, 128, 256].

Supernet pre-training. We use ImageNet-1k for supernet pre-training. We use stochastic gradient descent (SGD) as the optimizer with 0.9 momentum and 0.0001 weight decay. The supernet is trained for 150 epochs with batch size 1024. To smooth the jittering in the training process, we adopt cosine learning rate decay (Loshchilov & Hutter, 2016) with an initial learning rate of 0.4. Warm-up and synchronized BN (Peng et al., 2018) are adopted to help convergence.

Supernet fine-tuning. We fine-tune the pre-trained supernet on archtrain. The input images are resized to have a short side of 800 pixels or a long side of 1333 pixels. We use stochastic gradient descent (SGD) as the optimizer with 0.9 momentum and 0.0001 weight decay. The supernet is trained for 25 epochs (known as the 2× schedule (Girshick et al., 2018)). We use multi-GPU training over 8 1080TI GPUs with a total batch size of 16. The initial learning rate is 0.00125 per image and is divided by 10 at 16 and 22 epochs; a sketch of this schedule is given below.
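A sketch of this step schedule in Python, under our reading of the settings above (not released code); the warm-up length is an assumption, since only the presence of warm-up is stated:

def detection_lr(epoch_frac, batch_size=16, base_lr_per_image=0.00125,
                 decay_epochs=(16, 22), warmup_epochs=0.5):
    # Per-image-scaled learning rate with linear warm-up and step decay.
    lr = base_lr_per_image * batch_size  # 0.02 for a total batch of 16
    if epoch_frac < warmup_epochs:       # linear warm-up (length assumed)
        return lr * epoch_frac / warmup_epochs
    for decay_at in decay_epochs:        # divide by 10 at epochs 16 and 22
        if epoch_frac >= decay_at:
            lr *= 0.1
    return lr

assert abs(detection_lr(10.0) - 0.02) < 1e-12    # before the first decay
assert abs(detection_lr(17.0) - 0.002) < 1e-12   # after epoch 16
assert abs(detection_lr(23.0) - 0.0002) < 1e-12  # after epoch 22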
Warm-up and synchronized BatchNorm (SyncBN) (Peng et al., 2018) are adopted to help convergence.

A.2 REALLOCATION SETTINGS AND RESULTS

Stage allocation space For ResNeXt, the stage allocation space is exactly the same as for the ResNet series. For MobileNetV2, the original block numbers in Sandler et al. (2018) are defined by n = [1, 1, 2, 3, 4, 3, 3, 1, 1, 1]. We build our allocation space on the bottleneck operator by fixing the stem and tail components. An architecture is represented as m = [1, 1, m1, m2, m3, m4, m5, 1, 1, 1]. The allocation space is M = [M1, M2, M3, M4, M5], with M1, M2 = {1, 2, 3, 4, 5}, M3 = {3, 4, 5, 6, 7}, and M4, M5 = {2, 3, 4, 5, 6}. It is worth mentioning that the computation cost in the different stages of m is not exactly the same because of the irregular channel widths; we therefore weight [m1, m2, m3, m4, m5] by [1.5, 1, 1, 0.75, 1.25].

Computation reallocation results We apply our CR-NAS in a sequential way. First, we reallocate the computation across different resolutions; the Stage-CR results are shown in Table A.2. Then we search for the spatial allocation by adopting dilated convolutions with different rates. We denote the operation codes as follows, so that our final model can be represented as a series of allocation codes:
[0] dilated conv with rate 1 (normal conv) [1] dilated conv with rate 2 [2] dilated conv with rate 3

A.3 IMPLEMENTATION DETAILS OF VOC

We use VOC trainval2007+trainval2012 as our whole training set and report results on VOC test2007. The ImageNet pre-trained model is adopted. The input images are resized to have a short side of 600 pixels or a long side of 1000 pixels. We use stochastic gradient descent (SGD) as the optimizer with 0.9 momentum and 0.0001 weight decay. We train all models for 18 whole epochs. We use multi-GPU training over 8 1080TI GPUs with a total batch size of 16. The initial learning rate is 0.00125 per image and is divided by 10 at 15 and 17 epochs. Warm-up and synchronized BatchNorm (SyncBN) (Peng et al., 2018) are adopted to help convergence.

A.4 MORE EXPERIMENTS

Longer schedule The 2× schedule means training for 25 epochs in total, as indicated in Girshick et al. (2018). The initial learning rate is 0.00125 per image and is divided by 10 at 16 and 22 epochs. The other training settings are exactly the same as in the 1× schedule.

Powerful detector Cascade Mask RCNN (Cai & Vasconcelos, 2018) is a SOTA multi-stage object detector. The detector is trained for 20 epochs. The initial learning rate is 0.00125 per image and is divided by 10 at 16 and 19 epochs. Warm-up and synchronized BN (Peng et al., 2018) are adopted to help convergence.

Powerful searched neck NAS-FPN (Ghiasi et al., 2019) is a powerful scalable feature pyramid architecture searched for object detection. We reimplement NAS-FPN (7 @ 384) in Faster RCNN (the original paper implements it in RetinaNet (Lin et al., 2017b)). The detector is trained under the 1× setting as described in Sec. 4.1." } ]
2020
null
SP:49ef0331201083490748c1dbcd12d130cb0a68d4
[ "This work studies the head size <--> head number tradeoff in multihead attention. It argues and formally establishes that (1) the expressivity of an attention head is determined by its dimension and (b) fixing the head dimension, one gains additional expressive power by using more heads. In response to such observations, the paper proposes Fixed Multihead Attention, where the constraint that `head_size * number_of_heads = embedding_size` in standard multihead attention is lifted; and it allows for using more attention heads without making each head smaller. One can control the total amount of parameters by using smaller embedding sizes, making it comparable (in terms of #parameters) to standard multihead attention. Empirical results on language modeling and NLI tasks confirms the arguments. ", "This work discusses how to set the projection size for each head (head size) in multi-head attention module, especially Transformer. Theorem 1 is interesting, which points out a lower bound for the head size. The proposed method is to decouple the dependency between the head size and the embedding size. The experiments show that the proposed method is able to achieve comparable performance to BERT with fewer training cost." ]
Attention based Transformer architecture has enabled significant advances in the field of natural language processing. In addition to new pre-training techniques, recent improvements crucially rely on working with a relatively larger embedding dimension for tokens. Unfortunately, this leads to models that are prohibitively large to be employed in the downstream tasks. In this paper we identify one of the important factors contributing to the large embedding size requirement. In particular, our analysis highlights that the scaling between the number of heads and the size of each head in the existing architectures gives rise to this limitation, which we further validate with our experiments. As a solution, we propose a new way to set the projection size in attention heads that allows us to train models with a relatively smaller embedding dimension, without sacrificing the performance.
[]
[ { "authors": [ "Ciprian Chelba", "Tomas Mikolov", "Mike Schuster", "Qi Ge", "Thorsten Brants", "Phillipp Koehn" ], "title": "One billion word benchmark for measuring progress in statistical language modeling", "venue": "CoRR, abs/1312.3005,", "year": 2013 }, { "authors": [ "Rewon Child", "Scott Gray", "Alec Radford", "Ilya Sutskever" ], "title": "Generating long sequences with sparse transformers", "venue": "arXiv preprint arXiv:1904.10509,", "year": 2019 }, { "authors": [ "Chung-Cheng Chiu", "Tara N Sainath", "Yonghui Wu", "Rohit Prabhavalkar", "Patrick Nguyen", "Zhifeng Chen", "Anjuli Kannan", "Ron J Weiss", "Kanishka Rao", "Ekaterina Gonina" ], "title": "State-of-the-art speech recognition with sequence-to-sequence models", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "Kyunghyun Cho", "Bart van Merrienboer", "Dzmitry Bahdanau", "Yoshua Bengio" ], "title": "On the properties of neural machine translation: Encoder–decoder approaches", "venue": "In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation,", "year": 2014 }, { "authors": [ "Gonçalo M Correia", "Vlad Niculae", "André FT Martins" ], "title": "Adaptively sparse transformers", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Jonas Gehring", "Michael Auli", "David Grangier", "Denis Yarats", "Yann N Dauphin" ], "title": "Convolutional sequence to sequence learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Yuxi Li" ], "title": "Deep reinforcement learning: An overview", "venue": "arXiv preprint arXiv:1701.07274,", "year": 2017 }, { "authors": [ "Peter J Liu", "Mohammad Saleh", "Etienne Pot", "Ben Goodrich", "Ryan Sepassi", "Lukasz Kaiser", "Noam Shazeer" ], "title": "Generating wikipedia by summarizing long sequences", "venue": "arXiv preprint arXiv:1801.10198,", "year": 2018 }, { "authors": [ "Paul Michel", "Omer Levy", "Graham Neubig" ], "title": "Are sixteen heads really better than one", "venue": "arXiv preprint arXiv:1905.10650,", "year": 2019 }, { "authors": [ "Myle Ott", "Sergey Edunov", "David Grangier", "Michael Auli" ], "title": "Scaling neural machine translation", "venue": "In Proceedings of the Third Conference on Machine Translation: Research Papers,", "year": 2018 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training", "venue": "Technical report,", "year": 2018 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "Technical report,", "year": 2019 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "Squad: 100,000+ questions for machine comprehension of text", "venue": "arXiv 
preprint arXiv:1606.05250,", "year": 2016 }, { "authors": [ "Hao Sun", "Xu Tan", "Jun-Wei Gan", "Hongzhi Liu", "Sheng Zhao", "Tao Qin", "Tie-Yan Liu" ], "title": "Token-level ensemble distillation for grapheme-to-phoneme conversion", "venue": null, "year": 1904 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V Le" ], "title": "Sequence to sequence learning with neural networks. In Advances in neural information processing", "venue": null, "year": 2014 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ashish Vaswani", "Samy Bengio", "Eugene Brevdo", "Francois Chollet", "Aidan N. Gomez", "Stephan Gouws", "Llion Jones", "Łukasz Kaiser", "Nal Kalchbrenner", "Niki Parmar", "Ryan Sepassi", "Noam Shazeer", "Jakob Uszkoreit" ], "title": "Tensor2tensor for neural machine", "venue": "translation. CoRR,", "year": 2018 }, { "authors": [ "Elena Voita", "David Talbot", "Fedor Moiseev", "Rico Sennrich", "Ivan Titov" ], "title": "Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned", "venue": null, "year": 1905 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Adina Williams", "Nikita Nangia", "Samuel Bowman" ], "title": "A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1112–1122", "venue": "Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Felix Wu", "Angela Fan", "Alexei Baevski", "Yann N Dauphin", "Michael Auli" ], "title": "Pay less attention with lightweight and dynamic convolutions", "venue": null, "year": 1901 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Ruslan Salakhutdinov", "Quoc V Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": null, "year": 1906 }, { "authors": [ "Yang You", "Jing Li", "Jonathan Hseu", "Xiaodan Song", "James Demmel", "Cho-Jui Hsieh" ], "title": "Reducing bert pre-training time from 3 days to 76", "venue": null, "year": 1904 }, { "authors": [ "Vinicius Zambaldi", "David Raposo", "Adam Santoro", "Victor Bapst", "Yujia Li", "Igor Babuschkin", "Karl Tuyls", "David Reichert", "Timothy Lillicrap", "Edward Lockhart" ], "title": "Relational deep reinforcement learning", "venue": "arXiv preprint arXiv:1806.01830,", "year": 2018 }, { "authors": [ "Han Zhang", "Ian Goodfellow", "Dimitris Metaxas", "Augustus Odena" ], "title": "Self-attention generative adversarial networks", "venue": "arXiv preprint arXiv:1805.08318,", "year": 2018 }, { "authors": [ "Yukun Zhu", "Ryan Kiros", "Rich Zemel", "Ruslan Salakhutdinov", "Raquel Urtasun", "Antonio Torralba", "Sanja Fidler" ], "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Attention based architectures, such as Transformers, have been effective for sequence modelling tasks such as machine translation (Gehring et al., 2017; Vaswani et al., 2017), question answering, sentence classification (Radford et al., 2018; Devlin et al., 2018) and document generation (Liu et al., 2018). These models have emerged as better alternatives to the recurrent models - RNNs (Sutskever et al., 2014), LSTMs (Hochreiter & Schmidhuber, 1997) and GRUs (Cho et al., 2014). This is mainly due to their feed forward structure, which removes the sequential processing bottleneck for sequence data, making them easier to train compared to the recurrent models. Self attention models also have found applications in vision (Wang et al., 2018), adversarial networks (Zhang et al., 2018), reinforcement learning (Zambaldi et al., 2018; Li, 2017) and speech recognition (Chiu et al., 2018).\nRecent advances in using the self attention models in natural language tasks have been made by first using a language modeling task to pre-train the models and then fine tuning the learned models on specific downstream tasks. Radford et al. (2018) and Devlin et al. (2018) used Transformers to pre-train a language model and showed that the fine tuned model outperforms LSTMs on many natural language understanding and question answering tasks. For example, BERT (Devlin et al., 2018), a 24 layer transformer model, is shown to achieve the state of the art performance on several NLP tasks, including on the SQuAD dataset. These advances, in addition to novel pre-training tasks, relied on bigger models with a larger embedding size. BERT model uses an embedding size of 1024 (Devlin et al., 2018); GPT-2 uses models with embedding size up to 1600 (Radford et al., 2019).\nA single Transformer block consists of two key components: a multi-head self attention layer followed by a feed forward layer (Vaswani et al., 2017). A single head in a multi-head attention layer, computes self attention between the tokens in the input sequence, which it then uses to compute a weighted average of embeddings for each token. To keep the number of parameters fixed in the attention layer, each head projects the data into a lower dimensional subspace, dimension of which scales as 1/(number of heads), and computes the self attention in this subspace. This projection size for each head is commonly known as the head size.\nDespite the advances in using Transformer models for various tasks, their functioning and design choices still remain mysterious and are not well understood. Can the attention layer learn arbitrary contextual representations? What is the role of the feed forward layer in the Transformer block? Do we need such a large embedding size to capture the context of all the tokens? Answering these questions requires an understanding of the representation power of the units in the Transformer.\nIn this paper we take some of the first steps towards developing such an understanding of the Transformer. In particular, we focus on the representation power of the multi-head self attention layer.\nFirst, we analyze the representation power of a single self attention unit and show that it crucially depends on the projection sizes used to compute the dot product attention.\nWe next study the advantage of having multiple heads in the attention layer. 
It is generally believed that increasing the number of heads helps by allowing the heads to compute context from different representation subspaces at different positions. However, increasing the number of heads decreases the head size, decreasing the expressive power of individual heads. We show that reducing the head size to a value below the input sequence length hurts the representation power of each head. This is because a smaller head size introduces a rank constraint on the projection matrices in each head, and limits their representation power. We indeed notice this effect in practice: while the performance improves with increasing the number of heads in the beginning (Devlin et al., 2018), we notice a drop in the performance once the number of heads increases beyond a certain threshold, as seen in Table 1 and Fig. 1 (see also Table 4(A) in Vaswani et al. (2017)).\nThis heuristic of scaling the head size inversely with the number of heads was proposed initially in Vaswani et al. (2017) and has become the standard way of using multi-head attention (Radford et al., 2018; Devlin et al., 2018). In order to avoid hurting the performance, the existing Transformer models allow for multiple heads by increasing the embedding size, which in turn increases the head size. However, larger embedding size, in addition to increasing the number of parameters, makes it expensive to use the model and the learned embeddings in downstream tasks, as the downstream model sizes scale with the embedding size of the tokens. For example, the inference time and memory required in retrieval tasks increases linearly with the embedding size.\nBased on these observations, we propose a new way to set the projection size in the attention heads, in which each head has a fixed head size that is independent of both the number of heads and the embedding size of the model. This allows us to train models with a relatively smaller embedding size without affecting the head size. It also allows us to increase the number of heads per layer to improve the performance. Another advantage of the fixed head size Transformer is, unlike the standard Transformer, which requires the number of heads to be a factor of the embedding size, we are free to set arbitrary number of heads as required for the task.\nWe evaluate this fixed head size Transformer on language modeling (LM1B dataset), natural language inference (MNLI dataset) and question answering tasks (SQuAD dataset). We show that the modified Transformer trained with an embedding size of 512 can match the performance of the BERTLARGE(Devlin et al., 2018), a Transformer with an embedding size of 1024 (see Fig. 2). We further present experimental results evaluating the effect of different choices of the head size and the embedding size in the Section 4.\nThe contributions of this paper are summarized below.\n• We analyze the representation power of the multi-head self attention layer and show the limitation the embedding size places on the number of heads.\n• We propose a new way to set the head size, and show the proposed fixed head size layers are strictly better than the standard multi-head attention layers in terms of their expressive power. This modification allows us to both increase the number of heads per layer and decrease the embedding size, without hurting the performance.\n• We experimentally show that the fixed head size Transformer can be trained with a smaller embedding size and more heads on three standard NLP tasks." 
}, { "heading": "1.1 RELATED WORKS", "text": "Given the significance of self attention models, there has been work trying to both improve the performance and speedup the computation in Transformers. Ott et al. (2018) and You et al. (2019) reduce precision and use large batch training to reduce the training time of the attention models. Child et al. (2019) propose sparse self attention models to speed up the computation in the attention layer for long sequence data generation tasks. They show that these sparse attention models can be trained on tasks with sequence length greater than 10k without sacrificing the accuracy. Dehghani et al. (2018) propose a depth recurrent Transformer network that reuses the parameters across layers. They show that this modification makes the Transformer networks Turing complete even with finite precision weights. Yang et al. (2019) propose a new way to increase the effective sequence length that the Transformer attends to, by reusing the intermediate embeddings across sequences. They show that the modified architecture performs better on tasks that require computing context over longer sequence lengths. We note that most of these modifications rely on the multi-head self attention, the same building block of the Transformers. Our work is studying this basic multi-head attention layer, and suggesting a new way to set the head size, which can be easily used along with any of the above architectural modifications.\nWu et al. (2019) propose to replace the self-attention layer with lightweight dynamic convolutions and show improved performance on machine translation and language modeling. Even though the resulting model has faster inference time, it still needs to use a large embedding size (1024), as big as the original attention models. We believe the techniques in this paper can be combined with these results to realize both smaller embedding size and faster inference time.\nSun et al. (2019) perform neural architecture search using evolutionary methods on sequence to sequence models and find an evolved transformer architecture, which in addition to multi-head attention units, has convolution filter and gated linear units. Our proposed modifications stay closer to Transformers in spirit and can be used as seed units for this architecture search.\nVoita et al. (2019); Michel et al. (2019) study the importance of different heads in an attention layer. They observe that, during inference, many of the heads in each layer can be pruned away with a little effect on the prediction. However, they still need multiple heads during the training.\nChild et al. (2019); Correia et al. (2019) impose sparsity structure on the attention layer during training to improve both interpretability and performance. Fixing the head size will in fact make it easier to learn such sparsity patterns, as a low rank constraint does not allow a head to express all possible sparsity patterns. Combining these techniques can hence potentially enable training of sparse attention models with a smaller embedding size." }, { "heading": "2 TRANSFORMER ARCHITECTURE AND ANALYSIS", "text": "In this section we present the Transformer architecture and analyze the representation power of the multi-head self attention, a key component of the Transformer block.\nThe input to a Transformer network is a sequence of n tokens. Typically, each token is converted into a token embedding of dimension d by an embedding layer. We let X ∈ Rd×n be the embedding matrix corresponding to the n tokens in the input sequence." 
}, { "heading": "2.1 SINGLE-HEAD ATTENTION", "text": "The transformer block is a combination of a self attention layer followed by a feed forward layer (Vaswani et al., 2017). Both layers have a skip connection and use Layer Normalization (LN) (Ba et al., 2016). In particular, for token embeddings X, the dot product attention is computed as follows.\nAttention(X) = WvX · Softmax [ (WkX)\nT (WqX)√ dk\n] = WvX ·P. (1)\nHere Wq ∈ Rdq×d, Wk ∈ Rdk×d and Wv ∈ Rdv×d represent the projection matrices associated with the query, key and value respectively in an attention unit (Vaswani et al., 2017). For a single-head attention unit, we have dq = dk = dv = d. In the dot-product attention (cf. (1)), P aims to capture the context of the input for a given token based on the remaining tokens in the input sequence.\nSubsequently, the output of the attention layer takes the following form. LN (X+Wo · Attention(X)) , (2) where LN(·) represents the layer-normalization operation. Given the attention module, as defined in (1), it is natural to question its ability to represent arbitrary contexts P for a given input sequence X.\nTowards this, we show that for a large enough projection dimension d, the unit has enough capacity to represent arbitrary contexts over a given input sequence. In the following result we establish that for a large enough projection size an attention unit can represent any data pair (X,P). We also show that the model cannot represent arbitrary context when d is smaller than n. Theorem 1 (Representation Theorem). If dq = dk = d ≥ n, then given any full column rank matrix X ∈ Rd×n and an arbitrary n× n positive column stochastic matrix P, there always exists d× d projection matrices Wq and Wk such that\nSoftmax\n[ (WkX)\nT (WqX)√ dk\n] = P. (3)\nIf dq = dk = d < n, there exist X and P such that (3) does not hold for all Wq and Wk.\nThe proof of Theorem 1 is provided in the supplementary material. This result shows that the projection dimension dq = dk = d needs to be larger than the sequence length n for the attention unit to be able to represent any desired context P. Even though this result describes a single example sequence case, it highlights a fundamental property of the model architecture that increasing the projection size increases the capacity of the attention heads." }, { "heading": "2.2 MULTI-HEAD ATTENTION", "text": "As discussed in Section 2.1, an attention unit updates the embedding of an input token based on a weighted average of the embeddings of all the tokens in the sequence, using the context P (cf. (1)). Vaswani et al. (2017) proposed Multi-Head attention mechanism that increases the representation power of an attention layer, where multiple attention units operate on different low dimensional projections of the input, with each attention unit being referred to as a head. This is followed by concatenation of the outputs from different heads. In particular, the computation inside a Multi-Head attention with h heads takes the following form:\nhead(X)i = WivX · Softmax [ (WikX) T (WiqX)/ √ d h ] ∈ R dh×n\nMultiHead(X) = Concat[head(X)1, · · · , head(X)h] ∈ Rd×n. The output of the Multi-head attention layer then becomes\nZ = LN (X+Wo ·MultiHead(X)) , (4) where Wo ∈ Rd×d. For a model with h heads, the query, key and value projection matrices {Wiq}, {Wik} and {Wiv} are dh × d matrices. Therefore, each head projects the input onto a d h -dimensional subspace to compute the context, and keeps the number of parameters fixed per layer. 
Using multiple heads in this way has been observed empirically to increase the expressive power of the attention layer (Vaswani et al., 2017)." }, { "heading": "2.3 DEPENDENCE OF THE NUMBER OF HEADS ON THE EMBEDDING SIZE", "text": "While increasing the number of heads seemingly gives the model more expressive power, at the same time we are reducing the head size, which can decrease the expressive power. When the number of heads h is larger than d/n, the attention unit inside each head projects onto a dimension smaller than n, and loses its ability to represent arbitrary context vectors (cf. Theorem 1). Since the sequence length is fixed by the data/task at hand, the only remaining way to increase the number of heads without losing expressiveness is to increase the embedding size d. This corresponds to a fundamental limitation of the model architecture: one needs to increase the embedding size in order to support more heads.

Unfortunately, increasing the embedding size leads to higher computation and memory requirements to train and store the model. Further, since it is common to use learned embeddings from Transformer based models for downstream tasks (Devlin et al., 2018), a larger embedding size increases the model size and computation required for all the downstream tasks as well. Given the widespread success of the attention mechanism, this highlights the need for a modified model architecture that can leverage the advantages of MultiHead without suffering from the requirement of large embedding sizes." }, { "heading": "3 CONCISE MULTI-HEAD TRANSFORMER", "text": "In this section we propose a different way of setting the head size of the Transformer, which allows us to enjoy the higher expressive power of multiple heads without requiring the embedding size to be large. The key is to decouple the dependency between the projection size in a head and the embedding size of the model. The projection matrices now project onto subspaces of a fixed dimension dp, irrespective of the number of heads h. This approach, where dp is independent of d and h, leads to the following attention mechanism.

fixedhead(X)_i = V^i_v X · Softmax[(V^i_k X)^T (V^i_q X) / √d_p] ∈ R^{d_p × n}

FixedMultiHead(X) = Concat[fixedhead(X)_1, · · · , fixedhead(X)_h] ∈ R^{d_p·h × n}.

Note that the projection matrices used here, {V^i_q}, {V^i_k} and {V^i_v}, are d_p × d matrices. With V_o ∈ R^{d × h·d_p}, the output of this new multi-head attention layer takes the following form.

Z = LN(X + V_o · FixedMultiHead(X)).

This modification makes each attention head more similar to a hidden unit in a feed forward network or a filter in a convolutional network, and allows us to vary the number of heads without the worry of reducing the representation power per head. The downside is that, unlike the standard MultiHead, the number of parameters per layer increases with the number of heads. However, this modification allows us to train a model with a smaller embedding size, ultimately allowing us to reduce the total number of parameters in the model.

Choice of the head size. Our proposed modification introduces the head size dp as a new model hyperparameter. We choose the head size to be 128 for our BERT experiments, as most of the pre-training is done with sequence length 128. While our ablation studies (cf. Table 2(B)) show that a bigger head size improves the performance, there is a trade-off between the head size, the number of heads, and the number of layers.
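A minimal numpy sketch of the fixed head size layer follows (again omitting the skip connection and layer norm); h = 12, dp = 128 and d = 256 are illustrative values, and note that h no longer needs to divide d:

import numpy as np

def fixed_multihead(X, Vq, Vk, Vv, Vo, dp):
    # Fixed head size attention of Sec. 3: each of the h heads projects to a
    # fixed dp (independent of d and h); Vq, Vk, Vv hold h matrices of shape
    # (dp, d) and Vo is d x (dp * h).
    heads = []
    for q, k, v in zip(Vq, Vk, Vv):
        logits = (k @ X).T @ (q @ X) / np.sqrt(dp)   # n x n
        logits -= logits.max(axis=0, keepdims=True)
        P = np.exp(logits)
        P /= P.sum(axis=0, keepdims=True)            # column-stochastic context
        heads.append(v @ X @ P)                      # dp x n
    return Vo @ np.concatenate(heads, axis=0)        # d x n

d, n, dp, h = 256, 128, 128, 12   # h need not divide d here
rng = np.random.default_rng(0)
X = rng.normal(size=(d, n))
proj = lambda: [rng.normal(size=(dp, d)) for _ in range(h)]
out = fixed_multihead(X, proj(), proj(), proj(),
                      rng.normal(size=(d, dp * h)), dp)
assert out.shape == (d, n)
# Per layer: 3*h*dp*d (query/key/value) + d*h*dp (output) = 4*h*dp*d
# parameters, versus 4*d*d for the standard MultiHead, so h becomes a free
# capacity knob even at a small embedding size d.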
We found that having a sufficient head size, matching the pre-training sequence length, is better than having a larger embedding size (cf. Section 4)." }, { "heading": "3.1 MULTIHEAD VS. FIXEDMULTIHEAD ATTENTION", "text": "Given a MultiHead layer, we can always represent it using a FixedMultiHead layer whenever the head size satisfies dp ≥ d/h. While this shows that increasing the number of heads h beyond d/dp makes individual heads of the FixedMultiHead as expressive as the ones in the MultiHead, it is not clear if FixedMultiHead is strictly more expressive. Can the FixedMultiHead layer represent functions that the standard MultiHead layer cannot represent? In this subsection we show that indeed, in the multi-head regime, the FixedMultiHead layer is strictly better than the standard MultiHead layer in terms of expressive power.

Consider the standard multi-head attention units in (4):

fW(X) = Wo · MultiHead(X). We denote the collection of all parameter matrices as W. Similarly, consider the function represented by the fixed head size attention units:

gV(X) = Vo · FixedMultiHead(X). Let V be the collection of all these parameter matrices. We define F and G to be the classes of functions fW(·) and gV(·), respectively. As noted above, if dp ≥ d/h, we have F ⊂ G. The following theorem shows that even for simple examples in G, functions in F fail to approximate them beyond a certain accuracy; this shows that F is a strict subset of G.

Theorem 2. Given n ≥ 2 and d ≥ dp, let h > d/dp. Consider a fixed head size attention layer gV(·) with parameters that satisfy the following conditions: Vo · [V^1_v; . . . ; V^h_v] is full rank, where [V^1_v; . . . ; V^h_v] ∈ R^{h·dp × d} stacks the value projection matrices vertically, and (V^i_k)^T V^i_q = U for all i = 1, . . . , h, where U is a rank-dp matrix. Then, for any fW ∈ F, there exists X ∈ R^{d×n} such that fW(X) ≠ gV(X).

Because ‖fW(X) − gV(X)‖ is a continuous function of X, the existence of such an X implies that the integral of the norm of the difference (i.e., the approximation error) is strictly positive.

This theorem shows that the expressive power of the FixedMultiHead attention function class is strictly superior to that of the standard MultiHead attention function class. Hence the heuristic of reducing the head size with the number of heads limits the expressive power of MultiHead, whereas using the fixed head size Transformers increases the expressive power of the attention layers." }, { "heading": "4 EXPERIMENTS", "text": "In this section we present our experiments on three standard NLP tasks, language modeling (LM1B), question answering (SQuAD), and sentence entailment (MNLI), to demonstrate: 1) Increasing the number of heads beyond a certain point hurts the performance of the standard Transformer, but always helps with our proposed modification; 2) Decoupling the head size from the embedding size allows us to train models with a smaller embedding size; and 3) Setting the head size appropriately in the Transformers allows us to train models with better performance scaling. We first describe our experimental setup, followed by our results and ablation studies on the proposed modifications." }, { "heading": "4.1 SETUP AND DATASETS", "text": "For the language modeling task we use the one billion word benchmark dataset (LM1B) (Chelba et al., 2013). This dataset has around 30M training examples and around 300k examples in the test set. We use a sub-word tokenizer with a 32k vocabulary and cap the input at a sequence length of 256. We train a 6-layer Transformer model with the ADAM optimizer using the tensor2tensor library (Vaswani et al., 2018).
The detailed experimental settings are presented in Section C.

Multi-Genre Natural Language Inference (MNLI) is a sentence-level entailment task, designed to test natural language understanding (Williams et al., 2018). Given a premise sentence and a hypothesis sentence, the goal is to predict whether the hypothesis entails, contradicts or is neutral to the premise. We report the classification accuracy for this task. The Stanford Question Answering Dataset (SQuAD) is a question answering dataset, where given a paragraph and a question, the goal is to predict the sequence of words in the paragraph that constitutes the answer to the question (Rajpurkar et al., 2016). This is a harder, word-level task compared to the sentence classification task. We report both Exact Match (EM) and F1 scores for this task. All results in this section are reported on the Dev sets, which have not been used in any experimental choices in this paper.

For these latter two tasks, we follow the two-stage approach of first pre-training on a language modeling task and then fine-tuning the models on the task data. We follow the same experimental setup for both pre-training and fine-tuning as BERT (Devlin et al., 2018), and use their codebase (https://github.com/google-research/bert). We first pre-train our models using the masked language model and next sentence prediction objectives, and then fine-tune the pre-trained model for the individual tasks (Devlin et al., 2018). For pre-training we use English Wikipedia and the BooksCorpus dataset (Zhu et al., 2015). The input to the models is tokenized using the WordPiece representation with 30000 tokens in the vocabulary. We present the key experimental choices in Section C, and refer the reader to Devlin et al. (2018) for a complete description of the setup." }, { "heading": "4.2 RESULTS", "text": "For our first set of experiments we want to see if the fixed head size Transformer with a smaller embedding size can match the performance of standard Transformers with a larger embedding size. As a baseline for the language modeling task, we train Transformers with the embedding size increasing from 256 to 512 and with different numbers of heads. We train the fixed head size Transformers with a fixed embedding size of 256 and a head size of 32, with the number of heads increasing from 4 to 70. We notice that the fixed head size models with an embedding size of 256 can match the performance of standard Transformers with an embedding size of 512 (see Fig. 1). Further, this provides better performance scaling than the standard Transformers. We repeat a similar experiment on the other two tasks, where as a baseline we train BERTLARGE, a 24-layer, 16-head Transformer architecture, with embedding sizes from 512 to 1024. We compare it with the modified model, with an embedding size of 512 and a head size of 128, with the number of heads increasing from 8 to 32. We again notice that the fixed head size model with an embedding size of 512 can match the performance of BERTLARGE (see Fig. 2).

Note that simply trying to increase the head size of the standard Transformers by decreasing the number of heads does not improve the performance, as decreasing the number of heads reduces the expressive power of the model (see Fig. 4). Hence, both the head size and the number of heads need to be set high enough for good performance." }, { "heading": "4.3 ABLATION", "text": "Increasing heads. From Table 1 and Fig.
{ "heading": "4.3 ABLATION", "text": "Increasing heads. From Table 1 and Fig. 1a we can see that increasing the number of heads hurts the performance of the Transformer after a certain number. We repeat the same experiments with the fixed head size Transformer and present the results in Table 2(A) and Fig. 3a. The results show that the performance of the modified model improves monotonically as the number of heads increases. This is because the model capacity (a function of the head size) is no longer reduced with the increasing number of heads.

Increasing head size. In Table 2(B) and Fig. 3b, we present comparisons between models with different head sizes. This shows that the gains in the performance of the fixed head size models indeed come from adjusting the head size of the query, key and value layers in the attention unit. The table shows a clear trend of better performance with a larger head size, suggesting that it is indeed an important factor in the performance of attention models." }, { "heading": "5 CONCLUSION", "text": "In this paper we studied the representation power of multi-head self-attention models and showed that the larger embedding size used in current models is a consequence of the limitations of the current multi-head attention formulation. We propose a modified way to set the head size that allows us to increase the number of heads without increasing the embedding size. As a consequence, we are able to train Transformers with a smaller embedding size and fewer parameters, without hurting the performance. In the future, it will be interesting to experiment with varying head sizes within an attention block and across layers. This requires further understanding of the role of each layer in computing the context, which is an interesting direction for future work." }, { "heading": "B PROOFS", "text": "Proof of Theorem 1. $d \geq n$ case. To prove the first part of the result, we present an explicit construction of $W_k$ and $W_q$ which allows us to generate $P$ from $X$ using the dot-product attention. Since $X$ has full column rank, there exists a left inverse $X^\dagger = (X^T X)^{-1} X^T \in \mathbb{R}^{n \times d}$ such that $X^\dagger X = I_n$. Let $W_k = \tilde{W}_k X^\dagger$ and $W_q = \tilde{W}_q X^\dagger$. Then

$$X^T W_k^T W_q X = X^T (X^\dagger)^T \tilde{W}_k^T \tilde{W}_q X^\dagger X = I_n \cdot \tilde{W}_k^T \tilde{W}_q \cdot I_n = \tilde{W}_k^T \tilde{W}_q = \tilde{W}_{kq}. \quad (5)$$

Now that the above choice of $W_q$ and $W_k$ has handled the dependence on $X$, we will choose a $\tilde{W}_{kq}$ depending on $P$ and finish the construction. Below we express the Softmax operation on the query and key inner products; note that the Softmax here is a column-wise operator computing the attention scores for each query. Using (5), we obtain

$$\text{Softmax}\left[\frac{(W_k X)^T (W_q X)}{\sqrt{d_k}}\right] = \text{Softmax}\left[\frac{\tilde{W}_{kq}}{\sqrt{d_k}}\right] = \exp\left(\frac{\tilde{W}_{kq}}{\sqrt{d_k}}\right) \cdot D_{\tilde{W}_{kq}}^{-1},$$

where $D_{\tilde{W}_{kq}}$ is an $n \times n$ diagonal matrix such that

$$(D_{\tilde{W}_{kq}})_{ii} = \sum_{j=1}^{n} \exp\left(\frac{(\tilde{W}_{kq})_{ji}}{\sqrt{d_k}}\right) = \left(\mathbf{1}^T \exp\left(\frac{\tilde{W}_{kq}}{\sqrt{d_k}}\right)\right)_i.$$

Hence, we can establish the desired result by showing that there always exists a $\tilde{W}_{kq}$ that satisfies the following fixed-point equation:

$$\exp\left(\frac{\tilde{W}_{kq}}{\sqrt{d_k}}\right) = P \cdot D_{\tilde{W}_{kq}}. \quad (6)$$

Given $P$, to construct such a $\tilde{W}_{kq}$, we pick an arbitrary positive diagonal matrix $D_0$ and set

$$\tilde{W}_{kq} = \sqrt{d_k} \cdot \log\left(P \cdot D_0\right). \quad (7)$$

Since $P$ is a positive matrix, such a $\tilde{W}_{kq}$ always exists. Next, we verify that this construction indeed satisfies the fixed-point equation (6). Note that

$$D_{\tilde{W}_{kq}} = \text{Diag}\left(\mathbf{1}^T \exp\left(\frac{\tilde{W}_{kq}}{\sqrt{d_k}}\right)\right) = \text{Diag}\left(\mathbf{1}^T P \cdot D_0\right) = D_0, \quad (8)$$

where the last equality follows from the fact that $P$ is a column-stochastic matrix. Now, using (7) and (8),

$$\exp\left(\frac{\tilde{W}_{kq}}{\sqrt{d_k}}\right) = P \cdot D_0 = P \cdot D_{\tilde{W}_{kq}}.$$

This completes the first part of the proof.
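The construction in the $d \geq n$ case is directly checkable numerically. The sketch below (our own, assuming NumPy) builds $\tilde{W}_{kq} = \sqrt{d_k}\log(P D_0)$ for a random positive column-stochastic $P$ and verifies that the column-wise softmax recovers $P$ exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dk = 4, 16

# A random positive column-stochastic target attention matrix P.
P = rng.random((n, n)) + 0.1
P /= P.sum(axis=0, keepdims=True)

# Construction from the proof: pick any positive diagonal D0 and set
# W_kq = sqrt(dk) * log(P @ D0)  (elementwise log; valid since P > 0).
D0 = np.diag(rng.random(n) + 0.5)
W_kq = np.sqrt(dk) * np.log(P @ D0)

# The column-wise softmax of W_kq / sqrt(dk) should reproduce P, because
# each column of exp(W_kq/sqrt(dk)) = P @ D0 sums to the matching D0 entry.
logits = W_kq / np.sqrt(dk)
Q = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
assert np.allclose(Q, P)
print("max reconstruction error:", np.abs(Q - P).max())
```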
$d < n$ case. Consider the case of $d = 1$ and $n = 2$. Then $X \in \mathbb{R}^{1 \times 2}$ and $W_q, W_k \in \mathbb{R}^{1 \times 1}$. Let $X = [1, 0]$. Then

$$\text{Softmax}\left[\frac{(W_k X)^T (W_q X)}{\sqrt{d_k}}\right] = \text{Softmax}\left[\frac{[1, 0]^T W_k^T W_q [1, 0]}{\sqrt{d_k}}\right] = \text{Softmax}\left[\begin{bmatrix} W_k W_q & 0 \\ 0 & 0 \end{bmatrix}\right].$$

This matrix clearly cannot be used to generate matrices $P$ that have distinct elements in the second column, e.g., $P = \begin{bmatrix} 0.5 & 0.75 \\ 0.5 & 0.25 \end{bmatrix}$.

Proof of Theorem 2. First let us rewrite the MultiHead and FixedMultiHead layers as follows. The MultiHead layer can be rewritten as

$$f_{\mathbf{W}}(X) = W_o \cdot \text{MultiHead}(X) = \sum_{i=1}^{h} W_o^i W_v^i X \cdot \text{Softmax}\left[(W_k^i X)^T (W_q^i X)/\sqrt{d/h}\right],$$

where the $W_o^i$ are $d \times d/h$ matrices and $W_v^i$, $W_k^i$, and $W_q^i$ are $d/h \times d$ matrices. We denote the collection of all parameter matrices as $\mathbf{W}$. Similarly, rewrite the fixed head size attention layer as

$$g_{\mathbf{V}}(X) = V_o \cdot \text{FixedMultiHead}(X) = \sum_{i=1}^{h} V_o^i V_v^i X \cdot \text{Softmax}\left[(V_k^i X)^T (V_q^i X)/\sqrt{d_p}\right],$$

where $V_o^i \in \mathbb{R}^{d \times d_p}$, and $V_v^i, V_k^i, V_q^i \in \mathbb{R}^{d_p \times d}$. Let $\mathbf{V}$ be the collection of all these matrices.

The outline of the proof is a case analysis: we divide the possible values of $\mathbf{W}$ into three categories, and show in each case that there exists an $X$ such that $f_{\mathbf{W}}(X) \neq g_{\mathbf{V}}(X)$. Here are the three cases:

• Case 1: $\sum_{i=1}^{h} W_o^i W_v^i \neq \sum_{i=1}^{h} V_o^i V_v^i$.
• Case 2: $\sum_{i=1}^{h} W_o^i W_v^i = \sum_{i=1}^{h} V_o^i V_v^i$, and there exists $i \in \{1, \ldots, h\}$ such that $U/\sqrt{d_p} - (W_k^i)^T (W_q^i)/\sqrt{d/h}$ is not skew-symmetric.
• Case 3: $\sum_{i=1}^{h} W_o^i W_v^i = \sum_{i=1}^{h} V_o^i V_v^i$, and all $U/\sqrt{d_p} - (W_k^i)^T (W_q^i)/\sqrt{d/h}$ are skew-symmetric.

Case 1. In the first case, we can choose any $v$ such that $\left(\sum_{i=1}^{h} W_o^i W_v^i - \sum_{i=1}^{h} V_o^i V_v^i\right) v \neq 0$. Choose $X = v\mathbf{1}^T = [v\; v\; \ldots\; v]$. Then, note that for any column-stochastic matrix $P$, we have $XP = X$. Therefore,

$$\sum_{i=1}^{h} W_o^i W_v^i X \cdot \text{Softmax}\left[(W_k^i X)^T (W_q^i X)/\sqrt{d/h}\right] - \sum_{i=1}^{h} V_o^i V_v^i X \cdot \text{Softmax}\left[(V_k^i X)^T (V_q^i X)/\sqrt{d_p}\right] = \left(\sum_{i=1}^{h} W_o^i W_v^i - \sum_{i=1}^{h} V_o^i V_v^i\right) v \mathbf{1}^T \neq 0.$$

Case 2. In cases where $\sum_{i=1}^{h} W_o^i W_v^i = \sum_{i=1}^{h} V_o^i V_v^i$, since $\sum_{i=1}^{h} V_o^i V_v^i$ is full rank by assumption and each $W_o^i W_v^i$ is at most rank $d/h$, it follows that all columns in $W_o^i \in \mathbb{R}^{d \times d/h}$ must be linearly independent. Therefore, for any $v \neq 0$, $\{W_o^i W_v^i v,\; i = 1, \ldots, h\}$ is a set of linearly independent vectors, because each $W_o^i W_v^i v$ is a linear combination of the $d/h$ column vectors of $W_o^i$, which are linearly independent of the other column vectors in $W_o^j$, $j \neq i$.

Now consider any $v \in \mathbb{R}^d$, and $X = v e_1^T$, where $e_1 = (1, 0, \ldots, 0) \in \mathbb{R}^n$. Define $\phi(t) = \exp(t)/(\exp(t) + n - 1)$. Since $X^T U X / \sqrt{d_p}$ has $v^T U v / \sqrt{d_p}$ in its top-left entry and zeros elsewhere, we have

$$g_{\mathbf{V}}(X) = \sum_{i=1}^{h} V_o^i V_v^i X \cdot \text{Softmax}\left[X^T U X/\sqrt{d_p}\right] = \left(\sum_{i=1}^{h} W_o^i W_v^i\right)\left[\phi\left(\tfrac{v^T U v}{\sqrt{d_p}}\right) v \;\; \tfrac{v}{n}\; \ldots\; \tfrac{v}{n}\right].$$

Similarly, we can calculate

$$f_{\mathbf{W}}(X) = \sum_{i=1}^{h} W_o^i W_v^i \left[\phi\left(\tfrac{v^T (W_k^i)^T W_q^i v}{\sqrt{d/h}}\right) v \;\; \tfrac{v}{n}\; \ldots\; \tfrac{v}{n}\right].$$

Notice that all the columns of $f_{\mathbf{W}}(X)$ and $g_{\mathbf{V}}(X)$, from the second column to the last, are the same. We now compare the first columns:

$$f_{\mathbf{W}}(X)_{:,1} - g_{\mathbf{V}}(X)_{:,1} = \sum_{i=1}^{h} \left(\phi\left(\tfrac{v^T (W_k^i)^T W_q^i v}{\sqrt{d/h}}\right) - \phi\left(\tfrac{v^T U v}{\sqrt{d_p}}\right)\right) W_o^i W_v^i v.$$

Recall that for any $v \neq 0$, the vectors $W_o^i W_v^i v$ are linearly independent, so $f_{\mathbf{W}}(X)_{:,1} - g_{\mathbf{V}}(X)_{:,1} = 0$ if and only if all the coefficients $\phi\left(\tfrac{v^T (W_k^i)^T W_q^i v}{\sqrt{d/h}}\right) - \phi\left(\tfrac{v^T U v}{\sqrt{d_p}}\right)$ are zero. However, since there exists an $i \in \{1, \ldots, h\}$ such that $U/\sqrt{d_p} - (W_k^i)^T (W_q^i)/\sqrt{d/h}$ is not skew-symmetric (and a matrix $A$ satisfies $v^T A v = 0$ for all $v$ if and only if $A$ is skew-symmetric), we can choose $v$ such that $\tfrac{v^T (W_k^i)^T W_q^i v}{\sqrt{d/h}} \neq \tfrac{v^T U v}{\sqrt{d_p}}$, hence making the corresponding coefficient nonzero and therefore $f_{\mathbf{W}}(X)_{:,1} - g_{\mathbf{V}}(X)_{:,1} \neq 0$.
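A quick numeric sanity check (ours, in NumPy) of the identity used in Case 1 above: $X = v\mathbf{1}^T$ is unchanged by right-multiplication with any column-stochastic matrix, since every output column is a convex combination of identical columns.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 8, 5

v = rng.standard_normal((d, 1))
X = v @ np.ones((1, n))                 # all columns equal to v

P = rng.random((n, n))
P /= P.sum(axis=0, keepdims=True)       # column-stochastic, like a softmax output

assert np.allclose(X @ P, X)
```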
Case 3. Now consider any $X = [v_1\; v_2\; 0\; \ldots\; 0]$, where $v_1$ and $v_2$ will be chosen later. Define $\phi_1(t_1, t_2) = \exp(t_1)/(\exp(t_1) + \exp(t_2) + n - 2)$ and $\phi_2(t_1, t_2) = \exp(t_2)/(\exp(t_1) + \exp(t_2) + n - 2)$. The matrix $X^T U X/\sqrt{d_p}$ has the entries $v_a^T U v_b/\sqrt{d_p}$ ($a, b \in \{1, 2\}$) in its top-left $2 \times 2$ block and zeros elsewhere, so the first column of $g_{\mathbf{V}}(X)$ can be written as

$$g_{\mathbf{V}}(X)_{:,1} = \left(\sum_{i=1}^{h} W_o^i W_v^i\right)\left[\phi_1\left(\tfrac{v_1^T U v_1}{\sqrt{d_p}}, \tfrac{v_2^T U v_1}{\sqrt{d_p}}\right) v_1 + \phi_2\left(\tfrac{v_1^T U v_1}{\sqrt{d_p}}, \tfrac{v_2^T U v_1}{\sqrt{d_p}}\right) v_2\right].$$

Similarly, the first column of $f_{\mathbf{W}}(X)$ is

$$f_{\mathbf{W}}(X)_{:,1} = \sum_{i=1}^{h} W_o^i W_v^i \left[\phi_1\left(\tfrac{v_1^T (W_k^i)^T W_q^i v_1}{\sqrt{d/h}}, \tfrac{v_2^T (W_k^i)^T W_q^i v_1}{\sqrt{d/h}}\right) v_1 + \phi_2\left(\tfrac{v_1^T (W_k^i)^T W_q^i v_1}{\sqrt{d/h}}, \tfrac{v_2^T (W_k^i)^T W_q^i v_1}{\sqrt{d/h}}\right) v_2\right].$$

Since $U/\sqrt{d_p} - (W_k^1)^T (W_q^1)/\sqrt{d/h}$ is skew-symmetric by assumption, we have $v_1^T \left(\tfrac{U}{\sqrt{d_p}} - \tfrac{(W_k^1)^T W_q^1}{\sqrt{d/h}}\right) v_1 = 0$ for all $v_1$. Recall that $U$ is rank $d_p$ by assumption, so $U/\sqrt{d_p} - (W_k^1)^T (W_q^1)/\sqrt{d/h}$ is at least rank $d_p - d/h \geq 1$, and we can choose $v_1$ such that $\left(\tfrac{U}{\sqrt{d_p}} - \tfrac{(W_k^1)^T W_q^1}{\sqrt{d/h}}\right) v_1 \neq 0$.

If both $\tfrac{U}{\sqrt{d_p}} v_1$ and $\tfrac{(W_k^1)^T W_q^1}{\sqrt{d/h}} v_1$ are nonzero, we can always choose $\tilde{v}_2$ such that $\tilde{v}_2^T \tfrac{U}{\sqrt{d_p}} v_1 > 0$ and $\tilde{v}_2^T \tfrac{(W_k^1)^T W_q^1}{\sqrt{d/h}} v_1 < 0$. This means that if we choose $v_2 = \alpha \tilde{v}_2$ and scale $\alpha \to \infty$,

$$\phi_1\left(\tfrac{v_1^T U v_1}{\sqrt{d_p}}, \tfrac{v_2^T U v_1}{\sqrt{d_p}}\right) \to 0, \qquad \phi_2\left(\tfrac{v_1^T U v_1}{\sqrt{d_p}}, \tfrac{v_2^T U v_1}{\sqrt{d_p}}\right) \to 1,$$

$$\phi_1\left(\tfrac{v_1^T (W_k^1)^T W_q^1 v_1}{\sqrt{d/h}}, \tfrac{v_2^T (W_k^1)^T W_q^1 v_1}{\sqrt{d/h}}\right) \to \frac{\exp\left(v_1^T (W_k^1)^T W_q^1 v_1/\sqrt{d/h}\right)}{\exp\left(v_1^T (W_k^1)^T W_q^1 v_1/\sqrt{d/h}\right) + n - 2}, \qquad \phi_2\left(\tfrac{v_1^T (W_k^1)^T W_q^1 v_1}{\sqrt{d/h}}, \tfrac{v_2^T (W_k^1)^T W_q^1 v_1}{\sqrt{d/h}}\right) \to 0.$$

Then, consider the difference $f_{\mathbf{W}}(X)_{:,1} - g_{\mathbf{V}}(X)_{:,1}$. Recall that for any $v$, $W_o^1 W_v^1 v$ is linearly independent of $\{W_o^i W_v^i v,\; i \neq 1\}$. This means that, to show $f_{\mathbf{W}}(X)_{:,1} - g_{\mathbf{V}}(X)_{:,1} \neq 0$, it suffices to show that

$$\left[\phi_1\left(\tfrac{v_1^T (W_k^1)^T W_q^1 v_1}{\sqrt{d/h}}, \tfrac{v_2^T (W_k^1)^T W_q^1 v_1}{\sqrt{d/h}}\right) - \phi_1\left(\tfrac{v_1^T U v_1}{\sqrt{d_p}}, \tfrac{v_2^T U v_1}{\sqrt{d_p}}\right)\right] W_o^1 W_v^1 v_1 + \left[\phi_2\left(\tfrac{v_1^T (W_k^1)^T W_q^1 v_1}{\sqrt{d/h}}, \tfrac{v_2^T (W_k^1)^T W_q^1 v_1}{\sqrt{d/h}}\right) - \phi_2\left(\tfrac{v_1^T U v_1}{\sqrt{d_p}}, \tfrac{v_2^T U v_1}{\sqrt{d_p}}\right)\right] W_o^1 W_v^1 v_2 \neq 0.$$

If we scale $v_2 = \alpha \tilde{v}_2$ with large enough $\alpha$, the second term will dominate the first term, and the first term will never be able to cancel the second one. Thus, by choosing large enough $\alpha > 0$, we can make sure that the sum is nonzero.

Even in the case where one of $\tfrac{U}{\sqrt{d_p}} v_1$ and $\tfrac{(W_k^1)^T W_q^1}{\sqrt{d/h}} v_1$ is zero (say $\tfrac{(W_k^1)^T W_q^1}{\sqrt{d/h}} v_1 = 0$), we can choose $\tilde{v}_2 = \tfrac{U}{\sqrt{d_p}} v_1$ and use a similar scaling argument. By choosing large enough $\alpha > 0$ and $v_2 = \alpha \tilde{v}_2$, one can show that the difference $f_{\mathbf{W}}(X)_{:,1} - g_{\mathbf{V}}(X)_{:,1}$ is nonzero." }, { "heading": "C EXPERIMENTAL SETTINGS", "text": "For our experiments with language modeling (the LM1B dataset), we train 6-layer Transformer models. We use a batch size of 4096 and train for 250k steps. We use a learning rate of 0.1 with a linear warm-up for the first 10k steps, and decay the learning rate with the square root of the number of steps. We train the standard Transformers with the embedding dimension varying from 256 to 512, and fix the width of the feed-forward layer in the Transformer to 1024. In addition, we use weight decay of 0.01 and dropout with probability 0.1 on all layers.
For our experiments with BERT, we follow the same experimental settings as in Devlin et al. (2018); we present the key details here and refer the reader there for a complete description. We train with a batch size of 1024 for 450k steps with inputs of sequence length $n = 128$, followed by 50k steps with inputs of sequence length 512. In contrast, the BERT paper uses a batch size of 512, and performs pre-training for 900k steps with 128-sequence-length inputs and 100k steps with 512-sequence-length inputs. We train using ADAM with a learning rate of 1e-4, and a linear warmup and decay schedule as in BERT. We use 5k warmup steps for the first stage, and a re-warmup of 3k steps for the second stage (You et al., 2019). Again, we use weight decay of 0.01 and dropout with probability 0.1 on all layers.

For the language modeling task, training is performed on 4 TPUv2 chips for a couple of hours. For the BERT models, training is performed on 16 TPUv3 chips in the first stage and 64 TPUv3 chips in the second stage. Pre-training with this configuration takes 2 to 3 days. We did not attempt to find the optimal hyper-parameters for the fixed head size architecture, and use the same hyper-parameters as used for training the standard Transformer." }, { "heading": "D ADDITIONAL EXPERIMENTAL RESULTS", "text": "" } ]
2019
null
SP:8214a2ec3d58d4fef82265c8f99e1cbb830873aa
[ "This paper studies the role of audio in object and action perception, as well as how auditory information can help learning forward and inverse dynamics models. To do this, the authors built a 'tilt-bot', which tilts a box and the object within to collect data (sound & vision) of object interactions. The authors then tested how audio embeddings help object recognition and forward model prediction. ", "This paper presents audio-visual object classification and motion prediction work on a novel dataset of 60 different objects rolling around in a bin tilted to and fro by a robot, with video and 4-channel audio recordings of the object impacts. The data is rather novel, is large enough to do ML (around 17 hours of eventful audio/video) and is to be publicly released. The model architectures are not of theoretical novelty. However, the experiments are somewhat interesting. It was found that the audio contains significant object classification information. The audio was also good for predicting the trajectory of the object. This might not be surprising since the microphones are geometrically arranged and may contain directional information along with information about velocity and/or distance traveled. Overall the experiments are rather thin with only a few experimental results. A more thorough undertaking might be expected for ICLR papers, with more novel theoretical development and more extensive experiments. " ]
Truly intelligent agents need to capture the interplay of all their senses to build a rich physical understanding of their world. In robotics, we have seen tremendous progress in using visual and tactile perception; however we have often ignored a key sense: sound. This is primarily due to lack of data that captures the interplay of action and sound. In this work, we perform the first large-scale study of the interactions between sound and robotic action. To do this, we create the largest available sound-action-vision dataset with 15,000 interactions on 60 objects using our robotic platform Tilt-Bot. By tilting objects and allowing them to crash into the walls of a robotic tray, we collect rich four-channel audio information. Using this data, we explore the synergies between sound and action, and present three key insights. First, sound is indicative of fine-grained object class information, e.g., sound can differentiate a metal screwdriver from a metal wrench. Second, sound also contains information about the causal effects of an action, i.e. given the sound produced, we can predict what action was applied on the object. Finally, object representations derived from audio embeddings are indicative of implicit physical properties. We demonstrate that on previously unseen objects, audio embeddings generated through interactions can predict forward models 24% better than passive visual embeddings.
[]
[ { "authors": [ "Pulkit Agrawal", "Ashvin Nair", "Pieter Abbeel", "Jitendra Malik", "Sergey Levine" ], "title": "Learning to poke by poking: Experiential learning of intuitive physics", "venue": null, "year": 2016 }, { "authors": [ "Brandon Amos", "Laurent Dinh", "Serkan Cabi", "Thomas Rothörl", "Sergio Gómez Colmenarejo", "Alistair Muldal", "Tom Erez", "Yuval Tassa", "Nando de Freitas", "Misha Denil" ], "title": "Learning awareness models", "venue": "arXiv preprint arXiv:1804.06318,", "year": 2018 }, { "authors": [ "Relja Arandjelovic", "Andrew Zisserman" ], "title": "Look, listen and learn", "venue": "CoRR, abs/1705.08168,", "year": 2017 }, { "authors": [ "Yusuf Aytar", "Carl Vondrick", "Antonio Torralba" ], "title": "Soundnet: Learning sound representations from unlabeled video", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Berk Calli", "Aaron Walsman", "Arjun Singh", "Siddhartha Srinivasa", "Pieter Abbeel", "Aaron M Dollar" ], "title": "Benchmarking in manipulation research: The ycb object and model set and benchmarking protocols", "venue": "arXiv preprint arXiv:1502.03143,", "year": 2015 }, { "authors": [ "Rich Caruana" ], "title": "Multitask learning", "venue": "Machine learning,", "year": 1997 }, { "authors": [ "Samuel Clarke", "Travers Rhodes", "Christopher G. Atkeson", "Oliver Kroemer" ], "title": "Learning audio feedback for estimating amount and flow of granular material", "venue": "Proceedings of The 2nd Conference on Robot Learning,", "year": 2018 }, { "authors": [ "Akansel Cosgun", "Tucker Hermans", "Victor Emeli", "Mike Stilman" ], "title": "Push planning for object placement on cluttered table surfaces", "venue": "In 2011 IEEE/RSJ international conference on intelligent robots and systems,", "year": 2011 }, { "authors": [ "Ingrid Daubechies" ], "title": "The wavelet transform, time-frequency localization and signal analysis", "venue": "IEEE transactions on information theory,", "year": 1990 }, { "authors": [ "Marc Deisenroth", "Carl E Rasmussen" ], "title": "Pilco: A model-based and data-efficient approach to policy search", "venue": "In Proceedings of the 28th International Conference on machine learning", "year": 2011 }, { "authors": [ "Mehmet R Dogar", "Siddhartha S Srinivasa" ], "title": "A planning framework for non-prehensile manipulation under clutter and uncertainty", "venue": "Autonomous Robots,", "year": 2012 }, { "authors": [ "Chelsea Finn", "Sergey Levine" ], "title": "Deep visual foresight for planning robot motion", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2017 }, { "authors": [ "Dhiraj Gandhi", "Lerrel Pinto", "Abhinav Gupta" ], "title": "Learning to fly by crashing", "venue": "CoRR, abs/1704.05588,", "year": 2017 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "arXiv preprint arXiv:1811.04551,", "year": 2018 }, { "authors": [ "Mikael Henaff", "William F Whitney", "Yann LeCun" ], "title": "Model-based planning with discrete and continuous actions", "venue": "arXiv preprint arXiv:1705.07177,", "year": 2017 }, { "authors": [ "Abhishek Kar", "Shubham Tulsiani", "João Carreira", "Jitendra Malik" ], "title": "Category-specific object reconstruction from a single image", "venue": "In Computer Vision and Pattern Regognition (CVPR),", "year": 2015 }, { "authors": [ "Oussama Khatib" ], "title": "A unified approach for 
motion and force control of robot manipulators: The operational space formulation", "venue": "IEEE J. Robotics and Automation,", "year": 1987 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In NIPS,", "year": 2012 }, { "authors": [ "Sergey Levine", "Chelsea Finn", "Trevor Darrell", "Pieter Abbeel" ], "title": "End-to-end training of deep visuomotor policies", "venue": null, "year": 2016 }, { "authors": [ "Sergey Levine", "Peter Pastor", "Alex Krizhevsky", "Deirdre Quillen" ], "title": "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection", "venue": "ISER,", "year": 2016 }, { "authors": [ "Adithyavairavan Murali", "Yin Li", "Dhiraj Gandhi", "Abhinav Gupta" ], "title": "Learning to grasp without seeing", "venue": "CoRR, abs/1805.04201,", "year": 2018 }, { "authors": [ "Richard M Murray" ], "title": "A mathematical introduction to robotic manipulation", "venue": "CRC press,", "year": 2017 }, { "authors": [ "Andrew Owens", "Phillip Isola", "Josh McDermott", "Antonio Torralba", "Edward H Adelson", "William T Freeman" ], "title": "Visually indicated sounds", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Lerrel Pinto", "Abhinav Gupta" ], "title": "Learning to push by grasping: Using multiple tasks for effective learning", "venue": "arXiv preprint arXiv:1609.09025,", "year": 2016 }, { "authors": [ "Lerrel Pinto", "Abhinav Gupta" ], "title": "Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot", "venue": "hours. ICRA,", "year": 2016 }, { "authors": [ "Lerrel Pinto", "Dhiraj Gandhi", "Yuanfeng Han", "Yong-Lae Park", "Abhinav Gupta" ], "title": "The curious robot: Learning visual representations via physical interactions", "venue": null, "year": 2016 }, { "authors": [ "Alexander Schneider", "Jürgen Sturm", "Cyrill Stachniss", "Marco Reisert", "Hans Burkhardt", "Wolfram Burgard" ], "title": "Object identification with tactile sensors using bag-of-features", "venue": "In IROS,", "year": 2009 }, { "authors": [ "Arda Senocak", "Tae-Hyun Oh", "Jun-Sik Kim", "Ming-Hsuan Yang", "In So Kweon" ], "title": "Learning to localize sound source in visual scenes", "venue": "CoRR, abs/1803.03849,", "year": 2018 }, { "authors": [ "Daniel M Wolpert", "Mitsuo Kawato" ], "title": "Multiple paired forward and inverse models for motor control", "venue": "Neural networks,", "year": 1998 }, { "authors": [ "Yu Xiang", "Alexandre Alahi", "Silvio Savarese" ], "title": "Learning to track: Online multi-object tracking by decision making", "venue": "In International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Kuan-Ting Yu", "Maria Bauza", "Nima Fazeli", "Alberto Rodriguez" ], "title": "More than a million ways to be pushed. 
a high-fidelity experimental dataset of planar pushing", "venue": "In 2016 IEEE/RSJ international conference on intelligent robots and systems (IROS),", "year": 2016 }, { "authors": [ "Hang Zhao", "Chuang Gan", "Andrew Rouditchenko", "Carl Vondrick", "Josh McDermott", "Antonio Torralba" ], "title": "The sound of pixels", "venue": "In The European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Wenxuan Zhou", "Lerrel Pinto", "Abhinav Gupta" ], "title": "Environment probing interaction policies", "venue": null, "year": 2019 }, { "authors": [ "Zoran Zivkovic" ], "title": "Improved adaptive gaussian mixture model for background subtraction", "venue": "In Proceedings of the 17th International Conference on Pattern Recognition,", "year": 2004 } ]
[ { "heading": "1 INTRODUCTION", "text": "Imagine the opening of a champagne bottle! Most vivid imaginations not only capture the celebratory visuals but also the distinctive ‘pop’ sound created by the act. Our world is rich and feeds all of our five senses – vision, touch, smell, sound and taste. Of these, the sense of vision, touch and sound play a critical role in our rich physical understanding of objects and actions. A truly intelligent agent would need to capture the interplay of all the three senses to build a physical understanding of the world. In robotics, where the goal is to perform physical task, vision has always played a central role. Vision is used to infer the geometric shape (Kar et al. (2015)), track objects (Xiang et al. (2015)), infer object categories (Krizhevsky et al. (2012)) and even direct control (Levine et al. (2016a)). In recent years, the sense of touch has also received increasing attention for recognition (Schneider et al. (2009)) and feedback control (Murali et al. (2018)). But what about sound? From the squeak of a door, to the rustle of a dried leaf, sound captures rich object information that is often imperceptible through visual or force data. Microphones (sound sensors) are also inexpensive and robust; yet we haven’t seen sound data transform robot learning. There hardly exists any systems, algorithms or datasets that exploit sound as a vehicle to build physical understanding. Why is that? Why does sound appear to be second-class citizen among perceptual faculties?\nThe key reason lies at the heart of sound generation. Sound generated through an interaction, say a robot striking an object, depends on the impact of the strike, the structure of the object, and even the location of the microphone. This intricate interplay that generates rich data, also makes it difficult to extract information that is useful for robotics. Although recent work has used sound to determine the amount of granular material in a container (Clarke et al. (2018)), we believe there lies much more information in the sound of interactions. But what sort of information can be extracted from this sound?\nIn this paper, we explore the synergy between sound and action to gain insight into what sound can be used for. To begin this exploration we will first need a large and diverse dataset that contains both sound and action data. However, most existing sound datasets do not contain information about action, while most action datasets do not contain information about sound. To solve this, we create\nthe largest sound-action-vision dataset available with 15,000 interactions on over 60 objects with our Tilt-Bot robot Figure 1. Each object is placed in a tray mounted on a robot arm that is tilted with a random action until the object hits the walls of the tray and make a sound. This setup allows us to robustly collect sound and action data over a diverse set of objects. But how is this data useful? Through Tilt-Bot’s data, we present three key insights about the role of sound in action.\nThe first insight is that sound is indicative of fine-grained object information. This implies that just from the sound an object makes, a learned model can identify the object with 79.2% accuracy from set of diverse 60 objects, which includes 30 YCB objects (Calli et al. (2015)). Our second insight is that sound is indicative of action. This implies that just from hearing the sound of an object, a learned model can predict what action was applied to the object. 
On a set of 30 previously unseen objects, we achieve a 0.027 MSE error, which is 42% better than learning from only visual inputs. Our final insight is that sound is indicative of the physical properties of objects. This implies that just from hearing the sound an object makes, a learned model can infer the implicit physical properties of the object. To test this implicit physics, we show that a learned audio-conditioned forward model achieves an L1 error of 0.193 on previously unseen objects, which is 24% lower than forward models trained using visual information. This further indicates that audio embeddings, generated from a previous interaction, can capture information about the physics of an object significantly better than visual embeddings. One could envision using these features to learn policies that first interact to create sound and then use the inferred audio embeddings to perform actions (Zhou et al. (2019)).

In summary, we present three key contributions in this paper: (a) we create the largest sound-action-vision robotics dataset; (b) we demonstrate that we can perform fine-grained object recognition using only sound; and (c) we show that sound is indicative of action, both for post-interaction prediction and pre-interaction forward modeling. Tilt-Bot's sound-action-vision data, along with audio embeddings, can be accessed here: https://sites.google.com/view/iclr2020-sound-action." }, { "heading": "2 THE TILT-BOT SOUND DATASET", "text": "To study the relationship between sound and actions, we first need to create a dataset with sound and action. In this section, we describe our data collection setup and other design decisions.

The Tilt-Bot Setup: A framework to collect large-scale data needs four key abilities: (a) to precisely control the actions; (b) to interact with a diverse set of objects; (c) to record rich and diverse sound; and (d) to require little to no manual resets. To do this, we present Tilt-Bot (Figure 1). Tilt-Bot is a robotic tray mounted on a Sawyer robot's end-effector. This allows us to precisely control the movements of the tray by applying rotational and translational actions on objects inside it. The tray has dimensions of 30 × 30 cm and a payload of 1 kg, allowing us to place a large range of everyday objects in it. To collect audio information, four contact microphones (mic) are attached to the four sides of the tray. This allows for the creation of rich audio information from the interactions of objects with each other and with the tray. To collect visual information, an Intel RealSense camera (cam) is mounted on top of the tray to give RGB and depth information about the object in the tray. Our current setup allows us to collect four-channel audio at 44,100 Hz, RGB and depth at 6 Hz, and tray state information (rotation and translation) at 100 Hz. Rotational and translational action commands can be sent at 100 Hz.
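For concreteness, one recorded interaction from this setup could be organized as below. This is a hypothetical container of our own devising; the field names and array shapes are not from the paper's released data, only the sampling rates above are.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TiltBotInteraction:
    audio: np.ndarray       # (4, T_a) four contact-mic channels at 44,100 Hz
    rgb: np.ndarray         # (T_v, H, W, 3) top-down RGB frames at 6 Hz
    depth: np.ndarray       # (T_v, H, W) aligned depth frames at 6 Hz
    tray_state: np.ndarray  # (T_s, 6) tray rotation + translation at 100 Hz
    action: np.ndarray      # (2,) planar projection of the applied rotation
```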
Data Collection Procedure: Our dataset consists of sound-action-vision data on 60 objects; 30 belong to the YCB object dataset (Calli et al. (2015)), and 30 are common household objects. Details of these objects can be found in Appendix A. For each object, data is collected by first placing it in the center of the tray. Then, Tilt-Bot applies randomly generated rotational actions to the object for 1 hour. We do not apply translational actions since we notice minimal motion of the object with them. The rotational actions cause the tray to tilt and make the object slide and hit the walls of the tray. The sound from the four microphones, along with the visual data, is continually recorded. Furthermore, using a simple background subtraction technique (Zivkovic (2004)), we can track the location of the object as it collides with the walls of the tray. For every contact made with the tray's wall, which is detected by peaks in the audio stream, we segment a four-second interaction centered around this contact. This amounts to around 15,000 interactions over 60 robotic hours of data collection. Each of these interactions contains the sound, the RGB+depth frames, and the tracked location of the object during the interaction. Examples of the data can be seen in Figure 2. All of our data and pre-processing will be open-sourced, and can be accessed on our website: https://sites.google.com/view/iclr2020-sound-action." }, { "heading": "3 LEARNING WITH AUDIO", "text": "To understand and study the synergies between sound and action, we focus on three broad categories of learning tasks: (a) fine-grained classification (or instance recognition), (b) inverse-model learning (or action regression), and (c) forward-model learning. In this section, we describe our experiments along with insights to better understand the role of sound with action in the context of learning." }, { "heading": "3.1 PROCESSING AUDIO DATA", "text": "Before using audio data for learning, we first need to convert it into a canonical form. Since we will use audio in conjunction with images for several experiments, we build on the representation proposed by Zhao et al. (2018). Here the key idea is to convert the high-dimensional raw audio (705,600 values for a 4-second recording at 44.1 kHz across 4 audio channels) into a smaller, image-like representation. This is done by first subsampling each audio channel from 44.1 kHz to 11 kHz. Then, a short-time Fourier transform (STFT) (Daubechies (1990)) with an FFT window size of 510 and a hop length of 128 is applied to the subsampled and clipped audio data. For each channel this results in a 64 × 64 representation. Stacking the 4 channels, we get a 64 × 64 × 4 representation. We further apply a log transformation and clip the representation to [−5, 5]. This representation allows us to treat audio as an image and effectively run 2D convolutions on audio data, which can capture the temporal correlations within a single audio channel along with the correlations between multiple channels. A visualization of this representation can be seen in Figure 2, where the first three channels of audio data (64 × 64 × 3) are converted to an RGB image. A sketch of this pipeline follows." },
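A minimal sketch of the Section 3.1 pipeline, assuming SciPy. The resampling and any resizing details are our guesses, so the exact output grid may differ from the 64 × 64 per channel reported above.

```python
import numpy as np
from scipy import signal

def audio_to_image(audio_4ch, n_fft=510, hop=128):
    """Convert a (4, T) clip at 44.1 kHz into a stacked log-spectrogram image."""
    channels = []
    for ch in audio_4ch:
        ch = signal.resample_poly(ch, up=1, down=4)       # 44.1 kHz -> ~11 kHz
        _, _, Z = signal.stft(ch, fs=44100 // 4,
                              nperseg=n_fft, noverlap=n_fft - hop)
        spec = np.log(np.abs(Z) + 1e-8)                   # log-magnitude
        channels.append(np.clip(spec, -5.0, 5.0))         # clip to [-5, 5]
    # NOTE: n_fft=510 yields 256 frequency bins; the paper's 64 x 64 grid per
    # channel also depends on clip length and resizing steps not described here.
    return np.stack(channels, axis=-1)                    # (freq, time, 4)

image = audio_to_image(np.random.randn(4, 4 * 44100))     # a 4-second clip
print(image.shape)
```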
{ "heading": "3.2 FINE-GRAINED OBJECT CLASSIFICATION", "text": "Classically, the goal of recognition is to identify which object is being perceived. This task is generally done using visual images as input, and is used to test the robustness of visual feature extractors. In our case, we use this task to study what type of object-centric information is contained in sound. For the 60 objects in our Tilt-Bot dataset we first create a training set with 80% of the data and a testing set with the remaining 20%. Then, we train a simple CNN (Krizhevsky et al. (2012)) that takes only the audio information as input and outputs the instance label of the object that generated the sound. This architecture is similar to the top part of Figure 3(a).

On our held-out testing set, this trained model achieves a classification accuracy of 76.1%. Note that a random classifier achieves 1.67% accuracy. This shows that audio data contains fine-grained information about objects. Although Owens et al. (2016) demonstrate that audio information can be used to classify broad categories like wood, metal, etc., our results show for the first time (to our knowledge) that audio information generated through action gives instance-level information like screwdriver, scissors, or tennis ball. To further understand what information sound gives us, we study the top classification errors of our model. In Figure 4 we see that there are two main modes of error. The first occurs when instances differ only visually. For example, a green cube cannot be distinguished from a blue cube solely from sound information. The second error mode occurs when the generated sound is too soft. If the action causes the object to move only a little and not make much sound, information about the object is masked away, causing classification errors." }, { "heading": "3.3 INVERSE-MODEL LEARNING", "text": "The goal of learning inverse models is to identify what action was applied, given observations before and after the action. From a biological perspective, learning inverse models implies an understanding of cause and effect, and is often necessary for efficient motor learning (Wolpert & Kawato (1998)). In the general setting of this problem, a model takes as input the observations before and after an interaction, and outputs the action applied during the interaction. In our case, we want to study whether sound contains cause-effect information about actions. Moreover, since inverse-model learning can be evaluated on previously unseen objects, we can test the generalization of audio features not only on objects seen in training, but on novel objects as well.

To demonstrate this, we split our Tilt-Bot objects into two sets, set A and set B, where each set contains 30 objects with 15 objects from the YCB dataset. Using an architecture similar to the bottom part of Figure 3(a) (a sketch is given at the end of this section), an inverse model is trained on set A to regress the action. The input to this inverse model is an image of the object before the interaction, and the sound generated during the interaction. Note that the image of the object after the interaction is not given as input. The action to be output is the 2D projection of the rotation vector onto the planar tray surface. We evaluate the performance of the inverse model using normalized ([−1, 1]) mean squared error (MSE), where lower is better. Testing this model on held-out set A objects, we get an MSE of 0.008, while a random model gives an MSE of 0.4. If we use the image of the object after the interaction as input instead of audio, we get an MSE of 0.043. This shows that for these physical interactions, using audio information is not just better than random, but in fact better than using visual observations. This insight holds true even when tested on previously unseen set B objects. With set B testing, audio inverse models give an MSE of 0.027, which indicates some amount of overfitting on set A objects. However, this is significantly better than the 0.047 MSE we get from purely visual inverse models. Sample evaluations of our inverse model can be seen in Figure 5." },
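A minimal PyTorch sketch of the inverse model just described. The text specifies the inputs (pre-interaction image and interaction audio), the 2D normalized action output, and the MSE objective; the layer sizes and encoder structure here are our own illustrative choices.

```python
import torch
import torch.nn as nn

class InverseModel(nn.Module):
    def __init__(self):
        super().__init__()
        def encoder(in_ch):  # tiny conv encoder producing a 128-d feature
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 128))
        self.audio_enc = encoder(4)    # 64x64x4 spectrogram "image"
        self.image_enc = encoder(3)    # RGB image before the interaction
        self.head = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, audio, image_before):
        z = torch.cat([self.audio_enc(audio), self.image_enc(image_before)], dim=1)
        return torch.tanh(self.head(z))    # 2-D action, normalized to [-1, 1]

model = InverseModel()
pred = model(torch.randn(8, 4, 64, 64), torch.randn(8, 3, 64, 64))
loss = nn.MSELoss()(pred, torch.rand(8, 2) * 2 - 1)   # normalized MSE objective
```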
{ "heading": "3.4 MULTI-TASK AUDIO EMBEDDING LEARNING", "text": "In the previous two sections we have seen how sound contains information about both fine-grained object instances and the causal effects of action. But what is the right loss function to train an audio embedding that generalizes to multiple downstream tasks? One way would be to train the embedding on the instance recognition task on Tilt-Bot data, while another option would be to train it on the inverse-model task. These two tasks encode different forms of information, with classification encoding identifiable properties of the object and the inverse model encoding physical properties of the object. Inspired by work in multi-task learning (Caruana (1997); Pinto & Gupta (2016a)), we take the best of both worlds and train a joint embedding that simultaneously encodes both classification and action information.

As seen in Figure 3(a), the audio embedding $e_A$ is trained jointly using the classification and inverse-model losses according to $L_{total} = (1 - \lambda) L_{class} + \lambda L_{inv}$ (see the sketch below). Note that when $\lambda = 0$, the embedding captures only classification information, while $\lambda = 1$ captures only inverse-model information. We report the performance of joint learning on held-out data in Table 1. Here, training is performed on set A objects, while testing is done on set A held-out interactions and unseen set B objects. For classification, we find that joint learning improves performance from 73.8% on the 30 set A objects to 78.6%. When trained on both set A and set B objects, classification performance improves from 76.1% (Section 3.2) to 79.5%. For inverse-model learning, we notice that joint learning does not improve performance on set A. However, on novel set B objects, we see a significant improvement from 0.027 MSE to 0.020 MSE. Again, this performance is also much better than learning directly from visual inverse models at 0.043 MSE.

Another way to understand the information captured in our audio embeddings is to look at the top three nearest instance categories given an input object. In Figure 6 we show a few of these object retrievals. Interestingly, these features capture object shapes, matching the long screwdriver to the long butterknife and matching the yellow cube to other colored cubes." },
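The joint objective above can be sketched as follows. The encoder and heads are stub networks standing in for the blocks of Figure 3(a), and for brevity the inverse head here omits the pre-interaction image input that the full model uses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder networks; the real architectures are not fully specified in the text.
audio_encoder = nn.Sequential(nn.Flatten(), nn.Linear(4 * 64 * 64, 128))
class_head = nn.Linear(128, 30)      # 30 set-A instance labels
inv_head = nn.Linear(128, 2)         # 2-D action regression

def multitask_loss(audio, labels, actions, lam=0.5):
    # L_total = (1 - lambda) * L_class + lambda * L_inv, as in Section 3.4.
    e_a = audio_encoder(audio)                       # shared audio embedding
    l_class = F.cross_entropy(class_head(e_a), labels)
    l_inv = F.mse_loss(inv_head(e_a), actions)
    return (1 - lam) * l_class + lam * l_inv

loss = multitask_loss(torch.randn(8, 4, 64, 64),
                      torch.randint(0, 30, (8,)),
                      torch.rand(8, 2) * 2 - 1)
```

Setting `lam=0` or `lam=1` recovers the single-task classification or inverse-model objectives described in the text.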
{ "heading": "3.5 DOWNSTREAM TASK: FORWARD MODEL PREDICTION", "text": "Our previous experiments demonstrate the importance of using audio perception. In this section, we investigate whether we can use sound to extract the physical properties of an object before physically interacting with it. This use case is inspired by recent work on EPI (Zhou et al. (2019)), where probing interactions are used to understand latent factors before executing the real policy. Here, the sound generated through probing interactions serves as latent parameters representing the object.

To evaluate the use of audio features for downstream tasks, we perform forward prediction (see Figure 3(b) and the sketch below). Here, given an object, a random interaction is performed on it and a sound is generated from this interaction. The embedding network trained using multi-task learning is then used to extract the audio embedding, which serves as the object's representation. Given this representation, we can train a forward model that takes as additional input the location of the object and the action applied to it, and outputs the location of the object after the interaction. To learn this forward model, the network has to understand the dynamics of the object. Note that the only object-specific information is given through the audio embedding.

As seen in Table 2, we report significant improvements in forward model prediction, from 0.258 L1 error using visual features to 0.220 L1 error when using the audio embedding feature, on objects seen during forward model training. This trend continues for novel set B objects, where both the embedding and the forward model were trained on set A objects. Here we see an even larger improvement, with visual features giving 0.256 L1 error while audio features give 0.193 L1 error. This shows that audio embedding information is better able to capture the implicit physics of an object than visual features. Moreover, these features are significantly more robust than visual features and generalize to previously unseen objects." },
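A sketch of the audio-conditioned forward model: the audio embedding from a past probing interaction, the current object location, and the action are concatenated and mapped to the predicted next location. Layer sizes are illustrative; the evaluation uses L1 error, as in Table 2.

```python
import torch
import torch.nn as nn

forward_model = nn.Sequential(
    nn.Linear(128 + 2 + 2, 128), nn.ReLU(),   # input: [e_A | location | action]
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 2))                          # predicted post-interaction location

e_a = torch.randn(8, 128)                      # embedding from a probing interaction
loc = torch.rand(8, 2)                         # tracked location before the action
act = torch.rand(8, 2) * 2 - 1                 # normalized planar rotation action
pred = forward_model(torch.cat([e_a, loc, act], dim=1))
l1 = nn.L1Loss()(pred, torch.rand(8, 2))       # L1 error against the next location
```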
{ "heading": "4 RELATED WORK", "text": "" }, { "heading": "4.1 MULTI-MODAL LEARNING WITH SOUND", "text": "Recently, there has been growing interest in using sound in conjunction with vision, either to generate sound for mute videos, to localize the part of an image that produces sound, or to learn better visual and audio features. Owens et al. (2015) collected hundreds of videos of people hitting, scratching, and prodding objects with a drumstick. This data was then used to train a recurrent neural network that synthesizes sound for silent videos. We also collect audio of interactions between objects and a tray; however, instead of relying on humans, which is a huge bottleneck for data collection, we use a robotic platform to collect data. Aytar et al. (2016) use the natural synchronization between vision and sound to learn an acoustic representation from two million unlabelled videos. Similarly, Arandjelovic & Zisserman (2017a) look at raw unconstrained videos to learn visual and audio representations that perform on par with state-of-the-art self-supervised approaches. In a similar spirit, we also learn audio representations, albeit through action, to be used for downstream tasks. Arandjelovic & Zisserman (2017b) and Senocak et al. (2018) further explore the audio-visual correspondence in videos to localize the object that sounds in an image, given the audio signal. Zhao et al. (2018) take this idea one step further: given the audio signal, they separate it into a set of components that represents the sound from each pixel. In contrast to these works, we look at obtaining a richer representation of sound by studying its interactions with action." }, { "heading": "4.2 LEARNING FORWARD MODELS", "text": "Standard model-based methods, based on estimated physical properties and known laws of physics, try to calculate a sequence of control actions to achieve the goal (Khatib (1987); Murray (2017); Dogar & Srinivasa (2012)). Although this approach has been widely used for object manipulation tasks in robotics (Cosgun et al. (2011)), manipulating an unknown object is still a daunting task for such methods. This is mainly due to the difficulties in estimating and modeling the novel physical world (Yu et al. (2016)). Given the challenges in predicting the physical properties of a novel environment, Deisenroth & Rasmussen (2011); Gal et al.; Amos et al. (2018); Henaff et al. (2017) try to learn dynamics models based on interactions with the environment. However, when these learned models are used on previously unseen objects, they also fail to generalize, because they often do not contain object-specific information. One way to get object-specific information is to use raw visual observations instead of object state (Agrawal et al. (2016); Hafner et al. (2018); Finn & Levine (2017)). In these methods, given the observation of a scene and an action taken by the agent, a visual forward model predicts the future visual observation. These forward models can then be used to plan robotic motions. In our work, we show that instead of using visual information, audio embeddings generated from a previous interaction can be used to improve these forward models." }, { "heading": "4.3 MULTI-MODAL DATASETS", "text": "Alongside algorithmic developments, large-scale datasets have enabled the application of machine learning to numerous robotic tasks. Several works, such as Pinto & Gupta (2016b); Levine et al. (2016b); Agrawal et al. (2016); Gandhi et al. (2017), collect large-scale visual robotic data for learning manipulation and navigation skills. Apart from visual data, some works (Murali et al. (2018); Pinto et al. (2016)) have also looked at collecting large-scale tactile data. This tactile or force data can then be used to recover object properties like softness or roughness. Although these datasets contain visual information and action data, they ignore a key sensory modality: sound.

Understanding what information can be obtained from sound requires a large-scale sound dataset. Early attempts (Owens et al. (2015)) collect sound data by recording people interacting with objects. Although this dataset contains large amounts of sound data, it does not contain information about action. In our work, we show that action information not only helps regularize object classification, but also helps in understanding the implicit physics of objects. Prior to our work, Clarke et al. (2018) showed that sound information is indeed helpful for state-estimation tasks like measuring the amount of granular material in a container. There, they exploit the mechanical vibrations of granular material and the structure around it for accurate estimation. In our work, instead of a single type of object, we collect audio data across 60 different objects. This allows us to learn generalizable audio features that transfer to previously unseen objects on a variety of tasks like action regression and forward-model learning." }, { "heading": "5 CONCLUSION", "text": "In this work, we perform one of the first studies of the interactions between sound and action. Through our sound-action-vision dataset collected using our Tilt-Bot robot, we present several insights into what information can be extracted from sound. From fine-grained object recognition to inverse-model learning, we demonstrate that sound can provide valuable information that can be used in downstream motor-control or robotic tasks. In some domains, like forward model learning, we show that sound in fact provides more information than can be obtained from visual information alone. We hope that the Tilt-Bot dataset, along with our findings, inspires future work in the sound-action domain, especially in robotic settings where visual data is hard to obtain." }, { "heading": "APPENDIX A DATASET DETAILS", "text": "Figure 8 shows images of the objects for which data was collected using Tilt-Bot. Moreover, Table 3 and Table 4 show the number of interactions collected for each object in set A (seen objects) and set B (novel objects), respectively." } ]
2019
null
SP:fa64a906a5800aa62bed00e9c2b29b9fcffd0412
[ "In this paper the authors present a new generative model for audio in the frequency domain to capture better the global structure of the signal. For this, they use an autoregressive procedure combined with a multiscale generative model for two-dimensional time-frequency visual representation (STFT spectrogram). The proposed method is tested across a diverse set of audio generation tasks", "The authors introduce MelNet, an autoregressive model of Mel-frequency scaled spectrograms. They convert audio into high resolution spectrograms to reduce the audio artifacts introduced by inverting spectrograms (here they use gradient-based inversion over Griffin-Lim). To improve modeling of long term dependencies, they perform multi-scale splitting of the spectrograms and maximize the likelihood at each scale (avoiding dominance of noise at higher resolutions). They condition generation at finer scales from coarser scales, enabling sampling through an ancestral process. The authors also highlight the difference between temporal and frequency dimensions, creating different conditioning stacks for the past in time vs. the \"past\" in frequency (lower frequencies), and mixing conditioning between the two stacks through layers of the network. Multilayer RNNs are used throughout the network and external conditioning is incorporated at the input. " ]
Capturing high-level structure in audio waveforms is challenging because a single second of audio spans tens of thousands of timesteps. While long-range dependencies are difficult to model directly in the time domain, we show that they can be more tractably modelled in two-dimensional time-frequency representations such as spectrograms. By leveraging this representational advantage, in conjunction with a highly expressive probabilistic model and a multiscale generation procedure, we design a model capable of generating high-fidelity audio samples which capture structure at timescales which time-domain models have yet to achieve. We demonstrate that our model captures longer-range dependencies than time-domain models such as WaveNet across a diverse set of unconditional generation tasks, including single-speaker speech generation, multi-speaker speech generation, and music generation.
[]
[ { "authors": [ "Sercan Ö Arık", "Heewoo Jun", "Gregory Diamos" ], "title": "Fast spectrogram inversion using multi-head convolutional neural networks", "venue": "IEEE Signal Processing Letters,", "year": 2019 }, { "authors": [ "Christopher M Bishop" ], "title": "Mixture density networks", "venue": "Technical report, Citeseer,", "year": 1994 }, { "authors": [ "Xi Chen", "Nikhil Mishra", "Mostafa Rohaninejad", "Pieter Abbeel" ], "title": "Pixelsnail: An improved autoregressive generative model", "venue": "arXiv preprint arXiv:1712.09763,", "year": 2017 }, { "authors": [ "Rewon Child", "Scott Gray", "Alec Radford", "Ilya Sutskever" ], "title": "Generating long sequences with sparse transformers", "venue": "arXiv preprint arXiv:1904.10509,", "year": 2019 }, { "authors": [ "Joon Son Chung", "Arsha Nagrani", "Andrew Zisserman" ], "title": "Voxceleb2: Deep speaker recognition", "venue": "arXiv preprint arXiv:1806.05622,", "year": 2018 }, { "authors": [ "Ryan Dahl", "Mohammad Norouzi", "Jonathon Shlens" ], "title": "Pixel recursive super resolution", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Rémi Decorsière", "Peter L Søndergaard", "Ewen N MacDonald", "Torsten Dau" ], "title": "Inversion of auditory spectrograms, traditional spectrograms, and other envelope representations", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2015 }, { "authors": [ "Sander Dieleman", "Aaron van den Oord", "Karen Simonyan" ], "title": "The challenge of realistic music generation: modelling raw audio at scale", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Chris Donahue", "Julian McAuley", "Miller Puckette" ], "title": "Synthesizing audio with generative adversarial networks", "venue": "arXiv preprint arXiv:1802.04208,", "year": 2018 }, { "authors": [ "Jesse Engel", "Kumar Krishna Agrawal", "Shuo Chen", "Ishaan Gulrajani", "Chris Donahue", "Adam Roberts" ], "title": "Gansynth: Adversarial neural audio synthesis", "venue": null, "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Alex Graves" ], "title": "Generating sequences with recurrent neural networks", "venue": "arXiv preprint arXiv:1308.0850,", "year": 2013 }, { "authors": [ "Alex Graves", "Jürgen Schmidhuber" ], "title": "Offline handwriting recognition with multidimensional recurrent neural networks. 
In Advances in neural information processing", "venue": null, "year": 2009 }, { "authors": [ "Alex Graves", "Santiago Fernández", "Jürgen Schmidhuber" ], "title": "Multi-dimensional recurrent neural networks", "venue": "In International conference on artificial neural networks,", "year": 2007 }, { "authors": [ "Daniel Griffin", "Jae Lim" ], "title": "Signal estimation from modified short-time fourier transform", "venue": "IEEE Transactions on Acoustics, Speech, and Signal Processing,", "year": 1984 }, { "authors": [ "Curtis Hawthorne", "Andriy Stasyuk", "Adam Roberts", "Ian Simon", "Cheng-Zhi Anna Huang", "Sander Dieleman", "Erich Elsen", "Jesse Engel", "Douglas Eck" ], "title": "Enabling factorized piano music modeling and generation with the maestro dataset", "venue": "arXiv preprint arXiv:1810.12247,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Cheng-Zhi Anna Huang", "Ashish Vaswani", "Jakob Uszkoreit", "Noam Shazeer", "Curtis Hawthorne", "Andrew M Dai", "Matthew D Hoffman", "Douglas Eck" ], "title": "Music transformer: Generating music with long-term structure", "venue": "arXiv preprint arXiv:1809.04281,", "year": 2018 }, { "authors": [ "Nal Kalchbrenner", "Ivo Danihelka", "Alex Graves" ], "title": "Grid long short-term memory", "venue": "arXiv preprint arXiv:1507.01526,", "year": 2015 }, { "authors": [ "Nal Kalchbrenner", "Erich Elsen", "Karen Simonyan", "Seb Noury", "Norman Casagrande", "Edward Lockhart", "Florian Stimberg", "Aaron van den Oord", "Sander Dieleman", "Koray Kavukcuoglu" ], "title": "Efficient neural audio synthesis", "venue": "arXiv preprint arXiv:1802.08435,", "year": 2018 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "arXiv preprint arXiv:1710.10196,", "year": 2017 }, { "authors": [ "Simon King" ], "title": "The blizzard challenge", "venue": null, "year": 2011 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jinyu Li", "Abdelrahman Mohamed", "Geoffrey Zweig", "Yifan Gong" ], "title": "Exploring multidimensional lstms for large vocabulary asr", "venue": "In Acoustics, Speech and Signal Processing (ICASSP),", "year": 2016 }, { "authors": [ "Soroush Mehri", "Kundan Kumar", "Ishaan Gulrajani", "Rithesh Kumar", "Shubham Jain", "Jose Sotelo", "Aaron Courville", "Yoshua Bengio" ], "title": "Samplernn: An unconditional end-to-end neural audio generation model", "venue": "arXiv preprint arXiv:1612.07837,", "year": 2016 }, { "authors": [ "Jacob Menick", "Nal Kalchbrenner" ], "title": "Generating high fidelity images with subscale pixel networks and multidimensional upscaling", "venue": "arXiv preprint arXiv:1812.01608,", "year": 2018 }, { "authors": [ "Wei Ping", "Kainan Peng", "Andrew Gibiansky", "Sercan O Arik", "Ajay Kannan", "Sharan Narang", 
"Jonathan Raiman", "John Miller" ], "title": "Deep voice 3: Scaling text-to-speech with convolutional sequence learning", "venue": "arXiv preprint arXiv:1710.07654,", "year": 2017 }, { "authors": [ "Ryan Prenger", "Rafael Valle", "Bryan Catanzaro" ], "title": "Waveglow: A flow-based generative network for speech synthesis", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Scott Reed", "Aäron van den Oord", "Nal Kalchbrenner", "Sergio Gómez Colmenarejo", "Ziyu Wang", "Dan Belov", "Nando de Freitas" ], "title": "Parallel multiscale autoregressive density estimation", "venue": "arXiv preprint arXiv:1703.03664,", "year": 2017 }, { "authors": [ "Tara N Sainath", "Bo Li" ], "title": "Modeling time-frequency patterns with lstm vs. convolutional architectures for lvcsr tasks", "venue": "In INTERSPEECH,", "year": 2016 }, { "authors": [ "Tim Salimans", "Andrej Karpathy", "Xi Chen", "Diederik P Kingma" ], "title": "Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications", "venue": "arXiv preprint arXiv:1701.05517,", "year": 2017 }, { "authors": [ "Jonathan Shen", "Ruoming Pang", "Ron J Weiss", "Mike Schuster", "Navdeep Jaitly", "Zongheng Yang", "Zhifeng Chen", "Yu Zhang", "Yuxuan Wang", "Rj Skerrv-Ryan" ], "title": "Natural tts synthesis by conditioning wavenet on mel spectrogram predictions", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "Jose Sotelo", "Soroush Mehri", "Kundan Kumar", "Joao Felipe Santos", "Kyle Kastner", "Aaron Courville", "Yoshua Bengio" ], "title": "Char2wav: End-to-end speech synthesis", "venue": null, "year": 2017 }, { "authors": [ "Yaniv Taigman", "Lior Wolf", "Adam Polyak", "Eliya Nachmani" ], "title": "Voiceloop: Voice fitting and synthesis via a phonological", "venue": null, "year": 2018 }, { "authors": [ "Lucas Theis", "Matthias Bethge" ], "title": "Generative image modeling using spatial lstms", "venue": "In Advances in Neural Information Processing Systems, pp. 
1927–1935,", "year": 2015 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural networks for machine learning,", "year": 2012 }, { "authors": [ "Aaron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": "arXiv preprint arXiv:1609.03499,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Nal Kalchbrenner", "Lasse Espeholt", "Oriol Vinyals", "Alex Graves" ], "title": "Conditional image generation with pixelcnn decoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Nal Kalchbrenner", "Koray Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "arXiv preprint arXiv:1601.06759,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Oriol Vinyals" ], "title": "Neural discrete representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Francesco Visin", "Kyle Kastner", "Kyunghyun Cho", "Matteo Matteucci", "Aaron Courville", "Yoshua Bengio" ], "title": "Renet: A recurrent neural network based alternative to convolutional networks", "venue": "arXiv preprint arXiv:1505.00393,", "year": 2015 }, { "authors": [ "Yuxuan Wang", "RJ Skerry-Ryan", "Daisy Stanton", "Yonghui Wu", "Ron J Weiss", "Navdeep Jaitly", "Zongheng Yang", "Ying Xiao", "Zhifeng Chen", "Samy Bengio" ], "title": "Tacotron: Towards end-to-end speech synthesis", "venue": "arXiv preprint arXiv:1703.10135,", "year": 2017 } ]
[ { "heading": null, "text": "Capturing high-level structure in audio waveforms is challenging because a single second of audio spans tens of thousands of timesteps. While long-range dependencies are difficult to model directly in the time domain, we show that they can be more tractably modelled in two-dimensional time-frequency representations such as spectrograms. By leveraging this representational advantage, in conjunction with a highly expressive probabilistic model and a multiscale generation procedure, we design a model capable of generating high-fidelity audio samples which capture structure at timescales which time-domain models have yet to achieve. We demonstrate that our model captures longer-range dependencies than time-domain models such as WaveNet across a diverse set of unconditional generation tasks, including single-speaker speech generation, multi-speaker speech generation, and music generation." }, { "heading": "1 INTRODUCTION", "text": "Audio waveforms have complex structure at drastically varying timescales, which presents a challenge for generative models. Local structure must be captured to produce high-fidelity audio, while longrange dependencies spanning tens of thousands of timesteps must be captured to generate audio which is globally consistent. Existing generative models of waveforms such as WaveNet (van den Oord et al., 2016a) and SampleRNN (Mehri et al., 2016) are well-adapted to model local dependencies, but as these models typically only backpropagate through a fraction of a second, they are unable to capture high-level structure that emerges on the scale of several seconds.\nWe introduce a generative model for audio which captures longer-range dependencies than existing end-to-end models. We primarily achieve this by modelling 2D time-frequency representations such as spectrograms rather than 1D time-domain waveforms (Figure 1). The temporal axis of a spectrogram is orders of magnitude more compact than that of a waveform, meaning dependencies that span tens of thousands of timesteps in waveforms only span hundreds of timesteps in spectrograms. In practice, this enables our spectrogram models to generate unconditional speech and music samples with consistency over multiple seconds whereas time-domain models must be conditioned on intermediate features to capture structure at similar timescales.\nModelling spectrograms can simplify the task of capturing global structure, but can weaken a model’s ability to capture local characteristics that correlate with audio fidelity. Producing high-fidelity audio has been challenging for existing spectrogram models, which we attribute to the lossy nature of spectrograms and oversmoothing artifacts which result from insufficiently expressive models. To reduce information loss, we model high-resolution spectrograms which have the same dimensionality as their corresponding time-domain signals. To limit oversmoothing, we use a highly expressive autoregressive model which factorizes the distribution over both the time and frequency dimensions.\nModelling both fine-grained details and high-level structure in high-dimensional distributions is known to be challenging for autoregressive models. To capture both local and global structure in spectrograms with hundreds of thousands of dimensions, we employ a multiscale approach which generates spectrograms in a coarse-to-fine manner. 
A low-resolution, subsampled spectrogram that captures high-level structure is generated initially, followed by an iterative upsampling procedure that adds high-resolution details.\nCombining these representational and modelling techniques yields a highly expressive and broadly applicable generative model of audio. Our contributions are as follows:\n• We introduce MelNet, a generative model for spectrograms which couples a fine-grained autoregressive model and a multiscale generation procedure to jointly capture local and global structure.\n• We show that MelNet is able to model longer-range dependencies than existing time-domain models. Additionally, we include an ablation to demonstrate that multiscale modelling is essential for modelling long-range dependencies.\n• We demonstrate that MelNet is broadly applicable to a variety of audio generation tasks, including unconditional speech and music generation. Furthermore, MelNet is able to model highly multimodal data such as multi-speaker and multilingual speech." }, { "heading": "2 PRELIMINARIES", "text": "We briefly present background regarding spectral representations of audio. Audio is represented digitally as a one-dimensional, discrete-time signal y = (y1, . . . , yn). Existing generative models for audio have predominantly focused on modelling these time-domain signals directly. We instead model spectrograms, which are two-dimensional time-frequency representations which contain information about how the frequency content of an audio signal varies through time. Spectrograms are computed by taking the squared magnitude of the short-time Fourier transform (STFT) of a time-domain signal, i.e. x = ‖STFT(y)‖2. The value of xij (referred to as amplitude or energy) corresponds to the squared magnitude of the jth element of the frequency response at timestep i. Each slice xi,∗ is referred to as a frame. We assume a time-major ordering, but following convention, all figures are displayed transposed and with the frequency axis inverted.\nTime-frequency representations such as spectrograms highlight how the tones and pitches within an audio signal vary through time. Such representations are closely aligned with how humans perceive audio. To further align these representations with human perception, we convert the frequency axis to the Mel scale and apply an elementwise logarithmic rescaling of the amplitudes. Roughly speaking, the Mel transformation aligns the frequency axis with human perception of pitch and the logarithmic rescaling aligns the amplitude axis with human perception of loudness.\nSpectrograms are lossy representations of their corresponding time-domain signals. The Mel transformation discards frequency information and the removal of the STFT phase discards temporal information. When recovering a time-domain signal from a spectrogram, this information loss manifests as distortion in the recovered signal. To minimize these artifacts and improve the fidelity of generated audio, we model high-resolution spectrograms. The temporal resolution of a spectrogram can be increased by decreasing the STFT hop size, and the frequency resolution can be increased by increasing the number of Mel channels. Generated spectrograms are converted back to time-domain signals using classical spectrogram inversion algorithms. We experiment with both Griffin-Lim (Griffin & Lim, 1984) and a gradient-based inversion algorithm (Decorsière et al., 2015), and ultimately use the latter as it generally produced audio with fewer artifacts."
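To make the preliminaries concrete, here is a minimal sketch of the log-Mel pipeline described above, written with the librosa library. The example clip, FFT size, hop size and Mel channel count are our own illustrative assumptions, not values taken from this paper.

```python
# Minimal log-Mel spectrogram sketch (assumed parameters, not the paper's).
import numpy as np
import librosa

# Load a bundled librosa example clip as a 1D time-domain signal y.
y, sr = librosa.load(librosa.ex("trumpet"), sr=22050)

# Squared-magnitude STFT, Mel frequency axis, then elementwise log rescaling.
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=1536, hop_length=256, n_mels=256, power=2.0
)
x = np.log(mel + 1e-6)  # small epsilon avoids log(0) in silent frames

# Time-major ordering: axis 0 indexes frames, axis 1 indexes Mel channels.
x = x.T
print(x.shape)
```

Decreasing `hop_length` or increasing `n_mels` raises the temporal or frequency resolution of x, as described in the text.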
}, { "heading": "3 PROBABILISTIC MODEL", "text": "We use an autoregressive model which factorizes the joint distribution over a spectrogram x as a product of conditional distributions. Given an ordering of the dimensions of x, we define the context x<ij as the elements of x that precede xij . We default to a row-major ordering which proceeds through each frame xi,∗ from low to high frequency, before progressing to the next frame. The joint density is factorized as\np(x) = ∏ i ∏ j p(xij | x<ij ; θij), (1)\nwhere θij parameterizes a univariate density over xij . We model each factor distribution as a Gaussian mixture model with K components. Thus, θij consists of 3K parameters corresponding to means {µijk}Kk=1, standard deviations {σijk}Kk=1, and mixture coefficients {πijk}Kk=1. The resulting factor distribution can then be expressed as\np(xij | x<ij ; θij) = K∑ k=1 πijk N (xij ; µijk, σijk). (2)\nFollowing the work on Mixture Density Networks (Bishop, 1994) and their application to autoregressive models (Graves, 2013), θij is modelled as the output of a neural network and computed as a function of the context x<ij . Precisely, for some network f with parameters ψ, we have θij = f(x<ij ; ψ). A maximum-likelihood estimate for the network parameters is computed by minimizing the negative log-likelihood via gradient descent.\nTo ensure that the network output parameterizes a valid Gaussian mixture model, the network first computes unconstrained parameters {µ̂ijk, σ̂ijk, π̂ijk}Kk=1 as a vector θ̂ij ∈ R3K , and enforces constraints on θij by applying the following transformations:\nµijk = µ̂ijk (3) σijk = exp(σ̂ijk) (4) πijk = exp(π̂ijk)∑K k=1 exp(π̂ijk) . (5)\nThese transformations ensure the standard deviations σijk are positive and the mixture coefficients πijk sum to one." }, { "heading": "4 NETWORK ARCHITECTURE", "text": "To model the distribution in an autoregressive manner, we design a network which computes the distribution over xij as a function of the context x<ij . The network architecture draws inspiration\nfrom existing autoregressive models for images (Theis & Bethge, 2015; van den Oord et al., 2016c;b; Chen et al., 2017; Salimans et al., 2017; Parmar et al., 2018; Child et al., 2019). In the same way that these models estimate a distribution pixel-by-pixel over the spatial dimensions of an image, our model estimates a distribution element-by-element over the time and frequency dimensions of a spectrogram. A noteworthy distinction is that spectrograms are not invariant to translation along the frequency axis, making 2D convolution less desirable than other 2D network primitives which do not assume invariance. Utilizing multidimensional recurrence instead of 2D convolution has been shown to be beneficial when modelling spectrograms in discriminative settings (Li et al., 2016; Sainath & Li, 2016), which motivates our use of an entirely recurrent architecture.\nSimilar to Gated PixelCNN (van den Oord et al., 2016b), the network has multiple stacks of computation. These stacks extract features from different segments of the input to collectively summarize the full context x<ij :\n• The time-delayed stack computes features which aggregate information from all previous frames x<i,∗. 
• The frequency-delayed stack utilizes all preceding elements within a frame, xi,<j , as well as the outputs of the time-delayed stack, to summarize the full context x<ij .\nThe stacks are connected at each layer of the network, meaning that the features generated by layer l of the time-delayed stack are used as input to layer l of the frequency-delayed stack. To facilitate the training of deeper networks, both stacks use residual connections (He et al., 2016). The outputs of the final layer of the frequency-delayed stack are used to compute the unconstrained parameters θ̂." }, { "heading": "4.1 TIME-DELAYED STACK", "text": "The time-delayed stack utilizes multiple layers of multidimensional RNNs to extract features from x<i,∗, the two-dimensional region consisting of all frames preceding xij . Each multidimensional RNN is composed of three one-dimensional RNNs: one which runs forwards along the frequency axis, one which runs backwards along the frequency axis, and one which runs forwards along the time axis. Each RNN runs along each slice of a given axis, as shown in Figure 2. The output of each layer of the time-delayed stack is the concatenation of the three RNN hidden states.\nWe denote the function computed at layer l of the time-delayed stack (three RNNs followed by concatenation) as F tl . At each layer, the time-delayed stack uses the feature map from the previous layer, ht[l−1], to compute the subsequent feature map F tl (ht[l−1]), which consists of the three concatenated RNN hidden states. When using residual connections, the computation of ht[l] from ht[l−1] becomes\nhtij [l] = W tl F tl (ht[l − 1])ij + htij [l − 1]. (6)\nTo ensure the output htij [l] is only a function of frames which lie in the context x<ij , the inputs to the time-delayed stack are shifted backwards one step in time: htij [0] = W t0 xi−1,j ." }, { "heading": "4.2 FREQUENCY-DELAYED STACK", "text": "The frequency-delayed stack is a one-dimensional RNN which runs forward along the frequency axis. Much like existing one-dimensional autoregressive models (language models, waveform models, etc.), the frequency-delayed stack operates on a one-dimensional sequence (a single frame) and estimates the distribution for each element conditioned on all preceding elements. The primary difference is that it is also conditioned upon the outputs of the time-delayed stack, allowing it to use the full two-dimensional context x<ij .\nWe denote the function computed by the frequency-delayed stack as Ffl . At each layer, the frequency-delayed stack takes two inputs: the previous-layer outputs of the frequency-delayed stack, hfij [l−1], and the current-layer outputs of the time-delayed stack htij [l]. These inputs are summed and used as input to a one-dimensional RNN to produce the output feature map Ffl (hf [l − 1], ht[l]), which consists of the RNN hidden state:\nhfij [l] = W fl Ffl (hf [l − 1], ht[l])ij + hfij [l − 1]. (7)\nFigure 3: Computation graph for a single layer of the network. F tl and F fl are the functions computed by the time-delayed stack and frequency-delayed stack, respectively. The outputs of these functions are projected (by the matrices W tl and W fl ) and summed with the layer inputs to form residual blocks.\nTo ensure that hfij [l] is computed using only elements in the context x<ij , the inputs to the frequency-delayed stack are shifted backwards one step along the frequency axis: hfij [0] = W f0 xi,j−1.
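Before moving to the output layer, the two stacks and the residual updates of Eqs. (6)-(7) can be summarized in a schematic PyTorch sketch. This is our own reconstruction under simplifying assumptions: it uses GRUs rather than LSTMs and omits the causal input shifts applied at layer 0.

```python
# Schematic sketch of one MelNet layer (Eqs. 6-7); our own reconstruction.
import torch
import torch.nn as nn

class MelNetLayer(nn.Module):
    def __init__(self, h):
        super().__init__()
        # Time-delayed stack: one RNN along time plus a bidirectional RNN
        # along frequency; three hidden states are concatenated in total.
        self.time_rnn = nn.GRU(h, h, batch_first=True)
        self.freq_birnn = nn.GRU(h, h, batch_first=True, bidirectional=True)
        self.W_t = nn.Linear(3 * h, h)
        # Frequency-delayed stack: one RNN running forward along frequency.
        self.freq_rnn = nn.GRU(h, h, batch_first=True)
        self.W_f = nn.Linear(h, h)

    def forward(self, h_t, h_f):
        B, T, F, H = h_t.shape  # (batch, time, frequency, hidden)
        # Run along the time axis for every frequency slice.
        t_out, _ = self.time_rnn(h_t.permute(0, 2, 1, 3).reshape(B * F, T, H))
        t_out = t_out.reshape(B, F, T, H).permute(0, 2, 1, 3)
        # Run forwards and backwards along frequency for every frame.
        f_out, _ = self.freq_birnn(h_t.reshape(B * T, F, H))
        f_out = f_out.reshape(B, T, F, 2 * H)
        h_t = self.W_t(torch.cat([t_out, f_out], dim=-1)) + h_t   # Eq. (6)
        # Frequency-delayed stack sums its input with the time stack output.
        g, _ = self.freq_rnn((h_f + h_t).reshape(B * T, F, H))
        h_f = self.W_f(g.reshape(B, T, F, H)) + h_f               # Eq. (7)
        return h_t, h_f

layer = MelNetLayer(16)
ht, hf = layer(torch.randn(2, 10, 32, 16), torch.randn(2, 10, 32, 16))
```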
At the final layer, layer L, a linear map is applied to the output of the frequency-delayed stack to produce the unconstrained Gaussian mixture model parameters, i.e. θ̂ij = Wθ hfij [L]." }, { "heading": "4.3 CONDITIONING", "text": "To incorporate conditioning information into the model, conditioning features z are simply projected onto the input layer along with the inputs x:\nhtij [0] = W t0 xi−1,j + W tz zij (8)\nhfij [0] = W f0 xi,j−1 + W fz zij . (9)\nReshaping, upsampling, and broadcasting can be used as necessary to ensure the conditioning features have the same time and frequency shape as the input spectrogram, e.g. a one-hot vector representation for speaker ID would first be broadcast along both the time and frequency axes." }, { "heading": "5 MULTISCALE MODELLING", "text": "To improve audio fidelity, we generate high-resolution spectrograms which have the same dimensionality as their corresponding time-domain representations. Under this regime, a single training example has several hundreds of thousands of dimensions. Capturing global structure in such high-dimensional distributions is challenging for autoregressive models, which are biased towards capturing local dependencies. To counteract this, we utilize a multiscale approach which effectively permutes the autoregressive ordering so that a spectrogram is generated in a coarse-to-fine order.\nThe elements of a spectrogram x are partitioned into G tiers x1, . . . , xG, such that each successive tier contains higher-resolution information. We define x<g as the union of all tiers which precede xg , i.e. x<g = (x1, . . . , xg−1). The distribution is factorized over tiers:\np(x; ψ) = ∏g p(xg | x<g; ψg), (10)\nand the distribution of each tier is further factorized element-by-element as described in Section 3. We explicitly include the parameterization by ψ = (ψ1, . . . , ψG) to indicate that each tier is modelled by a separate network." }, { "heading": "5.1 TRAINING", "text": "During training, the tiers are generated by recursively partitioning a spectrogram into alternating rows along either the time or frequency axis. We define a function split which partitions an input into even and odd rows along a given axis. The initial step of the recursion applies the split function to a spectrogram x, or equivalently x<G+1, so that the even-numbered rows are assigned to xG and the odd-numbered rows are assigned to x<G. Subsequent tiers are defined similarly in a recursive manner:\nxg, x<g = split(x<g+1). (11)\nAt each step of the recursion, we model the distribution p(xg | x<g; ψg). The final step of the recursion models the unconditional distribution over the initial tier p(x1; ψ1).\nTo model the conditional distribution p(xg | x<g; ψg), the network at each tier needs a mechanism to incorporate information from the preceding tiers x<g. To this end, we add a feature extraction network which computes features from x<g which are used to condition the generation of xg . We use a multidimensional RNN consisting of four one-dimensional RNNs which run bidirectionally along slices of both axes of the context x<g. A layer of the feature extraction network is similar to a layer of the time-delayed stack, but since the feature extraction network is not causal, we include an RNN which runs backwards along the time axis and do not shift the inputs. The hidden states of the RNNs in the feature extraction network are used to condition the generation of xg .
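The split function of Eq. (11), and the inverse interleave used at sampling time (Section 5.2), are simple enough to sketch directly; the following NumPy illustration is ours, and the alternation of the split axis across tiers is left to the caller.

```python
# NumPy sketch of split (Eq. 11) and its inverse interleave (Eq. 12).
import numpy as np

def split(x, axis=0):
    """Partition x into (even rows, odd rows) along the given axis."""
    even = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)
    odd = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    return even, odd

def interleave(even, odd, axis=0):
    """Inverse of split: re-interleave even and odd rows along the axis."""
    shape = list(even.shape)
    shape[axis] = even.shape[axis] + odd.shape[axis]
    out = np.empty(shape, dtype=even.dtype)
    idx_even = [slice(None)] * out.ndim; idx_even[axis] = slice(0, None, 2)
    idx_odd = [slice(None)] * out.ndim; idx_odd[axis] = slice(1, None, 2)
    out[tuple(idx_even)] = even
    out[tuple(idx_odd)] = odd
    return out

x = np.random.randn(8, 6)        # toy "spectrogram"
x_g, x_lt_g = split(x, axis=0)   # even rows form the tier x_G
assert np.allclose(interleave(x_g, x_lt_g, axis=0), x)  # inverse holds
```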
As each tier doubles the resolution, the features extracted from x<g have the same time and frequency shape as xg , allowing the conditioning mechanism described in Section 4.3 to be used straightforwardly." }, { "heading": "5.2 SAMPLING", "text": "To sample from the multiscale model we iteratively sample a value for xg conditioned on x<g using the learned distributions defined by the estimated network parameters ψ̂ = (ψ̂1, . . . , ψ̂G). The initial tier, x1, is generated unconditionally by sampling from p(x1; ψ̂1) and subsequent tiers are sampled from p(xg | x<g; ψ̂g). At each tier, the sampled xg is interleaved with the context x<g:\nx<g+1 = interleave(xg, x<g). (12)\nThe interleave function is simply the inverse of the split function. Sampling terminates once a full spectrogram, x<G+1, has been generated. A spectrogram generated by a multiscale model is shown in Figure 4 and the sampling procedure is visualized schematically in Figure 5." }, { "heading": "6 EXPERIMENTS", "text": "To demonstrate that MelNet is broadly applicable as a generative model for audio, we train the model on a diverse set of audio generation tasks (single-speaker speech generation, multi-speaker speech generation, and music generation) using three publicly available datasets. Generated audio samples for each task are available on the accompanying web page https://audio-samples.github.io. We include samples generated using the priming and biasing procedures described by Graves (2013). Biasing lowers the temperature of the predictive distribution and priming seeds the model state with a given sequence of audio prior to sampling. Hyperparameters for all experiments are available in Appendix A.\nSpeech and music have rich hierarchies of latent structure. Speech has complex linguistic structure (phonemes, words, syntax, semantics, etc.) and music has highly compositional musical structure (notes, chords, melody and rhythm, etc.). The presence of these latent structures in generated samples can be used as a proxy for how well a generative model has learned dependencies at various timescales. As such, a qualitative analysis of unconditional samples is an insightful method of evaluating generative models of audio. To facilitate such a qualitative evaluation, we train MelNet on each of the three unconditional generation tasks and include samples on the accompanying web page. For completeness, we briefly provide some of our own qualitative observations regarding the generated samples (Sections 6.1, 6.2, and 6.3). In addition to qualitative analysis, we conduct a human evaluation experiment to quantitatively compare how well WaveNet and MelNet capture high-level structure (Section 6.4). Lastly, we ablate the impact of the multiscale generation procedure on MelNet’s ability to model long-range dependencies (Section 6.5)." }, { "heading": "6.1 SINGLE-SPEAKER SPEECH", "text": "To test MelNet’s ability to model a single speaker in a controlled environment, we utilize the Blizzard 2013 dataset (King, 2011), which consists of audiobook narration performed in a highly animated manner by a professional speaker. We find that MelNet frequently generates samples that contain coherent words and phrases. Even when the model generates incoherent speech, the intonation, prosody, and speaking style remain consistent throughout the duration of the sample. Furthermore, the model learns to produce speech using a variety of character voices and learns to generate samples which contain elements of narration and dialogue.
Biased samples tend to contain longer strings of comprehensible words but are read in a less expressive fashion. When primed with a real sequence of audio, MelNet is able to continue sampling speech which has consistent speaking style and intonation." }, { "heading": "6.2 MULTI-SPEAKER SPEECH", "text": "Audiobook data is recorded in a highly controlled environment. To demonstrate MelNet’s capacity to model distributions with significantly more variation, we utilize the VoxCeleb2 dataset (Chung et al., 2018). The VoxCeleb2 dataset consists of over 2,000 hours of speech data captured with real world noise including laughter, cross-talk, channel effects, music and other sounds. The dataset is also multilingual, with speech from speakers of 145 different nationalities, covering a wide range of accents, ages, ethnicities and languages. When trained on the VoxCeleb2 dataset, we find that MelNet is able to generate unconditional samples with significant variation in both speaker characteristics (accent, language, prosody, speaking style) as well as acoustic conditions (background noise and recording quality). While the generated speech is often not comprehensible, samples can often be identified as belonging to a specific language, indicating that the model has learned distinct modalities for different languages. Furthermore, it is difficult to distinguish real and fake samples which are spoken in foreign languages. For foreign languages, semantic structures are not understood by the listener and cannot be used to discriminate between real and fake. Consequently, the listener must rely largely on phonetic structure, which MelNet is able to realistically model." }, { "heading": "6.3 MUSIC", "text": "To show that MelNet can model audio modalities other than speech, we apply the model to the task of unconditional music generation. We utilize the MAESTRO dataset (Hawthorne et al., 2018), which consists of over 172 hours of solo piano performances. The samples demonstrate that MelNet learns musical structures such as melody and harmony. Furthermore, generated samples often maintain consistent tempo and contain interesting variation in volume, timbre, and rhythm.\nWaveNet MelNet\nBlizzard 0.0% 100.0%\nVoxCeleb2 0.0% 100.0%\nMAESTRO 4.2% 95.8%\n(a) Comparison between MelNet and WaveNet. Both models are trained in an entirely unsupervised manner.\nWave2Midi2Wave MelNet\nMAESTRO 37.7% 62.3%\n(b) Comparison between MelNet and Wave2Midi2Wave. Wave2Midi2Wave is a two-stage model consisting of a Music Transformer trained on labelled MIDI followed by a conditional WaveNet model. The MelNet model, on the other hand, is trained without any intermediate supervision.\nTable 1: Selection rates of human evaluators when asked to identify which model generates samples with longer-term structure. Results show that MelNet captures long-range structure better than WaveNet. Furthermore, MelNet outperforms a two-stage model which conditions WaveNet on generated MIDI." }, { "heading": "6.4 HUMAN EVALUATION", "text": "Making quantitative comparisons with existing generative models such as WaveNet is difficult for various reasons and previous works have ultimately relied on largely empirical evaluations by the reader (Dieleman et al., 2018). To allow the reader to make these judgements for themselves, we provide samples from both WaveNet and MelNet for each of the tasks described in the previous sections.
Furthermore, in an effort to provide quantitative metrics to support the claim that MelNet generates samples with improved long-range structure in comparison to WaveNet, we conduct a human experiment whereby participants are presented anonymized samples from both models and asked to select which sample exhibits longer-term structure. We resort to such evaluations since standard metrics for evaluation of generative models, such as density estimates, cannot be used to compare WaveNet and MelNet, as these models operate on different representations.\nThe methodology for this experiment is as follows. For each of the three unconditional audio generation tasks, we generated 50 samples from WaveNet and 50 samples from MelNet. Participants were shown an anonymized, randomly-drawn sample from each model and instructed to “select the sample which has more coherent long-term structure.” We collected 50 evaluations for each task. Results, shown in Table 1a, show that evaluators overwhelmingly agreed that samples generated by MelNet had more coherent long-range structure than samples from WaveNet across all tasks.\nIn addition to comparing MelNet to an unconditional WaveNet model for music generation, we also compare to a two-stage Wave2Midi2Wave model (Hawthorne et al., 2018) which conditions WaveNet on MIDI generated by a separately-trained Music Transformer (Huang et al., 2018). The two-stage Wave2Midi2Wave model has the advantage of directly modelling labelled musical notes which distill much of the salient, high-level structure in music into a compact symbolic representation. Despite this, as shown by the results in Table 1b, the two-stage model does not capture long-range structure as well as a MelNet model that is trained without access to any intermediate representations." }, { "heading": "6.5 ABLATION: MULTISCALE MODELLING", "text": "To isolate the impact of the multiscale modelling procedure described in Section 5, we train models with varying numbers of tiers and evaluate the long-term coherence of their respective samples. As noted before, long-term coherence is difficult to quantify and we provide samples on the accompanying web page so that the reader can make their own judgements. We believe the samples clearly demonstrate that increasing the number of tiers results in samples with more coherent high-level structure. We note that our experiment varies the number of tiers from two to five. Training a single-tier model on full-resolution spectrograms was prohibitively expensive in terms of memory consumption. This highlights another benefit of multiscale modelling—large, deep networks can be allocated to learning complex distributional structure in the initial tiers while shallower networks can be used for modelling the relatively simple, low-entropy distributions in the upsampling tiers. This allows multiscale models to effectively allocate network capacity in proportion to the complexity of the modelling task." }, { "heading": "7 RELATED WORK", "text": "The predominant line of research regarding generative models for audio has been directed towards modelling time-domain waveforms with autoregressive models (van den Oord et al., 2016a; Mehri et al., 2016; Kalchbrenner et al., 2018). WaveNet is a competitive baseline for audio generation, and as such, is used for comparison in many of our experiments. However, we note that the contribution of our work is in many ways complementary to that of WaveNet.
MelNet is more proficient at capturing high-level structure, whereas WaveNet is capable of producing higher-fidelity audio. Several works have demonstrated that time-domain models can be used to invert spectral representations to highfidelity audio (Shen et al., 2018; Prenger et al., 2019; Arık et al., 2019), suggesting that MelNet could be used in concert with time-domain models such as WaveNet.\nDieleman et al. (2018) and van den Oord et al. (2017) capture long-range dependencies in waveforms by utilizing a hierarchy of autoencoders. This approach requires multiple stages of models which must be trained sequentially, whereas the multiscale approach in this work can be parallelized over tiers. Additionally, these approaches do not directly optimize the data likelihood, nor do they admit tractable marginalization over the latent codes. We also note that the modelling techniques devised in these works can be broadly applied to autoregressive models such as ours, making their contributions largely complementary to ours.\nRecent works have used generative adversarial networks (GANs) (Goodfellow et al., 2014) to model both waveforms and spectral representations (Donahue et al., 2018; Engel et al., 2018). As with image generation, it remains unclear whether GANs capture all modes of the data distribution. Furthermore, these approaches are restricted to generating fixed-duration segments of audio, which precludes their usage in many audio generation tasks.\nGenerating spectral representations is common practice for end-to-end text-to-speech models (Ping et al., 2017; Sotelo et al., 2017; Wang et al., 2017; Taigman et al., 2018). However, these models use probabilistic models which are much less expressive than the fine-grained autoregressive model used by MelNet. Consequently, these models are unsuitable for modelling high-entropy, multimodal distributions such as those involved in tasks like unconditional music generation.\nThe network architecture used for MelNet is heavily influenced by recent advancements in deep autoregressive models for images. Theis & Bethge (2015) introduced an LSTM architecture for autoregressive modelling of 2D images and van den Oord et al. (2016c) introduced PixelRNN and PixelCNN and scaled up the models to handle the modelling of natural images. Subsequent works in autoregressive image modelling have steadily improved state-of-the-art for image density estimation (van den Oord et al., 2016b; Salimans et al., 2017; Parmar et al., 2018; Chen et al., 2017; Child et al., 2019). We draw inspiration from many of these models, and ultimately design a recurrent architecture of our own which is suitable for modelling spectrograms rather than images. We note that our choice of architecture is not a fundamental contribution of this work. While we have designed the architecture particularly for modelling spectrograms, we did not experimentally validate whether it outperforms existing architectures and make no such claims to this effect.\nWe use a multidimensional recurrence in both the time-delayed stack and the upsampling tiers to extract features from two-dimensional inputs. Our multidimensional recurrence is effectively ‘factorized’ as it independently applies one-dimensional RNNs across each dimension. This approach differs from the tightly coupled multidimensional recurrences used by MDRNNs (Graves et al., 2007; Graves & Schmidhuber, 2009) and GridLSTMs (Kalchbrenner et al., 2015) and more closely resembles the approach taken by ReNet (Visin et al., 2015). 
Our approach allows for efficient training as we can extract features from an M ×N grid in max(M,N) sequential recurrent steps rather than the M + N sequential steps required for tightly coupled recurrences. Additionally, our approach enables the use of highly optimized one-dimensional RNN implementations.\nVarious approaches to image generation have succeeded in generating high-resolution, globally coherent images with hundreds of thousands of dimensions (Karras et al., 2017; Reed et al., 2017; Kingma & Dhariwal, 2018). The methods introduced in these works are not directly transferable to waveform generation, as they exploit spatial properties of images which are absent in one-dimensional audio signals. However, these methods are more straightforwardly applicable to two-dimensional representations such as spectrograms. Of particular relevance to our work are approaches which combine autoregressive models with multiscale modelling (van den Oord et al., 2016c; Dahl et al.,\n2017; Reed et al., 2017; Menick & Kalchbrenner, 2018). Our work demonstrates that the benefits of a multiscale autoregressive model extend beyond the task of image generation, and can be used to generate high-resolution, globally coherent spectrograms." }, { "heading": "8 CONCLUSION & FUTURE WORK", "text": "We have introduced MelNet, a generative model for spectral representations of audio. MelNet combines a highly expressive autoregressive model with a multiscale modelling scheme to generate high-resolution spectrograms with realistic structure on both local and global scales. In comparison to previous works which model time-domain signals directly, MelNet is particularly well-suited to model long-range temporal dependencies. Experiments show promising results across a diverse set of audio generation tasks.\nFurthermore, we believe MelNet provides a foundation for various directions of future work. Two particularly promising directions are text-to-speech synthesis and representation learning:\n• Text-to-Speech Synthesis: MelNet utilizes a more flexible probabilistic model than existing end-to-end text-to-speech models, making it well-suited to model expressive, multi-modal speech data. • Representation Learning: MelNet is able to uncover salient structure from large quantities\nof unlabelled audio. Large-scale, pre-trained autoregressive models for language modelling have demonstrated significant benefits when fine-tuned for downstream tasks. Likewise, representations learned by MelNet could potentially aid downstream tasks such as speech recognition." }, { "heading": "A APPENDIX", "text": "A.1 HYPERPARAMETERS & TRAINING DETAILS\nAll RNNs use LSTM cells (Hochreiter & Schmidhuber, 1997). All models are trained with RMSProp (Tieleman & Hinton, 2012) with a learning rate of 10−4 and momentum of 0.9. The initial values for all recurrent states are trainable parameters. A single hyperparameter controls the width of the network—all hidden sizes (RNN state size, residual connections, etc.) 
are defined by a single value, denoted hidden size in Table 2.\nTable 2: MelNet hyperparameters.\nBlizzard MAESTRO VoxCeleb2\nTiers 6 4 5\nLayers (Initial Tier) 12 16 16\nLayers (Upsampling Tiers) 5-4-3-2-2 6-5-4 6-5-4-3\nHidden Size 512 512 512\nGMM Mixture Components 10 10 10\nBatch Size 32 16 128\nSample Rate (Hz) 22,050 22,050 16,000\nMax Sample Duration (s) 10 6 6\nMel Channels 256 256 180\nSTFT Hop Size 256 256 180\nSTFT Window Size 6 · 256 6 · 256 6 · 180\nA.2 WAVENET BASELINE\nThe human evaluation experiments require samples from a baseline WaveNet model. For the Blizzard and VoxCeleb2 datasets, we use our own reimplementation. Our WaveNet model uses 8-bit µ-law encoding and models each sample with a discrete distribution. Each model is trained for 150,000 steps. We use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 0.001 and batch size of 32. Additional hyperparameters are reported in Table 3.\nWe do not use our WaveNet implementation for human evaluation on the MAESTRO dataset. The authors that introduce this dataset provide roughly 2 minutes of audio samples on their website for both unconditional WaveNet and Wave2Midi2Wave models. We generate 50 random 10 second slices from these 2 minutes and directly use them for the human evaluations." } ]
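For reference, the 8-bit µ-law companding mentioned in A.2 is a standard transformation; the following NumPy sketch is our own illustration rather than code from the paper.

```python
# Standard 8-bit mu-law companding (as mentioned in A.2); our own sketch.
import numpy as np

def mu_law_encode(y, mu=255):
    """Quantize a waveform y in [-1, 1] into mu + 1 = 256 discrete classes."""
    compressed = np.sign(y) * np.log1p(mu * np.abs(y)) / np.log1p(mu)
    return np.round((compressed + 1) / 2 * mu).astype(np.int64)

def mu_law_decode(c, mu=255):
    """Map class indices back to approximate waveform values in [-1, 1]."""
    compressed = 2 * c.astype(np.float64) / mu - 1
    return np.sign(compressed) * ((1 + mu) ** np.abs(compressed) - 1) / mu

y = np.sin(np.linspace(0, 8 * np.pi, 1000))  # toy waveform
assert np.max(np.abs(mu_law_decode(mu_law_encode(y)) - y)) < 0.05
```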
2019
null
SP:0f0e048d70d90c7b55524e88954e71efb168cee9
[ "The paper proposes an interpretable architecture for image classification based on a scattering transform and sparse dictionary learning approach. The scattering transform acts as a pre-trained interpretable feature extractor that does not require data. A sparse dictionary on top of this representation (the scattering coefficients) is learnt to minimize the classification error. The authors cast the dictionary learning as a classical CNN learning approach and implement an efficient solution via homotopy learning (given that some assumptions are fulfilled). The scattering transform approach is not new (as the authors mention in the paper, it was published in Oyallon et al., 2019). The main novelty comes from applying a previously published dictionary learning approach (as the authors mention in the paper, it was published in Jiao et al., 2017) on top to boost the performance. As a second contribution, the authors extend the exponential convergence proof of ISTC (Jiao et al., 2017) and ALISTA (Liu et al., 2019). In the experiments, they show that the proposed architecture, despite its simplicity, outperform AlexNet in the ImageNet classification problem.", "The paper proposes a network architecture composed of three interpretable components followed by a simple MLP classifier. It first applies a scattering transform followed by a learned linear projection (to reduce dimensionality). A sparse representation of these coefficients is then obtained using dictionary learning. The projection, dictionary and MLP classifier are jointly trained to minimize the classification loss. Results show that the model outperforms AlexNet on the Imagenet benchmark." ]
We introduce a sparse scattering deep convolutional neural network, which provides a simple model to analyze properties of deep representation learning for classification. Learning a single dictionary matrix with a classifier yields a higher classification accuracy than AlexNet over the ImageNet 2012 dataset. The network first applies a scattering transform that linearizes variabilities due to geometric transformations such as translations and small deformations. A sparse `1 dictionary coding reduces intra-class variability while preserving class separation through projections over unions of linear spaces. It is implemented in a deep convolutional network with a homotopy algorithm having an exponential convergence. A convergence proof is given in a general framework that includes ALISTA. Classification results are analyzed on ImageNet.
[ { "affiliations": [], "name": "John Zarka" }, { "affiliations": [], "name": "Louis Thiry" }, { "affiliations": [], "name": "Tomás Angles" } ]
[ { "authors": [ "M. Andreux", "T. Angles", "G. Exarchakis", "R. Leonarduzzi", "G. Rochette", "L. Thiry", "J. Zarka", "S. Mallat", "J. Andén", "E. Belilovsky", "J. Bruna", "V. Lostanlen", "M.J. Hirn", "E. Oyallon", "S. Zhang", "C.E. Cella", "M. Eickenberg" ], "title": "Kymatio: Scattering transforms in python", "venue": "URL http: //arxiv.org/abs/1812.11214", "year": 2018 }, { "authors": [ "A. Beck", "M. Teboulle" ], "title": "A fast iterative shrinkage-thresholding algorithm for linear inverse problems", "venue": "SIAM J. Imaging Sciences,", "year": 2009 }, { "authors": [ "J. Bruna", "S. Mallat" ], "title": "Invariant scattering convolution networks", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2013 }, { "authors": [ "E.J. Candes", "J. Romberg", "T. Tao" ], "title": "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information", "venue": "IEEE Transactions on Information Theory,", "year": 2006 }, { "authors": [ "P.L. Combettes", "JC. Pesquet" ], "title": "Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering", "venue": null, "year": 2011 }, { "authors": [ "I. Daubechies", "M. Defrise", "C. De Mol" ], "title": "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint", "venue": "Communications on Pure and Applied Mathematics,", "year": 2004 }, { "authors": [ "G. Davis", "S. Mallat", "M. Avellaneda" ], "title": "Adaptive greedy approximations", "venue": "Constr. Approx.,", "year": 1997 }, { "authors": [ "D.L. Donoho", "M. Elad" ], "title": "On the stability of the basis pursuit in the presence of noise", "venue": "Signal Processing,", "year": 2006 }, { "authors": [ "D.L. Donoho", "Y. Tsaig" ], "title": "Fast solution of l1-norm minimization problems when the solution may be sparse", "venue": "IEEE Trans. Information Theory,", "year": 2008 }, { "authors": [ "K. Gregor", "Y. LeCun" ], "title": "Learning fast approximations of sparse coding", "venue": "In ICML, pp", "year": 2010 }, { "authors": [ "Y. Jiao", "B. Jin", "X. Lu" ], "title": "Iterative soft/hard thresholding with homotopy continuation for sparse recovery", "venue": "IEEE Signal Processing Letters,", "year": 2017 }, { "authors": [ "A. Krizhevsky", "I. Sutskever", "G.E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "J. Liu", "X. Chen", "Z. Wang", "W. Yin" ], "title": "ALISTA: Analytic weights are as good as learned weights in LISTA", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "S. Mahdizadehaghdam", "A. Panahi", "H. Krim", "L. Dai" ], "title": "Deep dictionary learning: A parametric network approach", "venue": "IEEE Transactions on Image Processing,", "year": 2019 }, { "authors": [ "J. Mairal", "J. Ponce", "G. Sapiro", "A. Zisserman", "F. Bach" ], "title": "Supervised dictionary learning", "venue": "In Advances in neural information processing systems,", "year": 2009 }, { "authors": [ "J. Mairal", "F. Bach", "J. Ponce" ], "title": "Task-driven dictionary learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2011 }, { "authors": [ "S. Mallat" ], "title": "Group invariant scattering", "venue": "Comm. Pure Appl. Math.,", "year": 2012 }, { "authors": [ "S. Mallat" ], "title": "Understanding deep convolutional networks", "venue": "Phil. Trans. 
of Royal Society A,", "year": 2016 }, { "authors": [ "S. Mallat", "Z. Zhang" ], "title": "Matching pursuits with time-frequency dictionaries", "venue": "Trans. Sig. Proc.,", "year": 1993 }, { "authors": [ "M.R. Osborne", "B. Presnell", "B.A" ], "title": "Turlach. A new approach to variable selection in least squares problems", "venue": "IMA journal of numerical analysis,", "year": 2000 }, { "authors": [ "E. Oyallon", "S. Mallat" ], "title": "Deep roto-translation scattering for object classification", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "E. Oyallon", "E. Belilovsky", "S. Zagoruyko" ], "title": "Scaling the scattering transform: Deep hybrid networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "E. Oyallon", "S. Zagoruyko", "G. Huang", "N. Komodakis", "S. Lacoste-Julien", "M. Blaschko", "E. Belilovsky" ], "title": "Scattering networks for hybrid representation learning", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2019 }, { "authors": [ "V. Papyan", "Y. Romano", "M. Elad" ], "title": "Convolutional neural networks analyzed via convolutional sparse coding", "venue": "Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "F. Perronnin", "D. Larlus" ], "title": "Fisher vectors meet neural networks: A hybrid classification architecture", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "O. Russakovsky", "J. Deng", "H. Su", "J. Krause", "S. Satheesh", "S. Ma", "Z. Huang", "A. Karpathy", "A. Khosla", "M. Bernstein", "A.C. Berg", "L. Fei-Fei" ], "title": "ImageNet Large Scale Visual Recognition Challenge", "venue": "International Journal of Computer Vision (IJCV),", "year": 2015 }, { "authors": [ "J. Sulam", "V. Papyan", "Y. Romano", "M. Elad" ], "title": "Multilayer convolutional sparse modeling: Pursuit and dictionary learning", "venue": "IEEE Transactions on Signal Processing,", "year": 2018 }, { "authors": [ "X. Sun", "N.M. Nasrabadi", "T.D. Tran" ], "title": "Supervised deep sparse coding networks", "venue": "In 2018 25th IEEE International Conference on Image Processing (ICIP),", "year": 2018 }, { "authors": [ "J. Sánchez", "F. Perronnin" ], "title": "High-dimensional signature compression for large-scale image classification", "venue": "In CVPR,", "year": 2011 }, { "authors": [ "L. Xiao", "T. Zhang" ], "title": "A proximal-gradient homotopy method for the sparse least-square problem", "venue": "SIAM Journl on Optimization,", "year": 2013 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep convolutional networks have spectacular applications to classification and regression (LeCun et al., 2015), but they are black boxes that are hard to analyze mathematically because of their architecture complexity. Scattering transforms are simplified convolutional neural networks with wavelet filters which are not learned (Bruna & Mallat, 2013). They provide state-of-the-art classification results among predefined or unsupervised representations, and are nearly as efficient as learned deep networks on relatively simple image datasets, such as digits in MNIST, textures (Bruna & Mallat, 2013) or small CIFAR images (Oyallon & Mallat, 2014; Mallat, 2016). However, over complex datasets such as ImageNet, the classification accuracy of a learned deep convolutional network is much higher than a scattering transform or any other predefined representation (Oyallon et al., 2019). A fundamental issue is to understand the source of this improvement. This paper addresses this question by showing that one can reduce the learning to a single dictionary matrix, which is used to compute a positive sparse `1 code.\nThe resulting algorithm is implemented with a simplified convolutional neural network architecture illustrated in Figure 1. The classifier input is a positive `1 sparse code of scattering coefficients calculated in a dictionary D. The matrix D is learned together with the classifier by minimizing a classification loss over a training set. We show that learning D improves the performance of a scattering\nrepresentation considerably and is sufficient to reach a higher accuracy than AlexNet (Krizhevsky et al., 2012) over ImageNet 2012. This cascade of well understood mathematical operators provides a simplified mathematical model to analyze optimization and classification performances of deep neural networks.\nDictionary learning for classification was introduced in Mairal et al. (2009) and implemented with deep convolutional neural network architectures by several authors (Sulam et al., 2018; Mahdizadehaghdam et al., 2019; Sun et al., 2018). To reach good classification accuracies, these networks cascade several dictionary learning blocks. As a result, there is no indication that these operators compute optimal sparse `1 codes. These architectures are thus difficult to analyze mathematically and involve heavy calculations. They have only been applied to small image classification problems such as MNIST or CIFAR, as opposed to ImageNet. Our architecture reaches a high classification performance on ImageNet with only one dictionaryD, because it is applied to scattering coefficients as opposed to raw images. Intra-class variabilities due to geometric image transformations such as translations or small deformations are linearized by a scattering transform (Bruna & Mallat, 2013), which avoids unnecessary learning.\nLearning a dictionary in a deep neural network requires to implement a sparse `1 code. We show that homotopy iterative thresholding algorithms lead to more efficient sparse coding implementations with fewer layers. We prove their exponential convergence in a general framework that includes the ALISTA (Liu et al., 2019) algorithm. The main contributions of the paper are summarized below:\n• A sparse scattering network architecture, illustrated in Figure 1, where the classification is performed over a sparse code computed with a single learned dictionary of scattering coefficients. 
It outperforms AlexNet over ImageNet 2012.\n• A new dictionary learning algorithm with homotopy sparse coding, optimized by gradient descent in a deep convolutional network. If the dictionary is sufficiently incoherent, the homotopy sparse coding error is proved to converge exponentially.\nWe explain the implementation and mathematical properties of each element of the sparse scattering network. Section 2 briefly reviews multiscale scattering transforms. Section 3 introduces homotopy dictionary learning for classification, with a proof of exponential convergence under appropriate assumptions. Section 4 analyzes image classification results of sparse scattering networks on ImageNet 2012." }, { "heading": "2 SCATTERING TRANSFORM", "text": "A scattering transform is a cascade of wavelet transforms and ReLU or modulus non-linearities. It can be interpreted as a deep convolutional network with predefined wavelet filters (Mallat, 2016). For images, wavelet filters are calculated from a mother complex wavelet ψ whose average is zero. It is rotated by r−θ, dilated by 2j and its phase is shifted by α:\nψj,θ(u) = 2−2j ψ(2−j r−θ u) and ψj,θ,α = Real(e−iα ψj,θ)\nWe choose a Morlet wavelet as in Bruna & Mallat (2013) to produce a sparse set of non-negligible wavelet coefficients. A ReLU is written ρ(a) = max(a, 0).\nScattering coefficients of order m = 1 are computed by averaging rectified wavelet coefficients with a subsampling stride of 2J :\nSx(u, k, α) = ρ(x ? ψj,θ,α) ? φJ(2Ju) with k = (j, θ)\nwhere φJ is a Gaussian dilated by 2J (Bruna & Mallat, 2013). The averaging by φJ eliminates the variations of ρ(x ? ψj,θ,α) at scales smaller than 2J . This information is recovered by computing their variations at all scales 2j′ < 2J , with a second wavelet transform. Scattering coefficients of order two are:\nSx(u, k, k′, α, α′) = ρ(ρ(x ? ψj,θ,α) ? ψj′,θ′,α′) ? φJ(2Ju) with k, k′ = (j, θ), (j′, θ′)\nTo reduce the dimension of scattering vectors, we define phase invariant second order scattering coefficients with a complex modulus instead of a phase sensitive ReLU:\nSx(u, k, k′) = ||x ? ψj,θ| ? ψj′,θ′ | ? φJ(2Ju) for j′ > j\nThe scattering representation includes order 1 coefficients and order 2 phase invariant coefficients. In this paper, we choose J = 4 and hence 4 scales 1 ≤ j ≤ J , 8 angles θ and 4 phases α on [0, 2π]. Scattering coefficients are computed with the software package Kymatio (Andreux et al., 2018). They preserve the image information, and x can be recovered from Sx (Oyallon et al., 2019). For computational efficiency, the dimension of scattering vectors can be reduced by a factor 6 with a linear operator L that preserves the ability to recover a close approximation of x from LSx. The dimension reduction operator L of Figure 1 may be an orthogonal projection over the principal directions of a PCA calculated on the training set, or it can be optimized by gradient descent together with the other network parameters.\nThe scattering transform is Lipschitz continuous to translations and deformations (Mallat, 2012). Intra-class variabilities due to translations smaller than 2J and small deformations are linearized. Good classification accuracies are obtained with a linear classifier over scattering coefficients in image datasets where translations and deformations dominate intra-class variabilities. This is the case for digits in MNIST or texture images (Bruna & Mallat, 2013).
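Since the text states that scattering coefficients are computed with the Kymatio package, a short sketch of such a computation is given below. The exact import path and constructor arguments are our assumption and may differ across Kymatio versions; note also that the standard transform returns modulus-based coefficients, not the phase-sensitive order-1 channels described above.

```python
# Sketch of a second-order scattering computation with Kymatio (assumed API).
import torch
from kymatio.torch import Scattering2D

# J = 4 scales and 8 angles, matching the choices stated in the text.
scattering = Scattering2D(J=4, shape=(224, 224), L=8, max_order=2)

x = torch.randn(1, 3, 224, 224)   # a batch with one color image
Sx = scattering(x)                # order-0, order-1 and order-2 coefficients
print(Sx.shape)                   # spatial grid subsampled by 2**J = 16
```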
However, it does not take into account variabilities of pattern structures and clutter which dominate complex image datasets. To remove this clutter while preserving class separation requires some form of supervised learning. The sparse scattering network of Figure 1 computes a sparse code of the scattering representation β = LSx in a learned dictionary D of scattering features, which minimizes the classification loss. For this purpose, the next section introduces a homotopy dictionary learning algorithm, implemented in a small convolutional network." }, { "heading": "3 HOMOTOPY DICTIONARY LEARNING FOR CLASSIFICATION", "text": "Task-driven dictionary learning for classification with sparse coding was proposed in Mairal et al. (2011). We introduce a small convolutional network architecture to implement a sparse `1 code and learn the dictionary with a homotopy continuation on thresholds. The next section reviews dictionary learning for classification. Homotopy sparse coding algorithms are studied in Section 3.2." }, { "heading": "3.1 SPARSE CODING AND DICTIONARY LEARNING", "text": "Unless specified, all norms are Euclidean norms. A sparse code approximates a vector β with a linear combination of a minimum number of columns Dm of a dictionary matrix D, which are normalized ‖Dm‖ = 1. It is a vector α0 of minimum support with a bounded approximation error ‖Dα0 − β‖ ≤ σ. Such sparse codes have been used to optimize signal compression (Mallat & Zhang, 1993) and to remove noise, to solve inverse problems in compressed sensing (Candes et al., 2006), and for classification (Mairal et al., 2011). In this case, the dictionary learning optimizes the matrix D in order to minimize the classification loss. The resulting columns Dm can be interpreted as classification features selected by the sparse code α0. To enforce this interpretation, we impose that sparse code coefficients are positive, α0 ≥ 0.\nPositive sparse coding Minimizing the support of a code α amounts to minimizing its `0 \"norm\", which is not convex. This non-convex optimization is convexified by replacing the `0 norm by an `1 norm. Since α ≥ 0, we have ‖α‖1 = ∑m α(m). The minimization of ‖α‖1 with ‖Dα− β‖ ≤ σ is solved by minimizing a convex Lagrangian with a multiplier λ∗ which depends on σ:\nα1 = argmin α≥0 (1/2) ‖Dα− β‖2 + λ∗ ‖α‖1 (1)\nOne can prove (Donoho & Elad, 2006) that α1(m) has the same support as the minimum support sparse code α0(m) along m if the support size s and the dictionary coherence satisfy:\ns µ(D) < 1/2 where µ(D) = max m ≠ m′ |DtmDm′ | (2)\nThe sparse approximation Dα1 is a non-linear filtering which preserves the components of β which are \"coherent\" in the dictionary D, represented by few large amplitude coefficients. It eliminates the \"noise\" corresponding to incoherent components of β whose correlations with all dictionary vectors Dm are typically below λ∗, which can be interpreted as a threshold.\nSupervised dictionary learning with a deep neural network Dictionary learning for classification amounts to optimizing the matrix D and the threshold λ∗ to minimize the classification loss on a training set {(xi, yi)}i. This is a much more difficult non-convex optimization problem than the convex sparse coding problem (1). The sparse code α1 of each scattering representation β = LSx depends upon D and λ∗. It is used as an input to a classifier parametrized by Θ. The classification loss ∑i Loss(D,λ∗,Θ, xi, yi) thus depends upon the dictionary D and λ∗ (through α1), and on the classification parameters Θ.
The dictionary D is learned by minimizing the classification loss. This task-driven dictionary learning strategy was introduced in Mairal et al. (2011).\nAn implementation of the task-driven dictionary learning strategy with deep neural networks has been proposed in (Papyan et al., 2017; Sulam et al., 2018; Mahdizadehaghdam et al., 2019; Sun et al., 2018). The deep network is designed to approximate the sparse code by unrolling a fixed number N of iterations of an iterative soft thresholding algorithm. The network takes β as input and is parametrized by the dictionary D and the Lagrange multiplier λ∗, as shown in Figure 2. The classification loss is then minimized with stochastic gradient descent on the classifier parameters and on D and λ∗. The number of layers in the network is equal to the number N of iterations used to approximate the sparse code. During training, the forward pass approximates the sparse code with respect to the current dictionary, and the backward pass updates the dictionary through a stochastic gradient descent step.\nFor computational efficiency the main issue is to approximate α1 with as few layers as possible and hence to find an iterative algorithm which converges quickly. The next section shows that this can be done with homotopy algorithms, which can have an exponential convergence." }, { "heading": "3.2 HOMOTOPY ITERATED SOFT THRESHOLDING ALGORITHMS", "text": "Sparse `1 codes are efficiently computed with iterative proximal gradient algorithms (Combettes & Pesquet, 2011). For a positive sparse code, these algorithms iteratively apply a linear operator and a rectifier which acts as a positive thresholding. They can thus be implemented in a deep neural network. We show that homotopy algorithms can converge exponentially and thus lead to precise calculations with fewer layers.\nIterated Positive Soft Thresholding with ReLU Proximal gradient algorithms compute sparse `1 codes with a gradient step on the regression term ‖Dα− β‖2 followed by a proximal projection which enforces the sparse penalization (Combettes & Pesquet, 2011). For a positive sparse code, the proximal projection is defined by:\nproxλ(β) = argmin α≥0 (1/2) ‖α− β‖2 + λ ‖α‖1 (3)\nSince ‖α‖1 = ∑m α(m) for α(m) ≥ 0, we verify that proxλ(β) = ρ(β − λ) where ρ(a) = max(a, 0) is a rectifier, with a bias λ. The rectifier acts as a positive soft thresholding, where λ is the threshold. Without the positivity condition α ≥ 0, the proximal operator in (3) is a soft thresholding which preserves the sign.\nAn Iterated Soft Thresholding Algorithm (ISTA) (Daubechies et al., 2004) computes an `1 sparse code α1 by alternating a gradient step on ‖Dα− β‖2 and a proximal projection. For positive codes, it is initialized with α0 = 0, and:\nαn+1 = ρ(αn + εDt(β −Dαn)− ελ∗) with ε < 1/‖DtD‖2,2 (4)\nwhere ‖ . ‖2,2 is the spectral norm. The first iteration computes a non-sparse code α1 = ρ(εDtβ − ελ∗) which is progressively sparsified by iterated thresholdings. The convergence is slow: ‖αn−α1‖ = O(n−1). The Fast Iterated Soft Thresholding Algorithm (FISTA) (Beck & Teboulle, 2009) accelerates the error decay to O(n−2), but it remains slow.\nEach iteration of ISTA and FISTA is computed with linear operators and a thresholding and can be implemented with one layer (Papyan et al., 2017). The slow convergence of these algorithms requires a large number N of layers to compute an accurate sparse `1 code.
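The positive ISTA recursion of Eq. (4) fits in a few lines. The following NumPy sketch is our own illustration, with a synthetic dictionary and an arbitrary choice of λ∗.

```python
# NumPy sketch of positive ISTA (Eq. 4): gradient step + ReLU proximal step.
import numpy as np

def ista_positive(D, beta, lam, n_iter=500):
    """Positive l1 sparse coding of beta in D (columns normalized)."""
    eps = 0.9 / np.linalg.norm(D.T @ D, 2)  # step size < 1 / ||D^t D||_{2,2}
    alpha = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (beta - D @ alpha)
        alpha = np.maximum(alpha + eps * grad - eps * lam, 0.0)  # rho(. - lam)
    return alpha

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                       # ||D_m|| = 1
alpha_true = np.maximum(rng.standard_normal(256) - 2.0, 0.0)  # sparse, positive
beta = D @ alpha_true
alpha = ista_positive(D, beta, lam=0.1)
```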
Homotopy continuation Homotopy continuation algorithms, introduced in Osborne et al. (2000), minimize the ℓ1 Lagrangian (1) by progressively decreasing the Lagrange multiplier. This optimization path is opposite to that of ISTA and FISTA since it begins with a very sparse initial solution whose sparsity is progressively reduced, similarly to matching pursuit algorithms (Davis et al., 1997; Donoho & Tsaig, 2008). Homotopy algorithms are particularly efficient if the final Lagrange multiplier λ* is large and thus produces a very sparse optimal solution. We shall see that this is the case for classification.

Homotopy proximal gradient descents (Xiao & Zhang, 2013) are implemented with an exponentially decreasing sequence of Lagrange multipliers λ_n for n ≤ N. Jiao, Jin and Lu (Jiao et al., 2017) have introduced an Iterative Soft Thresholding Continuation (ISTC) algorithm with a fixed number of iterations per threshold. To compute a positive sparse code, we replace the soft thresholding by a ReLU proximal projector, with one iteration per threshold, over n ≤ N iterations:

α^n = ρ(α^{n−1} + D^t(β − Dα^{n−1}) − λ_n)   with   λ_n = λ_max (λ_max / λ*)^{−n/N}   (5)

By adapting the proof of (Jiao et al., 2017) to positive codes, the next theorem proves in a more general framework that if N is sufficiently large and λ_max ≥ ‖D^t β‖_∞, then α^n converges exponentially to the optimal positive sparse code.

The LISTA algorithm (Gregor & LeCun, 2010) and its more recent version ALISTA (Liu et al., 2019) accelerate the convergence of proximal algorithms by introducing an auxiliary matrix W, which is adapted to the statistics of the input and to the properties of the dictionary. Such an auxiliary matrix may also improve classification accuracy. We study its influence by replacing D^t by an arbitrary matrix W^t in (5). Each column W_m of W is normalized by |W_m^t D_m| = 1. A generalized ISTC is defined for any dictionary D and any auxiliary W by:

α^n = ρ(α^{n−1} + W^t(β − Dα^{n−1}) − λ_n)   with   λ_n = λ_max (λ_max / λ*)^{−n/N}   (6)

If W = D then we recover the original ISTC algorithm (5) (Jiao et al., 2017). Figure 2 illustrates a neural network implementation of this generalized ISTC algorithm over N layers, with side connections. Let us introduce the mutual coherence of W and D:

μ̃ = max_{m ≠ m'} |W_{m'}^t D_m|

The following theorem gives a sufficient condition on this mutual coherence and on the thresholds so that α^n converges exponentially to the optimal sparse code. ALISTA (Liu et al., 2019) is a particular case of generalized ISTC where W is optimized in order to minimize the mutual coherence μ̃. In Section 4.1 we shall optimize W jointly with D, without any analytic mutual coherence minimization as in ALISTA.

Theorem 3.1 Let α_0 be the ℓ0 sparse code of β with error ‖β − Dα_0‖ ≤ σ. If its support size s satisfies

s μ̃ < 1/2   (7)

then the thresholding iterations (6) with

λ_n = λ_max γ^{−n} ≥ λ* = ‖W^t(β − Dα_0)‖_∞ / (1 − 2γ μ̃ s)   (8)

define an α^n whose support is included in the support of α_0 if 1 < γ < (2 μ̃ s)^{−1} and λ_max ≥ ‖W^t β‖_∞. The error then decreases exponentially:

‖α^n − α_0‖_∞ ≤ 2 λ_max γ^{−n}   (9)

The proof is in Appendix A of the supplementary material. It adapts the convergence proof of Jiao et al. (2017) to arbitrary auxiliary matrices W and positive sparse codes. If we set W to minimize the mutual coherence μ̃, then this theorem extends the ALISTA exponential convergence result to the noisy case. It proves exponential convergence by specifying thresholds for a non-zero approximation error σ.
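A minimal numpy sketch of the generalized ISTC iterations of Equation 6 (illustrative only; it assumes λ* < λ_max, and W = D recovers the original ISTC of Equation 5):

import numpy as np

def istc_positive(D, beta, lam_star, n_iters, W=None):
    # Homotopy ISTC: one ReLU thresholding per exponentially decreasing threshold lam_n,
    # decaying from lam_max down to the target Lagrange multiplier lam_star.
    W = D if W is None else W                  # W = D recovers the original ISTC
    lam_max = np.abs(W.T @ beta).max()         # satisfies lam_max >= ||W^t beta||_inf
    alpha = np.zeros(D.shape[1])
    for n in range(1, n_iters + 1):
        lam_n = lam_max * (lam_max / lam_star) ** (-n / n_iters)
        alpha = np.maximum(alpha + W.T @ (beta - D @ alpha) - lam_n, 0.0)
    return alpha

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0, keepdims=True)
alpha = istc_positive(D, beta=rng.normal(size=64), lam_star=1.0, n_iters=12)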
However, one should not get too impressed by this exponential convergence rate, because the condition s μ̃ < 1/2 only applies to very sparse codes in highly incoherent dictionaries. Given a dictionary D, it is usually not possible to find a W which satisfies this hypothesis. However, this sufficient condition is based on a brutal upper bound calculation in the proof. It is not necessary for exponential convergence. The next section studies learned dictionaries for classification on ImageNet and shows that when W = D, the ISTC algorithm converges exponentially although s μ(D) > 1/2. When W is learned independently from D, with no mutual coherence condition, we shall see that the algorithm may not converge." }, { "heading": "4 IMAGE CLASSIFICATION", "text": "The goal of this work is to construct a deep neural network model which is sufficiently simple to be analyzed mathematically, while reaching the accuracy of more complex deep convolutional networks on large classification problems. This is why we concentrate on ImageNet as opposed to MNIST or CIFAR. The next section shows that a single ℓ1 sparse code in a learned dictionary improves considerably the classification performance of a scattering representation, and outperforms AlexNet on ImageNet.¹ We analyze the influence of different architecture components. Section 4.2 compares the convergence of homotopy iterated thresholdings with ISTA and FISTA." }, { "heading": "4.1 IMAGE CLASSIFICATION ON IMAGENET", "text": "ImageNet 2012 (Russakovsky et al., 2015) is a challenging color image dataset of 1.2 million training images and 50,000 validation images, divided into 1000 classes. Prior to convolutional networks, SIFT representations combined with Fisher vector encoding reached a Top 5 classification accuracy of 74.3% with multiple model averaging (Sánchez & Perronnin, 2011). In their PyTorch implementation, the Top 5 accuracy of AlexNet and ResNet-152 is 79.1% and 94.1% respectively.²

The scattering transform Sx at a scale 2^J = 16 of an ImageNet color image is a 14 × 14 spatial array with 1539 channels. If we apply to Sx the same MLP classifier as in AlexNet, with 2 hidden layers of size 4096, ReLU and a dropout rate of 0.3, the Top 5 accuracy is 65.3%. We shall use the same AlexNet-type MLP classifier in all other experiments, or a linear classifier when specified. If we first apply to Sx a 3-layer SLE network of 1x1 convolutions with ReLU and then the same MLP, then the accuracy is improved by 14% and it reaches AlexNet performance (Oyallon et al., 2017). However, there is no mathematical understanding of the operations performed by these three layers, nor of the origin of the improvements, which partly motivates this work.

The sparse scattering architecture is described in Figure 3. A 3 × 3 convolutional operator L is applied on a standardized scattering transform to reduce the number of scattering channels from 1539 to 256. It includes 3.5 × 10^6 learned parameters. The ISTC network illustrated in Figure 2 has N = 12 layers with ReLU and no batch normalization. A smaller network with N = 8 has nearly the same classification accuracy, but the ISTC sparse coding does not converge as well, as explained in Section 4.2.
Increasing N to 14 or 16 has little impact on accuracy and on the code precision.

¹Code to reproduce experiments is available at https://github.com/j-zarka/SparseScatNet
²Accuracies from https://pytorch.org/docs/master/torchvision/models.html

The sparse code is first calculated with a 1 × 1 convolutional dictionary D having 2048 vectors. Dictionary columns D_m have a spatial support of size 1 and thus do not overlap when translated. This preserves a small dictionary coherence so that the iterative thresholding algorithm converges exponentially. The ISTC network takes as input an array LSx of size 14 × 14 × 256 which has been normalized, and outputs a code α_1 of size 14 × 14 × 2048 or a reconstruction Dα_1 of size 14 × 14 × 256. The total number of learned parameters in D is about 5 × 10^5. The output α_1 or Dα_1 of the ISTC network is transformed by a batch normalization and a 5 × 5 average pooling, and then provided as input to the MLP classifier. The representation is computed with 4 × 10^6 parameters in L and D, which is above the 2.5 × 10^6 parameters of AlexNet. Our goal here is not to reduce the number of parameters but to structure the network into well defined mathematical operators.

If we set W = D in the ISTC network, the supervised learning jointly optimizes L, the dictionary D with the Lagrange multiplier λ*, and the MLP classifier parameters. It is done with stochastic gradient descent during 160 epochs, using an initial learning rate of 0.01 with a decay of 0.1 at epochs 60 and 120. With a sparse code as input to the MLP, it has a Top 5 accuracy of 81.0%, which outperforms AlexNet.

If we also jointly optimize W to minimize the classification loss, then the accuracy improves to 83.7%. However, the next section shows that in this case the ISTC network does not compute a sparse ℓ1 code and is therefore not mathematically understood. In the following we thus impose that W = D.

The dimension reduction operator L has a marginal effect in terms of performance. If we eliminate it or if we replace it by an unsupervised PCA dimension reduction, the performance drops by less than 2%, whereas the accuracy drops by almost 16% if we eliminate the sparse coding. The number of learned parameters to compute α_1 then drops from 4 × 10^6 to 5 × 10^5. The considerable improvement brought by the sparse code is further amplified if the MLP classifier is replaced by a much smaller linear classifier. A linear classifier on a scattering vector has a (Top 1, Top 5) accuracy of (26.1%, 44.7%). With an ISTC sparse code with W = D in a learned dictionary, the accuracy jumps to (51.6%, 73.7%) and hence improves by nearly 30%.

The optimization learns a relatively large factor λ* which yields a large approximation error ‖LSx − Dα_1‖/‖LSx‖ ≈ 0.5 and a very sparse code α_1 with about 4% non-zero coefficients. The sparse approximation Dα_1 thus eliminates nearly half of the energy of LSx, which can be interpreted as non-informative "clutter" removal. The sparse approximation Dα_1 of LSx has a small dimension 14 × 14 × 256, similar to the last convolutional layer output of AlexNet. If the MLP classifier is applied to Dα_1 as opposed to α_1, then the accuracy drops by less than 2% and remains slightly above AlexNet. Replacing LSx by Dα_1 thus improves the accuracy by 14%. The sparse coding projection eliminates "noise", which seems to mostly correspond to intra-class variabilities while carrying little discriminative information between classes. Since Dα_1 is a sparse combination of dictionary columns D_m, each D_m can be interpreted as a "discriminative feature" in the space of scattering coefficients. They are optimized to preserve discriminative directions between classes.
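To make the shapes concrete, the sketch below (illustrative only, with random tensors standing in for LSx and the learned dictionary) applies the ISTC iterations of Equation 5 at every spatial position of a 14 × 14 × 256 input with a 1 × 1 convolutional dictionary of 2048 atoms; a 1 × 1 dictionary is simply a matrix applied along the channel axis:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(14, 14, 256))             # stands in for the normalized LSx
D = rng.normal(size=(256, 2048))
D /= np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm dictionary columns

beta = x.reshape(-1, 256)                      # 196 spatial positions, coded independently
alpha = np.zeros((beta.shape[0], 2048))
lam_max = np.abs(beta @ D).max()
lam_star = 0.1 * lam_max                       # placeholder Lagrange multiplier
N = 12
for n in range(1, N + 1):
    lam_n = lam_max * (lam_max / lam_star) ** (-n / N)
    alpha = np.maximum(alpha + (beta - alpha @ D.T) @ D - lam_n, 0.0)

code = alpha.reshape(14, 14, 2048)             # sparse code alpha_1
recon = (alpha @ D.T).reshape(14, 14, 256)     # reconstruction D alpha_1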
" }, { "heading": "4.2 CONVERGENCE OF HOMOTOPY ALGORITHMS", "text": "To guarantee that the network can be analyzed mathematically, we verify numerically that the homotopy ISTC algorithm computes an accurate approximation of the optimal ℓ1 sparse code in (1), with a small number of iterations.

When W = D, Theorem 3.1 guarantees exponential convergence by imposing a strong incoherence condition s μ(D) < 1/2. In our classification setting, s μ(D) ≈ 60, so the theorem hypothesis is clearly not satisfied. However, this incoherence condition is not necessary. It is derived from a relatively crude upper bound in the proof of Appendix A.1. Figure 4 (left) shows numerically that the ISTC algorithm for W = D minimizes the Lagrangian L(α) = ½ ‖Dα − β‖² + λ* ‖α‖_1 over α ≥ 0, with an exponential convergence which is faster than ISTA and FISTA. This is tested with a dictionary learned by minimizing the classification loss over ImageNet.

If we jointly optimize W and D to minimize the classification loss, then the ImageNet classification accuracy improves from 81.0% to 83.7%. However, Figure 4 (right) shows that the generalized ISTC network outputs a sparse code which does not minimize the ℓ1 Lagrangian at all. Indeed, the learned matrix W does not have a minimum joint coherence with the dictionary D, as in ALISTA (Liu et al., 2019). The joint coherence then becomes very large, with s μ̃ ≈ 300, which prevents the convergence. Computing W by minimizing the joint coherence would require too many computations.

To further compare the convergence speed of ISTC for W = D versus ISTA and FISTA, we compute the relative mean square error MSE(x, y) = ‖x − y‖²/‖x‖² between the optimal sparse code α_1 and the sparse code output of 12 iterations of each of these three algorithms. The MSE is 0.23 for FISTA and 0.45 for ISTA, but only 0.02 for ISTC. In this case, after 12 iterations, ISTC reduces the error by a factor of 10 compared to ISTA and FISTA." }, { "heading": "5 CONCLUSION", "text": "This work shows that learning a single dictionary is sufficient to improve the performance of a predefined scattering representation beyond the accuracy of AlexNet on ImageNet. The resulting deep convolutional network is a scattering transform followed by a positive ℓ1 sparse code, which are well defined mathematical operators. Dictionary vectors capture discriminative directions in the scattering space. The dictionary approximations act as a non-linear projector which removes non-informative intra-class variations.

The dictionary learning is implemented with an ISTC network with ReLUs. We prove exponential convergence in a general framework that includes ALISTA. A sparse scattering network reduces the convolutional network learning to a single dictionary learning problem. It opens the possibility to study the network properties by analyzing the resulting dictionary. It also offers a simpler mathematical framework to analyze optimization issues." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported by the ERC InvariantClass 320959, grants from Région Ile-de-France and the PRAIRIE 3IA Institute of the French ANR-19-P3IA-0001 program. We thank the Scientific Computing Core at the Flatiron Institute for the use of their computing resources.
We would like to thank Eugene Belilovsky for helpful discussions and comments." }, { "heading": "A APPENDIX", "text": "A.1 PROOF OF THEOREM 3.1

Let α_0 be the optimal ℓ0 sparse code. We denote by S(α) the support of any α, and write ρ_λ(a) = ρ(a − λ). We are going to prove by induction on n that for any n ≥ 0 we have S(α^n) ⊂ S(α_0) and ‖α^n − α_0‖_∞ ≤ 2λ_n if λ_n ≥ λ*.

For n = 0, the iterate is α^0 = 0, so S(α^0) = ∅ is indeed included in the support of α_0, and ‖α^0 − α_0‖_∞ = ‖α_0‖_∞. To verify the induction hypothesis for λ_0 = λ_max ≥ λ*, we shall prove that ‖α_0‖_∞ ≤ 2λ_max.

Let us write the error w = β − Dα_0. For all m,

α_0(m) W_m^t D_m = W_m^t β − W_m^t w − Σ_{m' ≠ m} α_0(m') W_m^t D_{m'}.

Since the support of α_0 is smaller than s, W_m^t D_m = 1 and μ̃ = max_{m ≠ m'} |W_m^t D_{m'}|,

|α_0(m)| ≤ |W_m^t β| + |W_m^t w| + s μ̃ ‖α_0‖_∞,

so taking the max over m gives:

‖α_0‖_∞ (1 − μ̃ s) ≤ ‖W^t β‖_∞ + ‖W^t w‖_∞.

But given the inequalities

‖W^t β‖_∞ ≤ λ_max,
‖W^t w‖_∞ ≤ λ_max (1 − 2γ μ̃ s),
(1 − γ μ̃ s) / (1 − μ̃ s) ≤ 1 since γ ≥ 1 and (1 − μ̃ s) > 0,

we get ‖α_0‖_∞ ≤ 2λ_max = 2λ_0.

Let us now suppose that the property is valid for n and let us prove it for n + 1. We denote by D_A the restriction of D to the columns indexed by A. We begin by showing that S(α^{n+1}) ⊂ S(α_0). For any m, since β = Dα_0 + w and W_m^t D_m = 1, we have

α^{n+1}(m) = ρ_{λ_{n+1}}(α^n(m) + W_m^t(β − Dα^n))
= ρ_{λ_{n+1}}(α_0(m) + W_m^t(D_{S(α_0) ∪ S(α^n) − {m}} (α_0 − α^n)_{S(α_0) ∪ S(α^n) − {m}} + w)).

For any m not in S(α_0), let us prove that α^{n+1}(m) = 0. The induction hypothesis assumes that S(α^n) ⊂ S(α_0) and ‖α_0 − α^n‖_∞ ≤ 2λ_n with λ_n ≥ λ*, so:

I = |α_0(m) + W_m^t(D_{S(α_0) ∪ S(α^n) − {m}} (α_0 − α^n)_{S(α_0) ∪ S(α^n) − {m}} + w)|
≤ |W_m^t(D_{S(α_0)} (α_0 − α^n)_{S(α_0)})| + |W_m^t w|   since S(α^n) ⊂ S(α_0) and α_0(m) = 0 by assumption
≤ μ̃ s ‖α_0 − α^n‖_∞ + ‖W^t w‖_∞.

Since we assume that λ_{n+1} ≥ λ*, we have ‖W^t w‖_∞ ≤ (1 − 2γ μ̃ s) λ_{n+1}, and thus

I ≤ μ̃ s ‖α_0 − α^n‖_∞ + ‖W^t w‖_∞ ≤ 2 μ̃ s λ_n + λ_{n+1} (1 − 2γ μ̃ s) ≤ λ_{n+1}   since λ_n = γ λ_{n+1}.

Because of the thresholding ρ_{λ_{n+1}}, this proves that α^{n+1}(m) = 0 and hence that S(α^{n+1}) ⊂ S(α_0).

Let us now evaluate ‖α_0 − α^{n+1}‖_∞. For any scalars (a, b) and threshold λ, a soft thresholding satisfies

|ρ_λ(a + b) − a| ≤ λ + |b|,

so:

|α^{n+1}(m) − α_0(m)| ≤ λ_{n+1} + |W_m^t(D_{S(α_0) ∪ S(α^n) − {m}} (α_0 − α^n)_{S(α_0) ∪ S(α^n) − {m}})| + |W_m^t w|
≤ λ_{n+1} + μ̃ s ‖α_0 − α^n‖_∞ + ‖W^t w‖_∞
≤ λ_{n+1} + 2 μ̃ s λ_n + λ_{n+1} (1 − 2γ μ̃ s) = 2λ_{n+1}.

Taking a max over m proves the induction hypothesis." } ]
2020
null
SP:c457a63633d74f3637f83a95fc2f29bdd01b6411
[ "This paper introduced a latent space model for reinforcement learning in vision-based control tasks. It first learns a latent dynamics model, in which the transition model and the reward model can be learned on the latent state representations. Using the learned latent state representations, it used an actor-critic model to learn a reactive policy to optimize the agent's behaviors in long-horizon continuous control tasks. The method is applied to vision-based continuous control in 20 tasks in the Deepmind control suite. ", "The paper proposes Dreamer, a model-based RL method for high-dimensional inputs such as images. The main novelty in Dreamer is to learn a policy function from latent representation-and-transition models in an end-to-end manner. Specifically, Dreamer is an actor-critic method that learns an optimal policy by backpropagating re-parameterized gradients through a value function, a latent transition model, and a latent representation model. This is unlike existing methods which use model-free or planning methods on simulated trajectories to learn the optimal policy. Meanwhile, Dreamer learns the remaining components, namely a value function, a latent transition model, and a latent representation model, based on existing methods (the world models and PlaNet). Experiments on a large set of continuous control tasks show that Dreamer outperforms existing model-based and model-free methods. " ]
Learned world models summarize an agent’s experience to facilitate learning complex behaviors. While learning world models from high-dimensional sensory inputs is becoming feasible through deep learning, there are many potential ways for deriving behaviors from them. We present Dreamer, a reinforcement learning agent that solves long-horizon tasks from images purely by latent imagination. We efficiently learn behaviors by propagating analytic gradients of learned state values back through trajectories imagined in the compact state space of a learned world model. On 20 challenging visual control tasks, Dreamer exceeds existing approaches in data-efficiency, computation time, and final performance.
[ { "affiliations": [], "name": "LATENT IMAGINATION" }, { "affiliations": [], "name": "Danijar Hafner" }, { "affiliations": [], "name": "Timothy Lillicrap" }, { "affiliations": [], "name": "Mohammad Norouzi" } ]
[ { "authors": [ "A.A. Alemi", "I. Fischer", "J.V. Dillon", "K. Murphy" ], "title": "Deep variational information bottleneck", "venue": "arXiv preprint arXiv:1612.00410,", "year": 2016 }, { "authors": [ "E. Banijamali", "R. Shu", "M. Ghavamzadeh", "H. Bui", "A. Ghodsi" ], "title": "Robust locally-linear controllable embedding", "venue": "arXiv preprint arXiv:1710.05373,", "year": 2017 }, { "authors": [ "G. Barth-Maron", "M.W. Hoffman", "D. Budden", "W. Dabney", "D. Horgan", "A. Muldal", "N. Heess", "T. Lillicrap" ], "title": "Distributed distributional deterministic policy gradients", "venue": "arXiv preprint arXiv:1804.08617,", "year": 2018 }, { "authors": [ "M.G. Bellemare", "Y. Naddaf", "J. Veness", "M. Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Y. Bengio", "N. Léonard", "A. Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,", "year": 2013 }, { "authors": [ "J. Buckman", "D. Hafner", "G. Tucker", "E. Brevdo", "H. Lee" ], "title": "Sample-efficient reinforcement learning with stochastic ensemble value expansion", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "L. Buesing", "T. Weber", "S. Racaniere", "S. Eslami", "D. Rezende", "D.P. Reichert", "F. Viola", "F. Besse", "K. Gregor", "D. Hassabis" ], "title": "Learning and querying fast generative models for reinforcement learning", "venue": "arXiv preprint arXiv:1802.03006,", "year": 2018 }, { "authors": [ "A. Byravan", "J.T. Springenberg", "A. Abdolmaleki", "R. Hafner", "M. Neunert", "T. Lampe", "N. Siegel", "N. Heess", "M. Riedmiller" ], "title": "Imagined value gradients: Model-based policy optimization with transferable latent dynamics models", "venue": "arXiv preprint arXiv:1910.04142,", "year": 2019 }, { "authors": [ "P.S. Castro", "S. Moitra", "C. Gelada", "S. Kumar", "M.G. Bellemare" ], "title": "Dopamine: A research framework for deep reinforcement learning", "venue": "arXiv preprint arXiv:1812.06110,", "year": 2018 }, { "authors": [ "K. Chua", "R. Calandra", "R. McAllister", "S. Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "D.-A. Clevert", "T. Unterthiner", "S. Hochreiter" ], "title": "Fast and accurate deep network learning by exponential linear units (elus)", "venue": "arXiv preprint arXiv:1511.07289,", "year": 2015 }, { "authors": [ "A. Doerr", "C. Daniel", "M. Schiegg", "D. Nguyen-Tuong", "S. Schaal", "M. Toussaint", "S. Trimpe" ], "title": "Probabilistic recurrent state-space models", "venue": "arXiv preprint arXiv:1801.10395,", "year": 2018 }, { "authors": [ "F. Ebert", "C. Finn", "A.X. Lee", "S. Levine" ], "title": "Self-supervised visual planning with temporal skip connections", "venue": "arXiv preprint arXiv:1710.05268,", "year": 2017 }, { "authors": [ "S.A. Eslami", "D.J. Rezende", "F. Besse", "F. Viola", "A.S. Morcos", "M. Garnelo", "A. Ruderman", "A.A. Rusu", "I. Danihelka", "K. Gregor" ], "title": "Neural scene representation and rendering", "venue": null, "year": 2018 }, { "authors": [ "L. Espeholt", "H. Soyer", "R. Munos", "K. Simonyan", "V. Mnih", "T. Ward", "Y. Doron", "V. Firoiu", "T. Harley", "I. 
Dunning" ], "title": "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "arXiv preprint arXiv:1802.01561,", "year": 2018 }, { "authors": [ "V. Feinberg", "A. Wan", "I. Stoica", "M.I. Jordan", "J.E. Gonzalez", "S. Levine" ], "title": "Model-based value estimation for efficient model-free reinforcement learning", "venue": "arXiv preprint arXiv:1803.00101,", "year": 2018 }, { "authors": [ "C. Gelada", "S. Kumar", "J. Buckman", "O. Nachum", "M.G. Bellemare" ], "title": "Deepmdp: Learning continuous latent space models for representation learning", "venue": null, "year": 1906 }, { "authors": [ "K. Gregor", "D.J. Rezende", "F. Besse", "Y. Wu", "H. Merzic", "A. v. d. Oord" ], "title": "Shaping belief states with generative environment models for rl", "venue": null, "year": 1906 }, { "authors": [ "Z.D. Guo", "M.G. Azar", "B. Piot", "B.A. Pires", "T. Pohlen", "R. Munos" ], "title": "Neural predictive belief representations", "venue": "arXiv preprint arXiv:1811.06407,", "year": 2018 }, { "authors": [ "M. Gutmann", "A. Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "T. Haarnoja", "A. Zhou", "P. Abbeel", "S. Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "D. Hafner", "T. Lillicrap", "I. Fischer", "R. Villegas", "D. Ha", "H. Lee", "J. Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "arXiv preprint arXiv:1811.04551,", "year": 2018 }, { "authors": [ "N. Heess", "G. Wayne", "D. Silver", "T. Lillicrap", "T. Erez", "Y. Tassa" ], "title": "Learning continuous control policies by stochastic value gradients", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "M. Henaff", "W.F. Whitney", "Y. LeCun" ], "title": "Model-based planning in discrete action", "venue": "spaces. CoRR,", "year": 2017 }, { "authors": [ "M. Henaff", "W.F. Whitney", "Y. LeCun" ], "title": "Model-based planning with discrete and continuous actions", "venue": "arXiv preprint arXiv:1705.07177,", "year": 2018 }, { "authors": [ "M. Henaff", "A. Canziani", "Y. LeCun" ], "title": "Model-predictive policy learning with uncertainty regularization for driving in dense traffic", "venue": null, "year": 1901 }, { "authors": [ "M. Hessel", "J. Modayil", "H. Van Hasselt", "T. Schaul", "G. Ostrovski", "W. Dabney", "D. Horgan", "B. Piot", "M. Azar", "D. Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "M. Jaderberg", "V. Mnih", "W.M. Czarnecki", "T. Schaul", "J.Z. Leibo", "D. Silver", "K. Kavukcuoglu" ], "title": "Reinforcement learning with unsupervised auxiliary tasks", "venue": "arXiv preprint arXiv:1611.05397,", "year": 2016 }, { "authors": [ "M.I. Jordan", "Z. Ghahramani", "T.S. Jaakkola", "L.K. Saul" ], "title": "An introduction to variational methods for graphical models", "venue": "Machine learning,", "year": 1999 }, { "authors": [ "L. Kaiser", "M. Babaeizadeh", "P. Milos", "B. Osinski", "R.H. Campbell", "K. Czechowski", "D. Erhan", "C. Finn", "P. Kozakowski", "S. 
Levine" ], "title": "Model-based reinforcement learning for atari", "venue": null, "year": 1903 }, { "authors": [ "R.E. Kalman" ], "title": "A new approach to linear filtering and prediction problems", "venue": "Journal of basic Engineering,", "year": 1960 }, { "authors": [ "M. Karl", "M. Soelch", "J. Bayer", "P. van der Smagt" ], "title": "Deep variational bayes filters: Unsupervised learning of state space models from raw data", "venue": "arXiv preprint arXiv:1605.06432,", "year": 2016 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "D.P. Kingma", "M. Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "R.G. Krishnan", "U. Shalit", "D. Sontag" ], "title": "Deep kalman filters", "venue": "arXiv preprint arXiv:1511.05121,", "year": 2015 }, { "authors": [ "T. Kurutach", "I. Clavera", "Y. Duan", "A. Tamar", "P. Abbeel" ], "title": "Model-ensemble trust-region policy optimization", "venue": "arXiv preprint arXiv:1802.10592,", "year": 2018 }, { "authors": [ "Y. LeCun", "B. Boser", "J.S. Denker", "D. Henderson", "R.E. Howard", "W. Hubbard", "L.D. Jackel" ], "title": "Backpropagation applied to handwritten zip code recognition", "venue": "Neural computation,", "year": 1989 }, { "authors": [ "A.X. Lee", "A. Nagabandi", "P. Abbeel", "S. Levine" ], "title": "Stochastic latent actor-critic: Deep reinforcement learning with a latent variable model", "venue": null, "year": 1907 }, { "authors": [ "T.P. Lillicrap", "J.J. Hunt", "A. Pritzel", "N. Heess", "T. Erez", "Y. Tassa", "D. Silver", "D. Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "K. Lowrey", "A. Rajeswaran", "S. Kakade", "E. Todorov", "I. Mordatch" ], "title": "Plan online, learn offline: Efficient learning and exploration via model-based control", "venue": "arXiv preprint arXiv:1811.01848,", "year": 2018 }, { "authors": [ "M.C. Machado", "M.G. Bellemare", "E. Talvitie", "J. Veness", "M. Hausknecht", "M. Bowling" ], "title": "Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2018 }, { "authors": [ "D. McAllester", "K. Statos" ], "title": "Formal limitations on the measurement of mutual information", "venue": "arXiv preprint arXiv:1811.04251,", "year": 2018 }, { "authors": [ "V. Mnih", "K. Kavukcuoglu", "D. Silver", "A.A. Rusu", "J. Veness", "M.G. Bellemare", "A. Graves", "M. Riedmiller", "A.K. Fidjeland", "G. Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "V. Mnih", "A.P. Badia", "M. Mirza", "A. Graves", "T. Lillicrap", "T. Harley", "D. Silver", "K. Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "J. Oh", "S. Singh", "H. Lee" ], "title": "Value prediction network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "A. v. d. Oord", "Y. Li", "O. Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "P. Parmas", "C.E. Rasmussen", "J. Peters", "K. 
Doya" ], "title": "Pipps: Flexible model-based policy search robust to the curse of chaos", "venue": null, "year": 1902 }, { "authors": [ "A. Piergiovanni", "A. Wu", "M.S. Ryoo" ], "title": "Learning real-world robot policies by dreaming", "venue": "arXiv preprint arXiv:1805.07813,", "year": 2018 }, { "authors": [ "B. Poole", "S. Ozair", "A. v. d. Oord", "A.A. Alemi", "G. Tucker" ], "title": "On variational bounds of mutual information", "venue": null, "year": 1905 }, { "authors": [ "D.J. Rezende", "S. Mohamed", "D. Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "arXiv preprint arXiv:1401.4082,", "year": 2014 }, { "authors": [ "J. Schmidhuber" ], "title": "Making the world differentiable: On using self-supervised fully recurrent neural networks for dynamic reinforcement learning and planning in non-stationary environments", "venue": null, "year": 1990 }, { "authors": [ "J. Schrittwieser", "I. Antonoglou", "T. Hubert", "K. Simonyan", "L. Sifre", "S. Schmitt", "A. Guez", "E. Lockhart", "D. Hassabis", "T. Graepel" ], "title": "Mastering atari, go, chess and shogi by planning with a learned model", "venue": "arXiv preprint arXiv:1911.08265,", "year": 2019 }, { "authors": [ "J. Schulman", "F. Wolski", "P. Dhariwal", "A. Radford", "O. Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "D. Silver", "G. Lever", "N. Heess", "T. Degris", "D. Wierstra", "M. Riedmiller" ], "title": "Deterministic policy gradient algorithms", "venue": "In Proceedings of the 31st International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "A. Srinivas", "A. Jabri", "P. Abbeel", "S. Levine", "C. Finn" ], "title": "Universal planning networks", "venue": "arXiv preprint arXiv:1804.00645,", "year": 2018 }, { "authors": [ "R.S. Sutton" ], "title": "Dyna, an integrated architecture for learning, planning, and reacting", "venue": "ACM SIGART Bulletin,", "year": 1991 }, { "authors": [ "R.S. Sutton", "A.G. Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "N. Tishby", "F.C. Pereira", "W. Bialek" ], "title": "The information bottleneck method", "venue": "arXiv preprint physics/0004057,", "year": 2000 }, { "authors": [ "T. Wang", "J. Ba" ], "title": "Exploring model-based planning with policy networks", "venue": "arXiv preprint arXiv:1906.08649,", "year": 2019 }, { "authors": [ "T. Wang", "X. Bao", "I. Clavera", "J. Hoang", "Y. Wen", "E. Langlois", "S. Zhang", "G. Zhang", "P. Abbeel", "J. Ba" ], "title": "Benchmarking model-based reinforcement learning", "venue": null, "year": 1907 }, { "authors": [ "M. Watter", "J. Springenberg", "J. Boedecker", "M. Riedmiller" ], "title": "Embed to control: A locally linear latent dynamics model for control from raw images", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "T. Weber", "S. Racanière", "D.P. Reichert", "L. Buesing", "A. Guez", "D.J. Rezende", "A.P. Badia", "O. Vinyals", "N. Heess", "Y. Li" ], "title": "Imagination-augmented agents for deep reinforcement learning", "venue": "arXiv preprint arXiv:1707.06203,", "year": 2017 }, { "authors": [ "R.J. Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "M. Zhang", "S. Vikram", "L. Smith", "P. Abbeel", "M. Johnson", "S. 
Levine" ], "title": "Solar: deep structured representations for model-based reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Hafner" ], "title": "2019), we fix it to 2 for all environments", "venue": null, "year": 2019 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nIntelligent agents can achieve goals in complex environments even though they never encounter the exact same situation twice. This ability requires building representations of the world from past experience that enable generalization to novel situations. World models offer an explicit way to represent an agent’s knowledge about the world in a parametric model that can make predictions about the future. When the sensory inputs are high-dimensional images, latent dynamics models can abstract observations to predict forward in compact state spaces (Watter et al., 2015; Oh et al., 2017; Gregor et al., 2019). Compared to predictions in image space, latent states have a small memory footprint that enables imagining thousands of trajectories in parallel. Learning effective latent dynamics models is becoming feasible through advances in deep learning and latent variable models (Krishnan et al., 2015; Karl et al., 2016; Doerr et al., 2018; Buesing et al., 2018). Behaviors can be derived from dynamics models in many ways. Often, imagined rewards are maximized with a parametric policy (Sutton, 1991; Ha and Schmidhuber, 2018; Zhang et al., 2019) or by online planning (Chua et al., 2018; Hafner et al., 2018). However, considering only rewards within a fixed imagination horizon results in shortsighted behaviors (Wang et al., 2019). Moreover, prior work commonly resorts to derivative-free optimization for robustness to model errors (Ebert et al., 2017; Chua et al., 2018; Parmas et al., 2019), rather than leveraging analytic gradients offered by neural network dynamics (Henaff et al., 2019; Srinivas et al., 2018). We present Dreamer, an agent that learns long-horizon behaviors from images purely by latent imagination. A novel actor critic algorithm accounts for rewards beyond the imagination horizon while making efficient use of the neural network dynamics. For this, we predict state values and actions in the learned latent space as summarized in Figure 1. The values optimize Bellman consistency for imagined rewards and the policy maximizes the values by propagating their analytic gradients back through the dynamics. In comparison to actor critic algorithms that learn online or by experience replay (Lillicrap et al., 2015; Mnih et al., 2016; Schulman et al., 2017; Haarnoja et al., 2018; Lee et al., 2019), world models can interpolate past experience and offer analytic gradients of multi-step returns for efficient policy optimization.\n∗Correspondence to: Danijar Hafner <mail@danijar.com>.\nThe key contributions of this paper are summarized as follows: • Learning long-horizon behaviors by latent imagination Model-based agents can be short-\nsighted if they use a finite imagination horizon. We approach this limitation by predicting both actions and state values. Training purely by imagination in a latent space lets us efficiently learn the policy by propagating analytic value gradients back through the latent dynamics.\n• Empirical performance for visual control We pair Dreamer with existing representation learning methods and evaluate it on the DeepMind Control Suite with image inputs, illustrated in Figure 2. Using the same hyper parameters for all tasks, Dreamer exceeds previous model-based and model-free agents in terms of data-efficiency, computation time, and final performance." 
}, { "heading": "2 CONTROL WITH WORLD MODELS", "text": "Reinforcement learning We formulate visual control as a partially observable Markov decision process (POMDP) with discrete time step t ∈ [1;T ], continuous vector-valued actions at ∼ p(at | o≤t, a<t) generated by the agent, and high-dimensional observations and scalar rewards ot, rt ∼ p(ot, rt | o<t, a<t) generated by the unknown environment. The goal is to develop an agent that maximizes the expected sum of rewards Ep (∑T t=1 rt ) . Figure 2 shows a selection of our tasks. Agent components The classical components of agents that learn in imagination are dynamics learning, behavior learning, and environment interaction (Sutton, 1991). In the case of Dreamer, the behavior is learned by predicting hypothetical trajectories in the compact latent space of the world model. As outlined in Figure 3 and detailed in Algorithm 1, Dreamer performs the following operations throughout the agent’s life time, either interleaved or in parallel: • Learning the latent dynamics model from the dataset of past experience to predict future re-\nwards from actions and past observations. Any learning objective for the world model can be incorporated with Dreamer. We review existing methods for learning latent dynamics in Section 4.\n• Learning action and value models from predicted latent trajectories, as described in Section 3. The value model optimizes Bellman consistency for imagined rewards and the action model is updated by propagating gradients of value estimates back through the neural network dynamics.\n• Executing the learned action model in the world to collect new experience for growing the dataset. Latent dynamics Dreamer uses a latent dynamics model that consists of three components. The representation model encodes observations and actions to create continuous vector-valued model states st with Markovian transitions (Watter et al., 2015; Zhang et al., 2019; Hafner et al., 2018). The transition model predicts future model states without seeing the corresponding observations that will later cause them. The reward model predicts the rewards given the model states,\nRepresentation model: p(st | st−1, at−1, ot) Transition model: q(st | st−1, at−1) Reward model: q(rt | st).\n(1)\nWe use p for distributions that generate samples in the real environment and q for their approximations that enable latent imagination. Specifically, the transition model lets us predict ahead in the compact latent space without having to observe or imagine the corresponding images. This results in a low memory footprint and fast predictions of thousands of imagined trajectories in parallel. The model mimics a non-linear Kalman filter (Kalman, 1960), latent state space model, or HMM with real-valued states. However, it is conditioned on actions and predicts rewards, allowing the agent to imagine the outcomes of potential action sequences without executing them in the environment.\nobservations and actions into compact latent states ( ), for example via reconstruction, and predicts environment rewards ( ). (b) In the compact latent space, Dreamer predicts state values ( ) and actions ( ) that maximize future value predictions by propagating gradients back through imagined trajectories. (c) The agent encodes the history of the episode to compute the current model state and predict the next action to execute in the environment. See Algorithm 1 for pseudo code of the agent." 
}, { "heading": "3 LEARNING BEHAVIORS BY LATENT IMAGINATION", "text": "Dreamer learns long-horizon behaviors in the compact latent space of a learned world model by efficiently leveraging the neural network latent dynamics. For this, we propagate stochastic gradients of multi-step returns through neural network predictions of actions, states, rewards, and values using reparameterization. This section describes the main contribution of our paper. Imagination environment The latent dynamics define a Markov decision process (MDP; Sutton, 1991) that is fully observed because the compact model states st are Markovian. We denote imagined quantities with τ as the time index. Imagined trajectories start at the true model states st of observation sequences drawn from the agent’s past experience. They follow predictions of the transition model sτ ∼ q(sτ | sτ−1, aτ−1), reward model rτ ∼ q(rτ | sτ ), and a policy aτ ∼ q(aτ | sτ ). The objective is to maximize expected imagined rewards Eq (∑∞ τ=t γ τ−trτ ) with respect to the policy.\nAlgorithm 1: Dreamer\nInitialize dataset D with S random seed episodes. Initialize neural network parameters θ, φ, ψ randomly. while not converged do\nfor update step c = 1..C do // Dynamics learning\nDraw B data sequences {(at, ot, rt)}k+Lt=k ∼ D. Compute model states st ∼ pθ(st | st−1, at−1, ot). Update θ using representation learning.\n// Behavior learning\nImagine trajectories {(sτ , aτ )}t+Hτ=t from each st. Predict rewards E ( qθ(rτ | sτ ) ) and values vψ(sτ ). Compute value estimates Vλ(sτ ) via Equation 6. Update φ← φ+ α∇φ ∑t+H τ=t Vλ(sτ ).\nUpdate ψ← ψ− α∇ψ ∑t+H τ=t 1 2 ∥∥vψ(sτ )9Vλ(sτ )∥∥2. // Environment interaction o1 ← env.reset() for time step t = 1..T do\nCompute st ∼ pθ(st | st−1, at−1, ot) from history. Compute at ∼ qφ(at | st) with the action model. Add exploration noise to action. rt, ot+1 ← env.step(at).\nAdd experience to dataset D ← D ∪ {(ot, at, rt)Tt=1}.\nModel components Representation pθ(st | st-1, at-1, ot) Transition qθ(st | st-1, at-1) Reward qθ(rt | st) Action qφ(at | st) Value vψ(st)\nHyper parameters Seed episodes S Collect interval C Batch size B Sequence length L Imagination horizon H Learning rate α\nAction and value models Consider imagined trajectories with a finite horizon H . Dreamer uses an actor critic approach to learn behaviors that consider rewards beyond the horizon. We learn an action model and a value model in the latent space of the world model for this. The action model implements the policy and aims to predict actions that solve the imagination environment. The value model estimates the expected imagined rewards that the action model achieves from each state sτ ,\nAction model: aτ ∼ qφ(aτ | sτ ) Value model: vψ(sτ ) ≈ Eq(·|sτ ) (∑t+H τ=t γ τ−trτ ) .\n(2)\nThe action and value models are trained cooperatively as typical in policy iteration: the action model aims to maximize an estimate of the value, while the value model aims to match an estimate of the value that changes as the action model changes.\nWe use dense neural networks for the action and value models with parameters φ and ψ, respectively. The action model outputs a tanh-transformed Gaussian (Haarnoja et al., 2018) with sufficient statistics predicted by the neural network. 
This allows for reparameterized sampling (Kingma and Welling, 2013; Rezende et al., 2014) that views sampled actions as deterministically dependent on the neural network output, allowing us to backpropagate analytic gradients through the sampling operation:

a_τ = tanh(μ_φ(s_τ) + σ_φ(s_τ) ε),   ε ∼ Normal(0, I).   (3)

Value estimation To learn the action and value models, we need to estimate the state values of imagined trajectories {s_τ, a_τ, r_τ}_{τ=t}^{t+H}. These trajectories branch off of the model states s_t of sequence batches drawn from the agent's dataset of experience and predict forward for the imagination horizon H using actions sampled from the action model. State values can be estimated in multiple ways that trade off bias and variance (Sutton and Barto, 2018):

V_R(s_τ) ≐ E_{q_θ,q_φ}(Σ_{n=τ}^{t+H} r_n),   (4)

V_N^k(s_τ) ≐ E_{q_θ,q_φ}(Σ_{n=τ}^{h−1} γ^{n−τ} r_n + γ^{h−τ} v_ψ(s_h))   with   h = min(τ + k, t + H),   (5)

V_λ(s_τ) ≐ (1 − λ) Σ_{n=1}^{H−1} λ^{n−1} V_N^n(s_τ) + λ^{H−1} V_N^H(s_τ),   (6)

where the expectations are estimated under the imagined trajectories. V_R simply sums the rewards from τ until the horizon and ignores rewards beyond it. This allows learning the action model without a value model, an ablation we compare to in our experiments. V_N^k estimates rewards beyond k steps with the learned value model. Dreamer uses V_λ, an exponentially-weighted average of the estimates for different k, to balance bias and variance. Figure 4 shows that learning a value model in imagination enables Dreamer to solve long-horizon tasks while being robust to the imagination horizon. The experimental details and results on all tasks are described in Section 6.

Learning objective To update the action and value models, we first compute the value estimates V_λ(s_τ) for all states s_τ along the imagined trajectories. The objective for the action model q_φ(a_τ | s_τ) is to predict actions that result in state trajectories with high value estimates. The objective for the value model v_ψ(s_τ), in turn, is to regress the value estimates:

max_φ E_{q_θ,q_φ}(Σ_{τ=t}^{t+H} V_λ(s_τ)),   (7)

min_ψ E_{q_θ,q_φ}(Σ_{τ=t}^{t+H} ½ ‖v_ψ(s_τ) − V_λ(s_τ)‖²).   (8)

The value model is updated to regress the targets, around which we stop the gradient as is typical (Sutton and Barto, 2018). The action model uses analytic gradients through the learned dynamics to maximize the value estimates. To understand this, we note that the value estimates depend on the reward and value predictions, which depend on the imagined states, which in turn depend on the imagined actions. Since all steps are implemented as neural networks, we analytically compute ∇_φ E_{q_θ,q_φ}(Σ_{τ=t}^{t+H} V_λ(s_τ)) by stochastic backpropagation (Kingma and Welling, 2013; Rezende et al., 2014). We use reparameterization for continuous actions and latent states and straight-through gradients (Bengio et al., 2013) for discrete actions. The world model is fixed while learning behaviors. In tasks with early termination, the world model also predicts the discount factor from each latent state to weigh the time steps in Equations 7 and 8 by the cumulative product of the predicted discount factors, so terms are weighted down based on how likely the imagined trajectory would have ended.
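As a concrete illustration of the value targets, the following numpy sketch (illustrative only; arrays of imagined rewards and value predictions stand in for the learned models) computes the V_λ estimates via the standard backward recursion, which is equivalent to the exponentially weighted average of n-step returns in Equation 6 up to indexing conventions:

import numpy as np

def lambda_returns(rewards, values, gamma=0.99, lam=0.95):
    # rewards: imagined rewards, shape (H,); values: v_psi predictions, shape (H+1,).
    # Recursion: V_lam(tau) = r_tau + gamma * ((1 - lam) * v(tau+1) + lam * V_lam(tau+1)),
    # bootstrapped with the value prediction of the final imagined state.
    H = len(rewards)
    returns = np.empty(H)
    last = values[-1]
    for tau in reversed(range(H)):
        last = rewards[tau] + gamma * ((1 - lam) * values[tau + 1] + lam * last)
        returns[tau] = last
    return returns

print(lambda_returns(np.ones(15), np.zeros(16)))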
Comparison to actor critic methods Agents using Reinforce gradients (Williams, 1992), such as A3C and PPO (Mnih et al., 2016; Schulman et al., 2017), employ value baselines to reduce gradient variance, while Dreamer backpropagates through the value model. This is similar to deterministic or reparameterized actor critics (Silver et al., 2014), such as DDPG and SAC (Lillicrap et al., 2015; Haarnoja et al., 2018). However, these do not leverage gradients through transitions and only maximize immediate Q-values. MVE and STEVE (Feinberg et al., 2018; Buckman et al., 2018) extend them to multi-step Q-learning with learned dynamics to provide more accurate Q-value targets. We predict state values, which is sufficient for policy optimization since we backpropagate through the dynamics. Refer to Section 5 for a more detailed comparison to related work." }, { "heading": "4 LEARNING LATENT DYNAMICS", "text": "Learning behaviors in imagination requires a world model that generalizes well. We focus on latent dynamics models that predict forward in a compact latent space, facilitating long-term predictions and allowing the agent to imagine thousands of trajectories in parallel. Several objectives for learning representations for control have been proposed (Watter et al., 2015; Jaderberg et al., 2016; Oord et al., 2018; Eslami et al., 2018). We review three approaches for learning representations to use with Dreamer: reward prediction, image reconstruction, and contrastive estimation.

Reward prediction Latent imagination requires a representation model p(s_t | s_{t−1}, a_{t−1}, o_t), transition model q(s_t | s_{t−1}, a_{t−1}), and reward model q(r_t | s_t), as described in Section 2. In principle, this could be achieved by simply learning to predict future rewards given actions and past observations (Oh et al., 2017; Gelada et al., 2019; Schrittwieser et al., 2019). With a large and diverse dataset, such representations should be sufficient for solving a control task. However, with a finite dataset, and especially when rewards are sparse, learning about observations that correlate with rewards is likely to improve the world model (Jaderberg et al., 2016; Gregor et al., 2019).

Reconstruction We first describe the world model used by PlaNet (Hafner et al., 2018) that learns latent dynamics by reconstructing images, as shown in Figure 3a. The world model consists of the following components, where the observation model is only used to provide a learning signal:

Representation model: p_θ(s_t | s_{t−1}, a_{t−1}, o_t)
Observation model: q_θ(o_t | s_t)
Reward model: q_θ(r_t | s_t)
Transition model: q_θ(s_t | s_{t−1}, a_{t−1}).   (9)

The components are optimized jointly to increase the variational lower bound (ELBO; Jordan et al., 1999) or, more generally, the variational information bottleneck (VIB; Tishby et al., 2000; Alemi et al., 2016). As derived in Appendix B, the bound includes reconstruction terms for observations and rewards and a KL regularizer. The expectation is taken under the dataset and representation model:

J_REC ≐ E_p(Σ_t (J_O^t + J_R^t + J_D^t)) + const
J_O^t ≐ ln q(o_t | s_t)
J_R^t ≐ ln q(r_t | s_t)
J_D^t ≐ −β KL(p(s_t | s_{t−1}, a_{t−1}, o_t) ‖ q(s_t | s_{t−1}, a_{t−1})).   (10)

We implement the transition model as a recurrent state space model (RSSM; Hafner et al., 2018), the representation model by combining the RSSM with a convolutional neural network (CNN; LeCun et al., 1989) applied to the image observation, the observation model as a transposed CNN, and the reward model as a dense network. The combined parameter vector θ is updated by stochastic backpropagation (Kingma and Welling, 2013; Rezende et al., 2014). Figure 5 shows video predictions of this model. We refer to Appendix A and Hafner et al. (2018) for model details.
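For illustration, the sketch below (a schematic with toy diagonal Gaussians in place of the convolutional networks) evaluates the three terms of Equation 10 for a single time step:

import numpy as np

rng = np.random.default_rng(0)

def gaussian_logpdf(x, mean, std):
    return float(np.sum(-0.5 * ((x - mean) / std) ** 2 - np.log(std) - 0.5 * np.log(2 * np.pi)))

def kl_diag_gaussians(m_p, s_p, m_q, s_q):
    # KL(N(m_p, s_p^2) || N(m_q, s_q^2)) for diagonal Gaussians, summed over dimensions
    return float(np.sum(np.log(s_q / s_p) + (s_p**2 + (m_p - m_q)**2) / (2 * s_q**2) - 0.5))

# Toy posterior (representation model) and prior (transition model) over a 30-dim state;
# in the real model, the decoder statistics below would be computed from a sample of the state.
post_m, post_s = rng.normal(size=30), np.ones(30)
prior_m, prior_s = rng.normal(size=30), 1.5 * np.ones(30)
s_t = post_m + post_s * rng.normal(size=30)     # reparameterized sample of the model state

o, r = rng.normal(size=12), np.array([0.7])     # toy observation and reward
J_O = gaussian_logpdf(o, mean=np.zeros(12), std=np.ones(12))    # ln q(o_t | s_t), placeholder decoder stats
J_R = gaussian_logpdf(r, mean=np.array([0.5]), std=np.ones(1))  # ln q(r_t | s_t)
J_D = -1.0 * kl_diag_gaussians(post_m, post_s, prior_m, prior_s)  # -beta * KL with beta = 1
loss = -(J_O + J_R + J_D)    # minimize the negative bound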
Contrastive estimation Predicting pixels can require high model capacity. We can also encourage mutual information between model states and observations by instead predicting the states from the images (Guo et al., 2018). This replaces the observation model with a state model:

State model: q_θ(s_t | o_t).   (11)

While the reconstruction objective used the fact that the observation marginal is a constant, we now face the state marginal. As shown in Appendix B, this can be estimated via noise contrastive estimation (NCE; Gutmann and Hyvärinen, 2010; Oord et al., 2018) by averaging the state model over observations o′ of the current sequence batch. Intuitively, q(s_t | o_t) makes the state predictable from the current image while ln Σ_{o′} q(s_t | o′) keeps it diverse to prevent collapse:

J_NCE ≐ E(Σ_t (J_S^t + J_R^t + J_D^t))
J_S^t ≐ ln q(s_t | o_t) − ln(Σ_{o′} q(s_t | o′)).   (12)

We implement the state model as a CNN and again optimize the bound with respect to the combined parameter vector θ using stochastic backpropagation. While avoiding pixel prediction, the amount of information this bound can extract efficiently is limited (McAllester and Statos, 2018). We empirically compare reward, reconstruction, and contrastive objectives in our experiments in Figure 8."
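A minimal numpy sketch of the contrastive term J_S in Equation 12 (illustrative only; a toy Gaussian scorer with a placeholder encoder stands in for the CNN state model), averaged over a batch of observations:

import numpy as np

rng = np.random.default_rng(0)

def log_q_state(s, o):
    # Toy state model ln q(s | o): Gaussian density of s centered at a linear embedding of o.
    mean = o[:8]                                  # placeholder encoder
    return float(np.sum(-0.5 * (s - mean) ** 2 - 0.5 * np.log(2 * np.pi)))

obs = rng.normal(size=(16, 32))                   # batch of observations
states = np.stack([o[:8] + 0.1 * rng.normal(size=8) for o in obs])

def info_nce(states, obs):
    # J_S = ln q(s_t | o_t) - ln sum_{o'} q(s_t | o'), where o' ranges over the batch.
    total = 0.0
    for s, o in zip(states, obs):
        log_scores = np.array([log_q_state(s, o_neg) for o_neg in obs])
        m = log_scores.max()                      # stable log-sum-exp
        total += log_q_state(s, o) - (np.log(np.sum(np.exp(log_scores - m))) + m)
    return total / len(obs)

print(info_nce(states, obs))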
}, { "heading": "5 RELATED WORK", "text": "Prior works learn latent dynamics for visual control by derivative-free policy learning or online planning, augment model-free agents with multi-step predictions, or use analytic gradients of Q-values or multi-step rewards, often for low-dimensional tasks. In comparison, Dreamer uses analytic gradients to efficiently learn long-horizon behaviors for visual control purely by latent imagination.

Control with latent dynamics E2C (Watter et al., 2015) and RCE (Banijamali et al., 2017) embed images to predict forward in a compact space to solve simple tasks. World Models (Ha and Schmidhuber, 2018) learn latent dynamics in a two-stage process to evolve linear controllers in imagination. PlaNet (Hafner et al., 2018) learns them jointly and solves visual locomotion tasks by latent online planning. SOLAR (Zhang et al., 2019) solves robotic tasks via guided policy search in latent space. I2A (Weber et al., 2017) hands imagined trajectories to a model-free policy, while Lee et al. (2019) and Gregor et al. (2019) learn belief representations to accelerate model-free agents.

Imagined multi-step returns VPN (Oh et al., 2017), MVE (Feinberg et al., 2018), and STEVE (Buckman et al., 2018) learn dynamics for multi-step Q-learning from a replay buffer. AlphaGo (Silver et al., 2017) combines predictions of actions and state values with planning, assuming access to the true dynamics. Also assuming access to the dynamics, POLO (Lowrey et al., 2018) plans to explore by learning a value ensemble. MuZero (Schrittwieser et al., 2019) learns task-specific reward and value models to solve challenging tasks but requires large amounts of experience. PETS (Chua et al., 2018), VisualMPC (Ebert et al., 2017), and PlaNet (Hafner et al., 2018) plan online using derivative-free optimization. POPLIN (Wang and Ba, 2019) improves over online planning by self-imitation. Piergiovanni et al. (2018) learn robot policies by imagination with a latent dynamics model. Planning with neural network gradients was shown on small problems (Schmidhuber, 1990; Henaff et al., 2018) but has been challenging to scale (Parmas et al., 2019).

Analytic value gradients DPG (Silver et al., 2014), DDPG (Lillicrap et al., 2015), and SAC (Haarnoja et al., 2018) leverage gradients of learned immediate action values to learn a policy by experience replay. SVG (Heess et al., 2015) reduces the variance of model-free on-policy algorithms by analytic value gradients of one-step model predictions. Concurrent work by Byravan et al. (2019) uses latent imagination with deterministic models for navigation and manipulation tasks. ME-TRPO (Kurutach et al., 2018) accelerates an otherwise model-free agent via gradients of predicted rewards for proprioceptive inputs. DistGBP (Henaff et al., 2017; 2019) uses model gradients for online planning in simple tasks." }, { "heading": "6 EXPERIMENTS", "text": "We experimentally evaluate Dreamer on a variety of control tasks. We designed the experiments to compare Dreamer to current best methods in the literature, and to evaluate its ability to solve tasks with long horizons, continuous actions, discrete actions, and early termination. We further compare the orthogonal choice of learning objective for the world model. The source code for all our experiments and videos of Dreamer are available at https://danijar.com/dreamer.

Control tasks We evaluate Dreamer on 20 visual control tasks of the DeepMind Control Suite (Tassa et al., 2018), illustrated in Figure 2. These tasks pose a variety of challenges, including sparse rewards, contact dynamics, and 3D scenes. We selected the tasks on which Tassa et al. (2018) report non-zero performance from image inputs. Agent observations are images of shape 64 × 64 × 3, actions range from 1 to 12 dimensions, rewards range from 0 to 1, episodes last for 1000 steps and have randomized initial states. We use a fixed action repeat of R = 2 across tasks. We further evaluate the applicability of Dreamer to discrete actions and early termination on a subset of Atari games (Bellemare et al., 2013) and DeepMind Lab levels (Beattie et al., 2016), as detailed in Appendix C.

Implementation Our implementation uses TensorFlow Probability (Dillon et al., 2017). We use a single Nvidia V100 GPU and 10 CPU cores for each training run. The training time for our Dreamer implementation is below 5 hours per 10^6 environment steps on the control suite, compared to 11 hours for online planning using PlaNet, and the 24 hours used by D4PG to reach similar performance. We use the same hyper parameters across all continuous tasks, and similarly across all discrete tasks, as detailed in Appendix A. The world models are learned via reconstruction unless specified otherwise.

Baseline methods The highest reported performance on the continuous tasks is achieved by D4PG (Barth-Maron et al., 2018), an improved variant of DDPG (Lillicrap et al., 2015) that uses distributed collection, distributional Q-learning, multi-step returns, and prioritized replay. We include the scores for D4PG with pixel inputs and A3C (Mnih et al., 2016) with state inputs from Tassa et al. (2018). PlaNet (Hafner et al., 2018) learns the same world model as Dreamer and selects actions via online planning without an action model, drastically improving over D4PG and A3C in data efficiency. We re-run PlaNet with R = 2 for a unified experimental setup.
For Atari, we show the final performance of SimPLe (Kaiser et al., 2019), DQN (Mnih et al., 2015) and Rainbow (Hessel et al., 2018) reported by Castro et al. (2018), and for DeepMind Lab that of IMPALA (Espeholt et al., 2018) as a guideline.

Performance To evaluate the performance of Dreamer, we compare it to state-of-the-art reinforcement learning agents. The results are summarized in Figure 6. With an average score of 823 across tasks after 5 × 10^6 environment steps, Dreamer exceeds the performance of the strong model-free D4PG agent, which achieves an average of 786 within 10^9 environment steps. At the same time, Dreamer inherits the data-efficiency of PlaNet, confirming that the learned world model can help to generalize from small amounts of experience. The empirical success of Dreamer shows that learning behaviors by latent imagination with world models can outperform top methods based on experience replay.

Long horizons To investigate its ability to learn long-horizon behaviors, we compare Dreamer to alternatives for deriving behaviors from the world model at various horizon lengths. For this, we learn an action model to maximize imagined rewards without a value model and compare to online planning using PlaNet. Figure 4 shows the final performance for different imagination horizons, confirming that the value model makes Dreamer more robust to the horizon and that it performs well even for short horizons. Performance curves for all tasks with a horizon of 20 are shown in Appendix D, where Dreamer outperforms the alternatives on 16 of 20 tasks, with 4 ties.

Representation learning Dreamer can be used with any differentiable dynamics model that predicts future rewards given actions and past observations. Since the representation learning objective is orthogonal to our algorithm, we compare three natural choices described in Section 4: pixel reconstruction, contrastive estimation, and pure reward prediction. Figure 8 shows clear differences in task performance for different representation learning approaches, with pixel reconstruction outperforming contrastive estimation on most tasks. This suggests that future improvements in representation learning are likely to translate to higher task performance with Dreamer. Reward prediction alone was not sufficient in our experiments. Further ablations are included in the appendix of the paper." }, { "heading": "7 CONCLUSION", "text": "We present Dreamer, an agent that learns long-horizon behaviors purely by latent imagination. For this, we propose an actor critic method that optimizes a parametric policy by propagating analytic gradients of multi-step values back through learned latent dynamics. Dreamer outperforms previous methods in data-efficiency, computation time, and final performance on a variety of challenging continuous control tasks with image inputs. We further show that Dreamer is applicable to tasks with discrete actions and early episode termination. Future research on representation learning can likely scale latent imagination to environments of higher visual complexity.

Acknowledgements We thank Simon Kornblith, Benjamin Eysenbach, Ian Fischer, Amy Zhang, Geoffrey Hinton, Shane Gu, Adam Kosiorek, Jacob Buckman, Calvin Luo, and Rishabh Agarwal, and our anonymous reviewers for feedback and discussions. We thank Yuval Tassa for adding the quadruped environment to the control suite."
}, { "heading": "A HYPER PARAMETERS", "text": "Model components We use the convolutional encoder and decoder networks from Ha and Schmidhuber (2018), the RSSM of Hafner et al. (2018), and implement all other functions as three dense layers of size 300 with ELU activations (Clevert et al., 2015). Distributions in latent space are 30-dimensional diagonal Gaussians. The action model outputs a tanh mean scaled by a factor of 5 and a softplus standard deviation for the Normal distribution that is then transformed using tanh (Haarnoja et al., 2018). The scaling factor allows the agent to saturate the action distribution. Learning updates We draw batches of 50 sequences of length 50 to train the world model, value model, and action model models using Adam (Kingma and Ba, 2014) with learning rates 6× 10−4, 8× 10−5, 8× 10−5, respectively and scale down gradient norms that exceed 100. We do not scale the KL regularizers (β = 1) but clip them below 3 free nats as in PlaNet. The imagination horizon is H = 15 and the same trajectories are used to update both action and value models. We compute the Vλ targets with γ = 0.99 and λ = 0.95. We did not find latent overshooting for learning the model, an entropy bonus for the action model, or target networks for the value model necessary. Environment interaction The dataset is initialized with S = 5 episodes collected using random actions. We iterate between 100 training steps and collecting 1 episode by executing the predicted mode action with Normal(0, 0.3) exploration noise. Instead of manually selecting the action repeat for each environment as in Hafner et al. (2018) and Lee et al. (2019), we fix it to 2 for all environments. See Figure 12 for an assessment of the robustness to different action repeat values. Discrete control For experiments on Atari games and DeepMind Lab levels, the action model predicts the logits of a categorical distribution. We use straight-through gradients for the sampling step during latent imagination. The action noise is epsilon greedy where is linearly scheduled from 0.4→ 0.1 over the first 200, 000 gradient steps. To account for the higher complexity of these tasks, we use an imagination horizon of H = 10, scale the KL regularizers by β = 0.1, and bound rewards using tanh. We predict the discount factor from the latent state with a binary classifier that is trained towards the soft labels of 0 and γ." }, { "heading": "B DERIVATIONS", "text": "We define the information bottleneck objective (Tishby et al., 2000) for latent dynamics models,\nmax I(s1:T ; (o1:T , r1:T ) | a1:T )− β I(s1:T , i1:T | a1:T ), (13)\nwhere β is scalar and it are dataset indices that determine the observations p(ot | it) . = δ(ot − ōt) as in Alemi et al. (2016). Maximizing the objective leads to model states that can predict the sequence of observations and rewards while limiting the amount of information extracted at each time step. This encourages the model to reconstruct each image by relying on information extracted at preceeding time steps to the extent possible, and only accessing additional information from the current image when necessary. As a result, the information regularizer encourages the model to learn long-term dependencies. 
For the generative objective, we lower bound the first term using the non-negativity of the KL divergence and drop the marginal data probability as it does not depend on the representation model,\n\nI(s_{1:T}; (o_{1:T}, r_{1:T}) \mid a_{1:T})\n= \mathbb{E}_{p(o_{1:T}, r_{1:T}, s_{1:T}, a_{1:T})}\big[ \ln p(o_{1:T}, r_{1:T} \mid s_{1:T}, a_{1:T}) - \underbrace{\ln p(o_{1:T}, r_{1:T} \mid a_{1:T})}_{\text{const}} \big]\n\overset{+}{=} \mathbb{E}\big[ \ln p(o_{1:T}, r_{1:T} \mid s_{1:T}, a_{1:T}) \big]\n\geq \mathbb{E}\big[ \ln p(o_{1:T}, r_{1:T} \mid s_{1:T}, a_{1:T}) \big] - \mathrm{KL}\Big( p(o_{1:T}, r_{1:T} \mid s_{1:T}, a_{1:T}) \,\Big\|\, \textstyle\prod_t q(o_t \mid s_t)\, q(r_t \mid s_t) \Big)\n= \mathbb{E}\Big[ \textstyle\sum_t \ln q(o_t \mid s_t) + \ln q(r_t \mid s_t) \Big] \qquad (14)\n\nFor the contrastive objective, we subtract the constant marginal probability of the data under the variational encoder, apply Bayes rule, and use the InfoNCE mini-batch bound (Poole et al., 2019),\n\n\mathbb{E}\big[ \ln q(o_t \mid s_t) + \ln q(r_t \mid s_t) \big]\n\overset{+}{=} \mathbb{E}\big[ \ln q(o_t \mid s_t) - \ln q(o_t) + \ln q(r_t \mid s_t) \big]\n= \mathbb{E}\big[ \ln q(s_t \mid o_t) - \ln q(s_t) + \ln q(r_t \mid s_t) \big]\n\geq \mathbb{E}\Big[ \ln q(s_t \mid o_t) - \ln \textstyle\sum_{o'} q(s_t \mid o') + \ln q(r_t \mid s_t) \Big] \qquad (15)\n\nFor the second term, we use the non-negativity of the KL divergence to obtain an upper bound,\n\nI(s_{1:T}; i_{1:T} \mid a_{1:T})\n= \mathbb{E}_{p(o_{1:T}, r_{1:T}, s_{1:T}, a_{1:T}, i_{1:T})}\Big[ \textstyle\sum_t \ln p(s_t \mid s_{t-1}, a_{t-1}, i_t) - \ln p(s_t \mid s_{t-1}, a_{t-1}) \Big]\n= \mathbb{E}\Big[ \textstyle\sum_t \ln p(s_t \mid s_{t-1}, a_{t-1}, o_t) - \ln p(s_t \mid s_{t-1}, a_{t-1}) \Big]\n\leq \mathbb{E}\Big[ \textstyle\sum_t \ln p(s_t \mid s_{t-1}, a_{t-1}, o_t) - \ln q(s_t \mid s_{t-1}, a_{t-1}) \Big]\n= \mathbb{E}\Big[ \textstyle\sum_t \mathrm{KL}\big( p(s_t \mid s_{t-1}, a_{t-1}, o_t) \,\big\|\, q(s_t \mid s_{t-1}, a_{t-1}) \big) \Big] \qquad (16)\n\nCombining the lower bound on the first term with the upper bound on the second term lower bounds the overall objective." }, { "heading": "C DISCRETE CONTROL", "text": "We evaluate Dreamer on a subset of tasks with discrete actions from the Atari suite (Bellemare et al., 2013) and DeepMind Lab (Beattie et al., 2016). While agents that purely learn through world models are not yet competitive in these domains (Kaiser et al., 2019), the tasks offer a diverse test bed with visual complexity, sparse rewards, and early termination. Agents observe 64 × 64 × 3 images and select one of between 3 and 18 actions. For Atari, we follow the evaluation protocol of Machado et al. (2018) with sticky actions. Refer to Figure 9 for these experiments." }, { "heading": "D BEHAVIOR LEARNING", "text": "" }, { "heading": "E REPRESENTATION LEARNING", "text": "" }, { "heading": "F ACTION REPEAT", "text": "" }, { "heading": "G CONTINUOUS CONTROL SCORES", "text": "A3C D4PG PlaNet¹ Dreamer\nModality proprio pixels pixels pixels\nSteps 10^9 10^9 5 × 10^6 5 × 10^6\n\nAcrobot Swingup 41.90 91.70 3.21 365.26\nCartpole Balance 951.60 992.80 452.56 979.56\nCartpole Balance Sparse 857.40 1000.00 164.74 941.84\nCartpole Swingup 558.40 862.00 312.56 833.66\nCartpole Swingup Sparse 179.80 482.00 0.64 812.22\nCheetah Run 213.90 523.80 496.12 894.56\nCup Catch 104.70 980.50 455.98 962.48\nFinger Spin 129.40 985.70 495.25 498.88\nFinger Turn Easy 167.30 971.40 451.22 825.86\nFinger Turn Hard 88.70 966.00 312.55 891.38\nHopper Hop 0.50 242.00 0.37 368.97\nHopper Stand 27.90 929.90 5.96 923.72\nPendulum Swingup 48.60 680.90 3.27 833.00\nQuadruped Run − − 280.45 888.39\nQuadruped Walk − − 238.90 931.61\nReacher Easy 95.60 967.40 468.50 935.08\nReacher Hard 39.70 957.10 187.02 817.05\nWalker Run 191.80 567.20 626.25 824.67\nWalker Stand 378.40 985.20 759.19 977.99\nWalker Walk 311.00 968.30 944.70 961.67\n\nAverage 243.70 786.32 332.97 823.39\n\n¹We re-run PlaNet with a fixed action repeat of R = 2 to avoid tuning this value for each of the 20 tasks. As a result, the scores differ from Hafner et al. (2018)." } ]
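The KL terms in Equation 16 above have a closed form for the 30-dimensional diagonal Gaussians used by the model. Below is a small sketch of that computation, including the free-nats clipping from Appendix A; all names are hypothetical and this is not the paper's implementation.

```python
import numpy as np

def diag_gaussian_kl(mu_p, std_p, mu_q, std_q):
    """Closed-form KL( N(mu_p, diag(std_p^2)) || N(mu_q, diag(std_q^2)) ), summed over dims."""
    var_p, var_q = std_p ** 2, std_q ** 2
    return np.sum(np.log(std_q / std_p) + (var_p + (mu_p - mu_q) ** 2) / (2.0 * var_q) - 0.5)

rng = np.random.default_rng(0)
mu_post, std_post = rng.normal(size=30), np.full(30, 0.5)    # p(s_t | s_{t-1}, a_{t-1}, o_t)
mu_prior, std_prior = rng.normal(size=30), np.full(30, 1.0)  # q(s_t | s_{t-1}, a_{t-1})
kl = diag_gaussian_kl(mu_post, std_post, mu_prior, std_prior)
print(max(kl, 3.0))  # clip below 3 free nats, so small divergences incur no gradient
```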
2020
null
SP:761efdd848e9b8f43b17473ad774449ae002eeb3
[ "Authors offer an alternative for masked LM pretraining that's more sample-efficient called replaced token detection. Their method basically replaces certain input tokens with alternatives which are sampled from a generator and train a discriminative model to determine whether its generated or real. The work shows empirical success getting better results than GPT with a fraction of the compute on GLUE and others.", "The paper proposed a novel sample-efficient pretraining task. One inefficiency of BERT is that only 15% tokens are used for training in each example. The paper introduced a generator+discriminator framework to optimize the utility of training examples. The generator task is the MLM which predicts the masked word. The author adds a discriminator to further learn from the example by classifying each word to be either generated or original. In this way, more words can be used. This method looks as only adding the discrimination task after BERT pretraining task. But, the authors later show that the best GLUE scores can be obtained only when both generator and discriminator are co-trained. Moreover, the adversarial ELECTRA perform worse. All these observations are interesting. It will be helpful if the authors provide more empirical analysis why the adversarial ELECTRA perform worse or failed. Is it because the GAN is hard to train or the adversarial task doesn't fit the pretraining? " ]
Masked language modeling (MLM) pre-training methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a more sample-efficient pre-training task called replaced token detection. Instead of masking the input, our approach corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments demonstrate this new pre-training task is more efficient than MLM because the task is defined over all input tokens rather than just the small subset that was masked out. As a result, the contextual representations learned by our approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale, where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when using the same amount of compute.
[ { "affiliations": [], "name": "DISCRIMINATORS RATHER" }, { "affiliations": [], "name": "THAN GENERATORS" }, { "affiliations": [], "name": "Kevin Clark" } ]
[ { "authors": [ "Antoine Bordes", "Nicolas Usunier", "Alberto Garcı́a-Durán", "Jason Weston", "Oksana Yakhnenko" ], "title": "Translating embeddings for modeling multi-relational data", "venue": "In NeurIPS,", "year": 2013 }, { "authors": [ "Avishek Joey Bose", "Huan Ling", "Yanshuai Cao" ], "title": "Adversarial contrastive estimation", "venue": "In ACL,", "year": 2018 }, { "authors": [ "Massimo Caccia", "Lucas Caccia", "William Fedus", "Hugo Larochelle", "Joelle Pineau", "Laurent Charlin" ], "title": "Language GANs falling short", "venue": "arXiv preprint arXiv:1811.02549,", "year": 2018 }, { "authors": [ "Daniel M. Cer", "Mona T. Diab", "Eneko Agirre", "Iñigo Lopez-Gazpio", "Lucia Specia" ], "title": "Semeval2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation", "venue": "SemEval@ACL,", "year": 2017 }, { "authors": [ "Sumit Chopra", "Raia Hadsell", "Yann LeCun" ], "title": "Learning a similarity metric discriminatively, with application to face verification", "venue": null, "year": 2005 }, { "authors": [ "Kevin Clark", "Minh-Thang Luong", "Urvashi Khandelwal", "Christopher D. Manning", "Quoc V. Le" ], "title": "BAM! Born-again multi-task networks for natural language understanding", "venue": null, "year": 2019 }, { "authors": [ "Ronan Collobert", "Jason Weston", "Léon Bottou", "Michael Karlen", "Koray Kavukcuoglu", "Pavel P. Kuksa" ], "title": "Natural language processing (almost) from scratch", "venue": null, "year": 2011 }, { "authors": [ "Andrew M Dai", "Quoc V Le" ], "title": "Semi-supervised sequence learning", "venue": "In NeurIPS,", "year": 2015 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "In NAACL-HLT,", "year": 2019 }, { "authors": [ "William B. Dolan", "Chris Brockett" ], "title": "Automatically constructing a corpus of sentential paraphrases", "venue": "In IWP@IJCNLP,", "year": 2005 }, { "authors": [ "Li Dong", "Nan Yang", "Wenhui Wang", "Furu Wei", "Xiaodong Liu", "Yu Wang", "Jianfeng Gao", "Ming Zhou", "Hsiao-Wuen Hon" ], "title": "Unified language model pre-training for natural language understanding and generation", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "William Fedus", "Ian J. Goodfellow", "Andrew M. Dai" ], "title": "MaskGAN: Better text generation via filling in the", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Danilo Giampiccolo", "Bernardo Magnini", "Ido Dagan", "William B. Dolan" ], "title": "The third pascal recognizing textual entailment challenge", "venue": "In ACL-PASCAL@ACL,", "year": 2007 }, { "authors": [ "Ian J. Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron C. 
Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "NeurIPS,", "year": 2014 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "In AISTATS,", "year": 2010 }, { "authors": [ "Jeremy Howard", "Sebastian Ruder" ], "title": "Universal language model fine-tuning for text classification", "venue": "In ACL,", "year": 2018 }, { "authors": [ "Xiaoqi Jiao", "Yichun Yin", "Lifeng Shang", "Xin Jiang", "Xiao Chen", "Linlin Li", "Fang Wang", "Qun Liu" ], "title": "Tinybert: Distilling bert for natural language understanding", "venue": null, "year": 1909 }, { "authors": [ "Mandar Joshi", "Danqi Chen", "Yinhan Liu", "Daniel S Weld", "Luke Zettlemoyer", "Omer Levy" ], "title": "SpanBERT: Improving pre-training by representing and predicting spans", "venue": null, "year": 1907 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "ALBERT: A lite bert for self-supervised learning of language representations", "venue": null, "year": 1909 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "RoBERTa: A robustly optimized BERT pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Tomas Mikolov", "Kai Chen", "Gregory S. Corrado", "Jeffrey Dean" ], "title": "Efficient estimation of word representations in vector space", "venue": "In ICLR Workshop Papers,", "year": 2013 }, { "authors": [ "Robert Parker", "David Graff", "Junbo Kong", "Ke Chen", "Kazuaki Maeda" ], "title": "English gigaword, fifth edition", "venue": "Technical report, Linguistic Data Consortium, Philadelphia,", "year": 2011 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In EMNLP,", "year": 2014 }, { "authors": [ "Matthew E Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": null, "year": 2018 }, { "authors": [ "Jason Phang", "Thibault Févry", "Samuel R Bowman" ], "title": "Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks", "venue": "arXiv preprint arXiv:1811.01088,", "year": 2018 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training", "venue": null, "year": 2018 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy S. 
Liang" ], "title": "Squad: 100, 000+ questions for machine comprehension of text", "venue": null, "year": 2016 }, { "authors": [ "Nikunj Saunshi", "Orestis Plevrakis", "Sanjeev Arora", "Mikhail Khodak", "Hrishikesh Khandeparkar" ], "title": "A theoretical analysis of contrastive unsupervised representation learning", "venue": null, "year": 2019 }, { "authors": [ "Pierre Sermanet", "Corey Lynch", "Yevgen Chebotar", "Jasmine Hsu", "Eric Jang", "Stefan Schaal", "Sergey Levine" ], "title": "Time-contrastive networks: Self-supervised learning from video", "venue": null, "year": 2017 }, { "authors": [ "Noah A. Smith", "Jason Eisner" ], "title": "Contrastive estimation: Training log-linear models on unlabeled data", "venue": "In ACL,", "year": 2005 }, { "authors": [ "Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D. Manning", "Andrew Y. Ng", "Christopher Potts" ], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "In EMNLP,", "year": 2013 }, { "authors": [ "Kaitao Song", "Xu Tan", "Tao Qin", "Jianfeng Lu", "Tie-Yan Liu" ], "title": "MASS: Masked sequence to sequence pre-training for language generation", "venue": null, "year": 2019 }, { "authors": [ "Yu Sun", "Shuohuan Wang", "Yukun Li", "Shikun Feng", "Xuyi Chen", "Han Zhang", "Xin Tian", "Danxiang Zhu", "Hao Tian", "Hua Wu" ], "title": "Ernie: Enhanced representation through knowledge integration", "venue": "arXiv preprint arXiv:1904.09223,", "year": 2019 }, { "authors": [ "Zhiqing Sun", "Hongkun Yu", "Xiaodan Song", "Renjie Liu", "Yiming Yang", "Denny Zhou" ], "title": "MobileBERT: Task-agnostic compression of bert for resource limited devices, 2019b. URL https: //openreview.net/forum?id=SJxjVaNKwB", "venue": null, "year": 2019 }, { "authors": [ "Guy Tevet", "Gavriel Habib", "Vered Shwartz", "Jonathan Berant" ], "title": "Evaluating text gans as language models", "venue": "In NAACL-HLT,", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Pascal Vincent", "Hugo Larochelle", "Yoshua Bengio", "Pierre-Antoine Manzagol" ], "title": "Extracting and composing robust features with denoising autoencoders", "venue": "In ICML,", "year": 2008 }, { "authors": [ "Alex Wang", "Amapreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R Bowman" ], "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Xiaolong Wang", "Abhinav Gupta" ], "title": "Unsupervised learning of visual representations using videos", "venue": null, "year": 2015 }, { "authors": [ "Alex Warstadt", "Amanpreet Singh", "Samuel R. Bowman" ], "title": "Neural network acceptability judgments", "venue": "arXiv preprint arXiv:1805.12471,", "year": 2018 }, { "authors": [ "Adina Williams", "Nikita Nangia", "Samuel R. Bowman" ], "title": "A broad-coverage challenge corpus for sentence understanding through inference", "venue": "In NAACL-HLT,", "year": 2018 }, { "authors": [ "Ronald J. 
Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine Learning,", "year": 1992 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Ruslan Salakhutdinov", "Quoc V Le" ], "title": "XLNet: Generalized autoregressive pretraining for language understanding", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Lantao Yu", "Weinan Zhang", "Jun Wang", "Yingrui Yu" ], "title": "SeqGAN: Sequence generative adversarial nets with policy gradient", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Yizhe Zhang", "Zhe Gan", "Kai Fan", "Zhi Chen", "Ricardo Henao", "Dinghan Shen", "Lawrence Carin" ], "title": "Adversarial feature matching for text generation", "venue": null, "year": 2017 }, { "authors": [ "Yukun Zhu", "Ryan Kiros", "Richard S. Zemel", "Ruslan Salakhutdinov", "Raquel Urtasun", "Antonio Torralba", "Sanja Fidler" ], "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Current state-of-the-art representation learning methods for language can be viewed as learning denoising autoencoders (Vincent et al., 2008). They select a small subset of the unlabeled input sequence (typically 15%), mask the identities of those tokens (e.g., BERT; Devlin et al. (2019)) or attention to those tokens (e.g., XLNet; Yang et al. (2019)), and then train the network to recover the original input. While more effective than conventional language-model pre-training due to learning bidirectional representations, these masked language modeling (MLM) approaches incur a substantial compute cost because the network only learns from 15% of the tokens per example.\nAs an alternative, we propose replaced token detection, a pre-training task in which the model learns to distinguish real input tokens from plausible but synthetically generated replacements. Instead of masking, our method corrupts the input by replacing some tokens with samples from a proposal distribution, which is typically the output of a small masked language model. This corruption procedure solves a mismatch in BERT (although not in XLNet) where the network sees artificial [MASK] tokens during pre-training but not when being fine-tuned on downstream tasks. We then pre-train the network as a discriminator that predicts for every token whether it is an original or a replacement. In contrast, MLM trains the network as a generator that predicts the original identities of the corrupted tokens. A key advantage of our discriminative task is that the model learns from all input tokens instead of just the small masked-out subset, making it more computationally efficient. Although our\napproach is reminiscent of training the discriminator of a GAN, our method is not adversarial in that the generator producing corrupted tokens is trained with maximum likelihood due to the difficulty of applying GANs to text (Caccia et al., 2018).\nWe call our approach ELECTRA1 for “Efficiently Learning an Encoder that Classifies Token Replacements Accurately.” As in prior work, we apply it to pre-train Transformer text encoders (Vaswani et al., 2017) that can be fine-tuned on downstream tasks. Through a series of ablations, we show that learning from all input positions causes ELECTRA to train much faster than BERT. We also show ELECTRA achieves higher accuracy on downstream tasks when fully trained.\nMost current pre-training methods require large amounts of compute to be effective, raising concerns about their cost and accessibility. Since pre-training with more compute almost always results in better downstream accuracies, we argue an important consideration for pre-training methods should be compute efficiency as well as absolute downstream performance. From this viewpoint, we train ELECTRA models of various sizes and evaluate their downstream performance vs. their compute requirement. In particular, we run experiments on the GLUE natural language understanding benchmark (Wang et al., 2019) and SQuAD question answering benchmark (Rajpurkar et al., 2016). ELECTRA substantially outperforms MLM-based methods such as BERT and XLNet given the same model size, data, and compute (see Figure 1). For example, we build an ELECTRA-Small model that can be trained on 1 GPU in 4 days.2 ELECTRA-Small outperforms a comparably small BERT model by 5 points on GLUE, and even outperforms the much larger GPT model (Radford et al., 2018). 
Our approach also works well at large scale, where we train an ELECTRA-Large model that performs comparably to RoBERTa (Liu et al., 2019) and XLNet (Yang et al., 2019), despite having fewer parameters and using 1/4 of the compute for training. Training ELECTRA-Large further results in an even stronger model that outperforms ALBERT (Lan et al., 2019) on GLUE and sets a new state-of-the-art for SQuAD 2.0. Taken together, our results indicate that the discriminative task of distinguishing real data from challenging negative samples is more compute-efficient and parameter-efficient than existing generative approaches for language representation learning." }, { "heading": "2 METHOD", "text": "We first describe the replaced token detection pre-training task; see Figure 2 for an overview. We suggest and evaluate several modeling improvements for this method in Section 3.2.\n\n1Code and pre-trained weights will be released at https://github.com/google-research/electra\n\n2It has 1/20th the parameters and requires 1/135th the pre-training compute of BERT-Large.\n\nOur approach trains two neural networks, a generator G and a discriminator D. Each one primarily consists of an encoder (e.g., a Transformer network) that maps a sequence of input tokens x = [x_1, ..., x_n] into a sequence of contextualized vector representations h(x) = [h_1, ..., h_n]. For a given position t (in our case, only positions where x_t = [MASK]), the generator outputs a probability for generating a particular token x_t with a softmax layer:\n\np_G(x_t \mid x) = \exp\big(e(x_t)^\top h_G(x)_t\big) \Big/ \sum_{x'} \exp\big(e(x')^\top h_G(x)_t\big)\n\nwhere e denotes token embeddings. For a given position t, the discriminator predicts whether the token x_t is “real,” i.e., that it comes from the data rather than the generator distribution, with a sigmoid output layer:\n\nD(x, t) = \mathrm{sigmoid}\big(w^\top h_D(x)_t\big)\n\nThe generator is trained to perform masked language modeling (MLM). Given an input x = [x_1, x_2, ..., x_n], MLM first selects a random set of positions (integers between 1 and n) to mask out m = [m_1, ..., m_k].3 The tokens in the selected positions are replaced with a [MASK] token: we denote this as x^{masked} = REPLACE(x, m, [MASK]). The generator then learns to predict the original identities of the masked-out tokens. The discriminator is trained to distinguish tokens in the data from tokens that have been replaced by generator samples. More specifically, we create a corrupted example x^{corrupt} by replacing the masked-out tokens with generator samples and train the discriminator to predict which tokens in x^{corrupt} match the original input x. Formally, model inputs are constructed according to\n\nm_i \sim \mathrm{unif}\{1, n\} \;\text{for}\; i = 1 \;\text{to}\; k\nx^{masked} = \mathrm{REPLACE}(x, m, \texttt{[MASK]})\n\hat{x}_i \sim p_G(x_i \mid x^{masked}) \;\text{for}\; i \in m\nx^{corrupt} = \mathrm{REPLACE}(x, m, \hat{x})\n\nand the loss functions are\n\n\mathcal{L}_{\text{MLM}}(x, \theta_G) = \mathbb{E}\Big(\sum_{i \in m} -\log p_G(x_i \mid x^{masked})\Big)\n\n\mathcal{L}_{\text{Disc}}(x, \theta_D) = \mathbb{E}\Big(\sum_{t=1}^{n} -\mathbb{1}(x^{corrupt}_t = x_t)\log D(x^{corrupt}, t) - \mathbb{1}(x^{corrupt}_t \neq x_t)\log\big(1 - D(x^{corrupt}, t)\big)\Big)\n\nAlthough similar to the training objective of a GAN, there are several key differences. First, if the generator happens to generate the correct token, that token is considered “real” instead of “fake”; we found this formulation to moderately improve results on downstream tasks. More importantly, the generator is trained with maximum likelihood rather than being trained adversarially to fool the discriminator.
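To make the construction above concrete, here is a toy NumPy sketch of building x^masked and x^corrupt and evaluating both losses for a single example. The generator and discriminator are stubbed out with random distributions, and MASK_ID and all other names are hypothetical; this is not the authors' released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, MASK_ID = 1000, 0  # hypothetical vocabulary size and [MASK] id

def generator_probs(x_masked, i):
    """Stand-in for the small masked LM: a softmax p_G(. | x_masked) at position i."""
    logits = rng.normal(size=VOCAB)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def replaced_token_detection(x, mask_frac=0.15):
    n = len(x)
    k = int(np.ceil(mask_frac * n))
    m = rng.choice(n, size=k, replace=False)           # positions to mask out
    x_masked = x.copy(); x_masked[m] = MASK_ID         # REPLACE(x, m, [MASK])
    x_corrupt, probs = x.copy(), {}
    for i in m:
        probs[i] = generator_probs(x_masked, i)
        x_corrupt[i] = rng.choice(VOCAB, p=probs[i])   # sample a plausible replacement
    # L_MLM: the generator predicts the original ids at masked positions only.
    loss_mlm = -sum(np.log(probs[i][x[i]]) for i in m)
    # L_Disc: the discriminator labels every token; a sampled token that happens
    # to equal the original counts as "real".
    labels = (x_corrupt == x).astype(float)
    d = np.full(n, 0.5)                                # stand-in discriminator outputs
    loss_disc = -np.sum(labels * np.log(d) + (1 - labels) * np.log(1 - d))
    return loss_mlm, loss_disc

print(replaced_token_detection(rng.integers(1, VOCAB, size=20)))
```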
Adversarially training the generator is challenging because it is impossible to backpropagate through sampling from the generator. Although we experimented with circumventing this issue by using reinforcement learning to train the generator (see Appendix F), this performed worse than maximum-likelihood training. Lastly, we do not supply the generator with a noise vector as input, as is typical with a GAN.\n\n3Typically k = ⌈0.15n⌉, i.e., 15% of the tokens are masked out.\n\nWe minimize the combined loss\n\n\min_{\theta_G, \theta_D} \sum_{x \in \mathcal{X}} \mathcal{L}_{\text{MLM}}(x, \theta_G) + \lambda \mathcal{L}_{\text{Disc}}(x, \theta_D)\n\nover a large corpus X of raw text. We approximate the expectations in the losses with a single sample. We don’t back-propagate the discriminator loss through the generator (indeed, we can’t because of the sampling step). After pre-training, we throw out the generator and fine-tune the discriminator on downstream tasks." }, { "heading": "3 EXPERIMENTS", "text": "" }, { "heading": "3.1 EXPERIMENTAL SETUP", "text": "We evaluate on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019) and Stanford Question Answering (SQuAD) dataset (Rajpurkar et al., 2016). GLUE contains a variety of tasks covering textual entailment (RTE and MNLI), question-answer entailment (QNLI), paraphrase (MRPC), question paraphrase (QQP), textual similarity (STS), sentiment (SST), and linguistic acceptability (CoLA). See Appendix C for more details on the GLUE tasks. Our evaluation metrics are Spearman correlation for STS, Matthews correlation for CoLA, and accuracy for the other GLUE tasks; we generally report the average score over all tasks. For SQuAD, we evaluate on versions 1.1, in which models select the span of text answering a question, and 2.0, in which some questions are unanswerable by the passage. We use the standard evaluation metrics of Exact-Match (EM) and F1 scores. For most experiments we pre-train on the same data as BERT, which consists of 3.3 billion tokens from Wikipedia and BooksCorpus (Zhu et al., 2015). However, for our Large model we pre-trained on the data used for XLNet (Yang et al., 2019), which extends the BERT dataset to 33B tokens by including data from ClueWeb (Callan et al., 2009), CommonCrawl, and Gigaword (Parker et al., 2011). All of the pre-training and evaluation is on English data, although we think it would be interesting to apply our methods to multilingual data in the future.\n\nOur model architecture and most hyperparameters are the same as BERT’s. For fine-tuning on GLUE, we add simple linear classifiers on top of ELECTRA. For SQuAD, we add the question-answering module from XLNet on top of ELECTRA, which is slightly more sophisticated than BERT’s in that it jointly rather than independently predicts the start and end positions and has an “answerability” classifier added for SQuAD 2.0. Some of our evaluation datasets are small, which means accuracies of fine-tuned models can vary substantially depending on the random seed. We therefore report the median of 10 fine-tuning runs from the same pre-trained checkpoint for each result. Unless stated otherwise, results are on the dev set. See the appendix for further training details and hyperparameter values." }, { "heading": "3.2 MODEL EXTENSIONS", "text": "We improve our method by proposing and evaluating several extensions to the model. Unless stated otherwise, these experiments use the same model size and training data as BERT-Base.\n\nWeight Sharing We propose improving the efficiency of the pre-training by sharing weights between the generator and discriminator. 
If the generator and discriminator are the same size, all of the transformer weights can be tied. However, we found it to be more efficient to have a small generator, in which case we only share the embeddings (both the token and positional embeddings) of the generator and discriminator. In this case we use embeddings the size of the discriminator’s hidden states.4 The “input” and “output” token embeddings of the generator are always tied as in BERT.\n\nWe compare the weight tying strategies when the generator is the same size as the discriminator. We train these models for 500k steps. GLUE scores are 83.6 for no weight tying, 84.3 for tying token embeddings, and 84.4 for tying all weights. We hypothesize that ELECTRA benefits from tied token embeddings because masked language modeling is particularly effective at learning these representations: while the discriminator only updates tokens that are present in the input or are sampled by the generator, the generator’s softmax over the vocabulary densely updates all token embeddings. On the other hand, tying all encoder weights caused little improvement while incurring the significant disadvantage of requiring the generator and discriminator to be the same size. Based on these findings, we use tied embeddings for further experiments in this paper.\n\n4We add linear layers to the generator to project the embeddings into generator-hidden-sized representations.\n\nSmaller Generators If the generator and discriminator are the same size, training ELECTRA would take around twice as much compute per step as training only with masked language modeling. We suggest using a smaller generator to reduce this factor. Specifically, we make models smaller by decreasing the layer sizes while keeping the other hyperparameters constant. We also explore using an extremely simple “unigram” generator that samples fake tokens according to their frequency in the train corpus. GLUE scores for differently-sized generators and discriminators are shown in the left of Figure 3. All models are trained for 500k steps, which puts the smaller generators at a disadvantage in terms of compute because they require less compute per training step. Nevertheless, we find that models work best with generators 1/4-1/2 the size of the discriminator. We speculate that too strong a generator may pose a too-challenging task for the discriminator, preventing it from learning as effectively. In particular, the discriminator may have to use many of its parameters modeling the generator rather than the actual data distribution. Further experiments in this paper use the best generator size found for the given discriminator size.\n\nTraining Algorithms Lastly, we explore other training algorithms for ELECTRA, although these did not end up improving results. The proposed training objective jointly trains the generator and discriminator. We experiment with instead using the following two-stage training procedure:\n\n1. Train only the generator with L_MLM for n steps.\n2. Initialize the weights of the discriminator with the weights of the generator. Then train the discriminator with L_Disc for n steps, keeping the generator’s weights frozen.\n\nNote that the weight initialization in this procedure requires having the same size for the generator and discriminator. We found that without the weight initialization the discriminator would sometimes fail to learn at all beyond the majority class, perhaps because the generator started so far ahead of the discriminator. 
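For contrast with the two-stage procedure, here is a hypothetical PyTorch-style sketch of a single joint update on one example, using λ = 50 as in Appendix A. `TinyEncoder` is a stand-in for the Transformer encoders, and the sampling step shows why no gradient from the discriminator loss reaches the generator; none of these names come from the released code.

```python
import torch
import torch.nn.functional as F

VOCAB, MASK_ID, N_HID = 1000, 0, 64                      # hypothetical sizes

class TinyEncoder(torch.nn.Module):                      # stand-in for a Transformer
    def __init__(self, out_dim):
        super().__init__()
        self.emb = torch.nn.Embedding(VOCAB, N_HID)
        self.out = torch.nn.Linear(N_HID, out_dim)
    def forward(self, x):
        return self.out(self.emb(x))

generator = TinyEncoder(VOCAB)                           # outputs MLM logits per token
discriminator = TinyEncoder(1)                           # outputs one logit per token

def joint_loss(x, m, lam=50.0):
    x_masked = x.clone(); x_masked[m] = MASK_ID
    gen_logits = generator(x_masked)                     # [n, vocab]
    loss_mlm = F.cross_entropy(gen_logits[m], x[m])      # generator trained by MLE
    with torch.no_grad():                                # sampling is non-differentiable,
        samples = torch.distributions.Categorical(       # so L_Disc cannot back-propagate
            logits=gen_logits[m]).sample()               # into the generator
    x_corrupt = x.clone(); x_corrupt[m] = samples
    labels = (x_corrupt == x).float()                    # original tokens count as "real"
    disc_logits = discriminator(x_corrupt).squeeze(-1)   # [n]
    loss_disc = F.binary_cross_entropy_with_logits(disc_logits, labels)
    return loss_mlm + lam * loss_disc                    # combined objective

x = torch.randint(1, VOCAB, (20,))
m = torch.randperm(20)[:3]
print(joint_loss(x, m))
```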
Joint training on the other hand naturally provides a curriculum for the discriminator where the generator starts off weak but gets better throughout training. We also explored training the generator adversarially as in a GAN, using reinforcement learning to accommodate the discrete operations of sampling from the generator. See Appendix F for details.\nResults are shown in the right of Figure 3. During two-stage training, downstream task performance notably improves after the switch from the generative to the discriminative objective, but does not end up outscoring joint training. Although still outperforming BERT, we found adversarial training to underperform maximum-likelihood training. Further analysis suggests the gap is caused by two\nproblems with adversarial training. First, the adversarial generator is simply worse at masked language modeling; it achieves 58% accuracy at masked language modeling compared to 65% accuracy for an MLE-trained one. We believe the worse accuracy is mainly due to the poor sample efficiency of reinforcement learning when working in the large action space of generating text. Secondly, the adversarially trained generator produces a low-entropy output distribution where most of the probability mass is on a single token, which means there is not much diversity in the generator samples. Both of these problems have been observed in GANs for text in prior work (Caccia et al., 2018)." }, { "heading": "3.3 SMALL MODELS", "text": "As a goal of this work is to improve the efficiency of pre-training, we develop a small model that can be quickly trained on a single GPU. Starting with the BERT-Base hyperparameters, we shortened the sequence length (from 512 to 128), reduced the batch size (from 256 to 128), reduced the model’s hidden dimension size (from 768 to 256), and used smaller token embeddings (from 768 to 128). To provide a fair comparison, we also train a BERT-Small model using the same hyperparameters. We train BERT-Small for 1.5M steps, so it uses the same training FLOPs as ELECTRA-Small, which was trained for 1M steps.5 In addition to BERT, we compare against two less resource-intensive pre-training methods based on language modeling: ELMo (Peters et al., 2018) and GPT (Radford et al., 2018).6 We also show results for a base-sized ELECTRA model comparable to BERT-Base.\nResults are shown in Table 1. See Appendix D for additional results, including stronger small-sized and base-sized models trained with more compute. ELECTRA-Small performs remarkably well given its size, achieving a higher GLUE score than other methods using substantially more compute and parameters. For example, it scores 5 points higher than a comparable BERT-Small model and even outperforms the much larger GPT model. ELECTRA-Small is trained mostly to convergence, with models trained for even less time (as little as 6 hours) still achieving reasonable performance. While small models distilled from larger pre-trained transformers can also achieve good GLUE scores (Sun et al., 2019b; Jiao et al., 2019), these models require first expending substantial compute to pre-train the larger teacher model. The results also demonstrate the strength of ELECTRA at a moderate size; our base-sized ELECTRA model substantially outperforms BERT-Base and even outperforms BERT-Large (which gets 84.0 GLUE score). 
We hope ELECTRA’s ability to achieve strong results with relatively little compute will broaden the accessibility of developing and applying pre-trained models in NLP.\n\n5ELECTRA requires more FLOPs per step because it consists of the generator as well as the discriminator. 6GPT is similar in size to BERT-Base, but is trained for fewer steps." }, { "heading": "3.4 LARGE MODELS", "text": "We train big ELECTRA models to measure the effectiveness of the replaced token detection pre-training task at the large scale of current state-of-the-art pre-trained Transformers. Our ELECTRA-Large models are the same size as BERT-Large but are trained for much longer. In particular, we train a model for 400k steps (ELECTRA-400K; roughly 1/4 the pre-training compute of RoBERTa) and one for 1.75M steps (ELECTRA-1.75M; similar compute to RoBERTa). We use a batch size of 2048 and the XLNet pre-training data. We note that although the XLNet data is similar to the data used to train RoBERTa, the comparison is not entirely direct. As a baseline, we trained our own BERT-Large model using the same hyperparameters and training time as ELECTRA-400K.\n\nResults on the GLUE dev set are shown in Table 2. ELECTRA-400K performs comparably to RoBERTa and XLNet. However, it took less than 1/4 of the compute to train ELECTRA-400K as it did to train RoBERTa and XLNet, demonstrating that ELECTRA’s sample-efficiency gains hold at large scale. Training ELECTRA for longer (ELECTRA-1.75M) results in a model that outscores them on most GLUE tasks while still requiring less pre-training compute. Surprisingly, our baseline BERT model scores notably worse than RoBERTa-100K, suggesting our models may benefit from more hyperparameter tuning or using the RoBERTa training data. ELECTRA’s gains hold on the GLUE test set (see Table 3), although these comparisons are less apples-to-apples due to the additional tricks employed by the models (see Appendix B).\n\nResults on SQuAD are shown in Table 4. Consistent with the GLUE results, ELECTRA scores better than masked-language-modeling-based methods given the same compute resources. For example, ELECTRA-400K outperforms RoBERTa-100k and our BERT baseline, which use similar amounts of pre-training compute. ELECTRA-400K also performs comparably to RoBERTa-500K despite using less than 1/4th of the compute. Unsurprisingly, training ELECTRA longer improves results further: ELECTRA-1.75M scores higher than previous models on the SQuAD 2.0 benchmark. ELECTRA-Base also yields strong results, scoring substantially better than BERT-Base and XLNet-Base, and even surpassing BERT-Large according to most metrics. ELECTRA generally performs better at SQuAD 2.0 than 1.1. Perhaps replaced token detection, in which the model distinguishes real tokens from plausible fakes, is particularly transferable to the answerability classification of SQuAD 2.0, in which the model must distinguish answerable questions from fake unanswerable questions." }, { "heading": "3.5 EFFICIENCY ANALYSIS", "text": "We have suggested that posing the training objective over a small subset of tokens makes masked language modeling inefficient. However, it isn’t entirely obvious that this is the case. After all, the model still receives a large number of input tokens even though it predicts only a small number of masked tokens. 
To better understand where the gains from ELECTRA are coming from, we compare a series of other pre-training objectives that are designed to be a set of “stepping stones” between BERT and ELECTRA.\n\n• ELECTRA 15%: This model is identical to ELECTRA except the discriminator loss only comes from the 15% of the tokens that were masked out of the input. In other words, the sum in the discriminator loss L_Disc is over i ∈ m instead of from 1 to n.7\n\n• Replace MLM: This objective is the same as masked language modeling except instead of replacing masked-out tokens with [MASK], they are replaced with tokens from a generator model. This objective tests to what extent ELECTRA’s gains come from solving the discrepancy of exposing the model to [MASK] tokens during pre-training but not fine-tuning.\n\n• All-Tokens MLM: Like in Replace MLM, masked tokens are replaced with generator samples. Furthermore, the model predicts the identity of all tokens in the input, not just ones that were masked out. We found it improved results to train this model with an explicit copy mechanism that outputs a copy probability D for each token using a sigmoid layer. The model’s output distribution puts weight D on the input token plus (1 − D) times the output of the MLM softmax. This model is essentially a combination of BERT and ELECTRA. Note that without generator replacements, the model would trivially learn to make predictions from the vocabulary for [MASK] tokens and copy the input for other ones.\n\nResults are shown in Table 5. First, we find that ELECTRA is greatly benefiting from having a loss defined over all input tokens rather than just a subset: ELECTRA 15% performs much worse than ELECTRA. Secondly, we find that BERT performance is being slightly harmed from the pre-train fine-tune mismatch from [MASK] tokens, as Replace MLM slightly outperforms BERT. We note that BERT (including our implementation) already includes a trick to help with the pre-train/fine-tune discrepancy: masked tokens are replaced with a random token 10% of the time and are kept the same 10% of the time. However, our results suggest these simple heuristics are insufficient to fully solve the issue. Lastly, we find that All-Tokens MLM, the generative model that makes predictions over all tokens instead of a subset, closes most of the gap between BERT and ELECTRA. In total, these results suggest a large amount of ELECTRA’s improvement can be attributed to learning from all tokens and a smaller amount can be attributed to alleviating the pre-train fine-tune mismatch.\n\n7We also trained a discriminator that learns from a random 15% of the input tokens distinct from the subset that was originally masked out; this model performed slightly worse.\n\nThe improvement of ELECTRA over All-Tokens MLM suggests that ELECTRA’s gains come from more than just faster training. We study this further by comparing BERT to ELECTRA for various model sizes (see Figure 4, left). We find that the gains from ELECTRA grow larger as the models get smaller. The small models are trained fully to convergence (see Figure 4, right), showing that ELECTRA achieves higher downstream accuracy than BERT when fully trained. We speculate that ELECTRA is more parameter-efficient than BERT because it does not have to model the full distribution of possible tokens at each position, but we believe more analysis is needed to completely explain ELECTRA’s parameter efficiency." 
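To pin down the copy mechanism of the All-Tokens MLM variant described above, here is a small NumPy sketch of mixing the copy probability D with the MLM softmax; the shapes and names are hypothetical.

```python
import numpy as np

def all_tokens_mlm_output(input_ids, mlm_probs, copy_prob):
    """Per-position output distribution: D * one_hot(input token) + (1 - D) * MLM softmax.

    input_ids: [n] ints, mlm_probs: [n, vocab] softmax rows, copy_prob: [n] sigmoid outputs.
    """
    n, vocab = mlm_probs.shape
    one_hot = np.eye(vocab)[input_ids]  # [n, vocab] copy of the input tokens
    return copy_prob[:, None] * one_hot + (1 - copy_prob)[:, None] * mlm_probs

rng = np.random.default_rng(0)
probs = rng.random((5, 10)); probs /= probs.sum(axis=1, keepdims=True)
out = all_tokens_mlm_output(np.array([1, 2, 3, 4, 5]), probs, rng.random(5))
print(out.sum(axis=1))  # each row still sums to 1, since D + (1 - D) = 1
```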
}, { "heading": "4 RELATED WORK", "text": "Self-Supervised Pre-training for NLP Self-supervised learning has been used to learn word representations (Collobert et al., 2011; Pennington et al., 2014) and more recently contextual representations of words though objectives such as language modeling (Dai & Le, 2015; Peters et al., 2018; Howard & Ruder, 2018). BERT (Devlin et al., 2019) pre-trains a large Transformer (Vaswani et al., 2017) at the masked-language modeling task. There have been numerous extensions to BERT. For example, MASS (Song et al., 2019) and UniLM (Dong et al., 2019) extend BERT to generation tasks by adding auto-regressive generative training objectives. ERNIE (Sun et al., 2019a) and SpanBERT (Joshi et al., 2019) mask out contiguous sequences of token for improved span representations. This idea may be complementary to ELECTRA; we think it would be interesting to make ELECTRA’s generator auto-regressive and add a “replaced span detection” task. Instead of masking out input tokens, XLNet (Yang et al., 2019) masks attention weights such that the input sequence is autoregressively generated in a random order. However, this method suffers from the same inefficiencies as BERT because XLNet only generates 15% of the input tokens in this way. Like ELECTRA, XLNet may alleviate BERT’s pretrain-finetune discrepancy by not requiring [MASK] tokens, although this isn’t entirely clear because XLNet uses two “streams” of attention during pre-training but only one for fine-tuning. Recently, models such as TinyBERT (Jiao et al., 2019) and MobileBERT (Sun et al., 2019b) show that BERT can effectively be distilled down to a smaller model. In contrast, we focus more on pre-training speed rather than inference speed, so we train ELECTRA-Small from scratch.\nGenerative Adversarial Networks GANs (Goodfellow et al., 2014) are effective at generating high-quality synthetic data. Radford et al. (2016) propose using the discriminator of a GAN in downstream tasks, which is similar to our method. GANs have been applied to text data (Yu et al., 2017; Zhang et al., 2017), although state-of-the-art approaches still lag behind standard maximumlikelihood training (Caccia et al., 2018; Tevet et al., 2018). Although we do not use adversarial learning, our generator is particularly reminiscent of MaskGAN (Fedus et al., 2018), which trains the generator to fill in tokens deleted from the input.\nContrastive Learning Broadly, contrastive learning methods distinguish observed data points from fictitious negative samples. They have been applied to many modalities including text (Smith & Eisner, 2005), images (Chopra et al., 2005), and video (Wang & Gupta, 2015; Sermanet et al., 2017) data. Common approaches learn embedding spaces where related data points are similar (Saunshi et al., 2019) or models that rank real data points over negative samples (Collobert et al., 2011; Bordes et al., 2013). ELECTRA is particularly related to Noise-Contrastive Estimation (NCE) (Gutmann & Hyvärinen, 2010), which also trains a binary classifier to distinguish real and fake data points.\nWord2Vec (Mikolov et al., 2013), one of the earliest pre-training methods for NLP, uses contrastive learning. In fact, ELECTRA can be viewed as a massively scaled-up version of Continuous Bagof-Words (CBOW) with Negative Sampling. CBOW also predicts an input token given surrounding context and negative sampling rephrases the learning task as a binary classification task on whether the input token comes from the data or proposal distribution. 
However, CBOW uses a bag-of-vectors encoder rather than a transformer and a simple proposal distribution derived from unigram token frequencies instead of a learned generator." }, { "heading": "5 CONCLUSION", "text": "We have proposed replaced token detection, a new self-supervised task for language representation learning. The key idea is training a text encoder to distinguish input tokens from high-quality negative samples produced by a small generator network. Compared to masked language modeling, our pre-training objective is more compute-efficient and results in better performance on downstream tasks. It works well even when using relatively small amounts of compute, which we hope will make developing and applying pre-trained text encoders more accessible to researchers and practitioners with less access to computing resources. We also hope more future work on NLP pre-training will consider efficiency as well as absolute performance, and follow our effort in reporting compute usage and parameter counts along with evaluation metrics." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We thank Allen Nie, Prajit Ramachandran, audiences at the CIFAR LMB meeting and U. de Montréal, and the anonymous reviewers for their thoughtful comments and suggestions. We thank Matt Peters for answering our questions about ELMo, Alec Radford for answers about GPT, Naman Goyal and Myle Ott for answers about RoBERTa, Zihang Dai for answers about XLNet, Zhenzhong Lan for answers about ALBERT, and Danqi Chen and Mandar Joshi for answers about SpanBERT. Kevin is supported by a Google PhD Fellowship." }, { "heading": "A PRE-TRAINING DETAILS", "text": "The following details apply to both our ELECTRA models and BERT baselines. We mostly use the same hyperparameters as BERT. We set λ, the weight for the discriminator objective in the loss, to 50.8 We use dynamic token masking with the masked positions decided on-the-fly instead of during preprocessing. Also, we did not use the next sentence prediction objective proposed in the original BERT paper, as recent work has suggested it does not improve scores (Yang et al., 2019; Liu et al., 2019). For our ELECTRA-Large model, we used a higher mask percentage (25% instead of 15%) because we noticed the generator was achieving high accuracy with 15% masking, resulting in very few replaced tokens. We searched for the best learning rate for the Base and Small models out of [1e-4, 2e-4, 3e-4, 5e-4] and selected λ out of [1, 10, 20, 50, 100] in early experiments. Otherwise we did no hyperparameter tuning beyond the experiments in Section 3.2. The full set of hyperparameters is listed in Table 6.\n\n8As a binary classification task instead of the 30,000-way classification task in MLM, the discriminator’s loss was typically much lower than the generator’s." }, { "heading": "B FINE-TUNING DETAILS", "text": "For Large-sized models, we used the hyperparameters from Clark et al. (2019) for the most part. However, after noticing that RoBERTa (Liu et al., 2019) uses more training epochs (up to 10 rather than 3) we searched for the best number of train epochs out of [10, 3] for each task. For SQuAD, we decreased the number of train epochs to 2 to be consistent with BERT and RoBERTa. For Base-sized models we searched for a learning rate out of [3e-5, 5e-5, 1e-4, 1.5e-4] and the layer-wise learning-rate decay out of [0.9, 0.8, 0.7], but otherwise used the same hyperparameters as for Large models. We found the small models benefit from a larger learning rate and searched for the best one out of [1e-4, 2e-4, 3e-4, 5e-3]. With the exception of number of train epochs, we used the same hyperparameters for all tasks. 
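The layer-wise learning-rate decay searched over above can be sketched as scaling the base rate by a constant factor for each layer of depth below the top; a minimal illustration with hypothetical parameter grouping (the paper does not specify the exact assignment):

```python
def layerwise_lrs(n_layers, base_lr=1e-4, decay=0.8):
    """Top layer gets base_lr; each layer below it is scaled by `decay`."""
    # Layer n_layers - 1 is the top layer, closest to the task-specific head.
    return {f"layer_{i}": base_lr * decay ** (n_layers - 1 - i) for i in range(n_layers)}

print(layerwise_lrs(4))  # deeper (earlier) layers receive smaller learning rates
```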
In contrast, previous research on GLUE such as BERT, XLNet, and RoBERTa separately searched for the best hyperparameters for each task. We expect our results would improve slightly if we performed the same sort of additional hyperparameter search. The full set of hyperparameters is listed in Table 7.\n\nFollowing BERT, we do not show results on the WNLI GLUE task for the dev set results, as it is difficult to beat even the majority classifier using a standard fine-tuning-as-classifier approach. For the GLUE test set results, we apply the standard tricks used by many of the GLUE leaderboard submissions including RoBERTa (Liu et al., 2019), XLNet (Yang et al., 2019), and ALBERT (Lan et al., 2019). Specifically:\n\n• For RTE and STS we use intermediate task training (Phang et al., 2018), starting from an ELECTRA checkpoint that has been fine-tuned on MNLI. For RTE, we found it helpful to combine this with a lower learning rate of 2e-5.\n\n• For WNLI, we follow the trick described in Liu et al. (2019) where we extract candidate antecedents for the pronoun using rules and train a model to score the correct antecedent highly. However, different from Liu et al. (2019), the scoring function is not based on MLM probabilities. Instead, we fine-tune ELECTRA’s discriminator so it assigns high scores to the tokens of the correct antecedent when the correct antecedent replaces the pronoun. For example, if the Winograd schema is “the trophy could not fit in the suitcase because it was too big,” we train the discriminator so it gives a high score to “trophy” in “the trophy could not fit in the suitcase because the trophy was too big” but a low score to “suitcase” in “the trophy could not fit in the suitcase because the suitcase was too big.”\n\n• For each task we ensemble the best 10 of 30 models fine-tuned with different random seeds but initialized from the same pre-trained checkpoint.\n\nWhile these tricks do improve scores, they make having clear scientific comparisons more difficult because they require extra work to implement, require lots of compute, and make results less apples-to-apples because different papers implement the tricks differently. We therefore also report results for ELECTRA-1.75M with the only trick being dev-set model selection (best of 10 models), which is the setting BERT used to report results, in Table 8.\n\nFor our SQuAD 2.0 test set submission, we fine-tuned 20 models from the same pre-trained checkpoint and submitted the one with the best dev set score." }, { "heading": "C DETAILS ABOUT GLUE", "text": "We provide further details about the GLUE benchmark tasks below:\n\n• CoLA: Corpus of Linguistic Acceptability (Warstadt et al., 2018). The task is to determine whether a given sentence is grammatical or not. The dataset contains 8.5k train examples from books and journal articles on linguistic theory.\n\n• SST: Stanford Sentiment Treebank (Socher et al., 2013). The task is to determine if the sentence is positive or negative in sentiment. The dataset contains 67k train examples from movie reviews.\n\n• MRPC: Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005). The task is to predict whether two sentences are semantically equivalent or not. The dataset contains 3.7k train examples from online news sources.\n\n• STS: Semantic Textual Similarity (Cer et al., 2017). 
The task is to predict how semantically similar two sentences are on a 1-5 scale. The dataset contains 5.8k train examples drawn from news headlines, video and image captions, and natural language inference data.\n\n• QQP: Quora Question Pairs (Iyer et al., 2017). The task is to determine whether a pair of questions are semantically equivalent. The dataset contains 364k train examples from the community question-answering website Quora.\n\n• MNLI: Multi-genre Natural Language Inference (Williams et al., 2018). Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis, contradicts the hypothesis, or neither. The dataset contains 393k train examples drawn from ten different sources.\n\n• QNLI: Question Natural Language Inference; constructed from SQuAD (Rajpurkar et al., 2016). The task is to predict whether a context sentence contains the answer to a question sentence. The dataset contains 108k train examples from Wikipedia.\n\n• RTE: Recognizing Textual Entailment (Giampiccolo et al., 2007). Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis or not. The dataset contains 2.5k train examples from a series of annual textual entailment challenges." }, { "heading": "D FURTHER RESULTS ON GLUE", "text": "We report results for ELECTRA-Base and ELECTRA-Small on the GLUE test set in Table 8. Furthermore, we push the limits of base-sized and small-sized models by training them on the XLNet data instead of wikibooks and for much longer (4e6 train steps); these models are called ELECTRA-Base++ and ELECTRA-Small++ in the table. For ELECTRA-Small++ we also increased the sequence length to 512; otherwise the hyperparameters are the same as the ones listed in Table 6. Lastly, the table contains results for ELECTRA-1.75M without the tricks described in Appendix B. Consistent with dev-set results in the paper, ELECTRA-Base outperforms BERT-Large while ELECTRA-Small outperforms GPT in terms of average score. Unsurprisingly, the ++ models perform even better. The small model scores are even close to TinyBERT (Jiao et al., 2019) and MobileBERT (Sun et al., 2019b). These models learn from BERT-Base using sophisticated distillation procedures. Our ELECTRA models, on the other hand, are trained from scratch. Given the success of distilling BERT, we believe it would be possible to build even stronger small pre-trained models by distilling ELECTRA. ELECTRA appears to be particularly effective at CoLA. In CoLA the goal is to distinguish linguistically acceptable sentences from ungrammatical ones, which fairly closely matches ELECTRA’s pre-training task of identifying fake tokens, perhaps explaining ELECTRA’s strength at the task." }, { "heading": "E COUNTING FLOPS", "text": "We chose to measure compute usage in terms of floating point operations (FLOPs) because it is a measure agnostic to the particular hardware, low-level optimizations, etc. However, it is worth noting that in some cases abstracting away hardware details is a drawback because hardware-centered optimizations can be key parts of a model’s design, such as the speedup ALBERT (Lan et al., 2019) gets by tying weights and thus reducing communication overhead between TPU workers. We used TensorFlow’s FLOP-counting capabilities9 and checked the results against by-hand computation. We made the following assumptions:\n\n• An “operation” is a mathematical operation, not a machine instruction. 
For example, an exp is one op like an add, even though in practice the exp might be slower. We believe this assumption does not substantially change compute estimates because matrix multiplies dominate the compute for most models. Similarly, we count matrix multiplies as $2mn$ FLOPs instead of $mn$, as one might when considering fused multiply-add operations.\n• The backwards pass takes the same number of FLOPs as the forward pass. This assumption is not exactly right (e.g., for softmax cross-entropy loss the backward pass is faster), but importantly, the forward/backward-pass FLOPs really are the same for matrix multiplies, which account for most of the compute anyway.\n• We assume "dense" embedding lookups (i.e., multiplication by a one-hot vector). In practice, sparse embedding lookups are much slower than constant time; on some hardware accelerators dense operations are actually faster than sparse lookups.\n(Footnote 9: See https://www.tensorflow.org/api_docs/python/tf/profiler)" }, { "heading": "F ADVERSARIAL TRAINING", "text": "Here we detail attempts to adversarially train the generator instead of using maximum likelihood. In particular, we train the generator $G$ to maximize the discriminator loss $\mathcal{L}_{\text{Disc}}$. As our discriminator isn't precisely the same as the discriminator of a GAN (see the discussion in Section 2), this method is really an instance of Adversarial Contrastive Estimation (Bose et al., 2018) rather than Generative Adversarial Training. It is not possible to adversarially train the generator by back-propagating through the discriminator (e.g., as in a GAN trained on images) due to the discrete sampling from the generator, so we use reinforcement learning instead.\nOur generator is different from most text generation models in that it is non-autoregressive: predictions are made independently. In other words, rather than taking a sequence of actions where each action generates a token, the generator takes a single giant action of generating all tokens simultaneously, where the probability of the action factorizes as the product of generator probabilities for each token. To deal with this enormous action space, we make the following simplifying assumption: that the discriminator's prediction $D(x^{\text{corrupt}}, t)$ depends only on the token $x_t$ and the non-replaced tokens $\{x_i : i \notin m\}$, i.e., it does not depend on the other generated tokens $\{\hat{x}_i : i \in m \wedge i \neq t\}$. This isn't too bad of an assumption because a relatively small number of tokens are replaced, and it greatly simplifies credit assignment when using reinforcement learning. Notationally, we show this assumption (in a slight abuse of notation) by writing $D(\hat{x}_t \mid x^{\text{masked}})$ for the discriminator predicting whether the generated token $\hat{x}_t$ equals the original token $x_t$ given the masked context $x^{\text{masked}}$. A useful consequence of this assumption is that the discriminator score for non-replaced tokens ($D(x_t \mid x^{\text{masked}})$ for $t \notin m$) is independent of $p_G$, because we are assuming it does not depend on any replaced token. Therefore these tokens can be ignored when training $G$ to maximize $\mathcal{L}_{\text{Disc}}$.
During training we seek to find\n$$\arg\max_{\theta_G} \mathcal{L}_{\text{Disc}} = \arg\max_{\theta_G} \mathbb{E}_{x, m, \hat{x}} \left( \sum_{t=1}^{n} -\mathbb{1}(x^{\text{corrupt}}_t = x_t) \log D(x^{\text{corrupt}}, t) - \mathbb{1}(x^{\text{corrupt}}_t \neq x_t) \log(1 - D(x^{\text{corrupt}}, t)) \right)$$\nUsing the simplifying assumption, we approximate the above by finding the argmax of\n$$\mathbb{E}_{x, m, \hat{x}} \left( \sum_{t \in m} -\mathbb{1}(\hat{x}_t = x_t) \log D(\hat{x}_t \mid x^{\text{masked}}) - \mathbb{1}(\hat{x}_t \neq x_t) \log(1 - D(\hat{x}_t \mid x^{\text{masked}})) \right) = \mathbb{E}_{x, m} \sum_{t \in m} \mathbb{E}_{\hat{x}_t \sim p_G} R(\hat{x}_t, x)$$\nwhere\n$$R(\hat{x}_t, x) = \begin{cases} -\log D(\hat{x}_t \mid x^{\text{masked}}) & \text{if } \hat{x}_t = x_t \\ -\log(1 - D(\hat{x}_t \mid x^{\text{masked}})) & \text{otherwise} \end{cases}$$\nIn short, the simplifying assumption allows us to decompose the loss over the individual generated tokens. We cannot directly find $\arg\max_{\theta_G}$ using gradient ascent because it is impossible to backpropagate through the discrete sampling of $\hat{x}$. Instead, we use policy gradient reinforcement learning (Williams, 1992). In particular, we use the REINFORCE gradient\n$$\nabla_{\theta_G} \mathcal{L}_{\text{Disc}} \approx \mathbb{E}_{x, m} \sum_{t \in m} \mathbb{E}_{\hat{x}_t \sim p_G} \nabla_{\theta_G} \log p_G(\hat{x}_t \mid x^{\text{masked}}) \left[ R(\hat{x}_t, x) - b(x^{\text{masked}}, t) \right]$$\nwhere $b$ is a learned baseline implemented as $b(x^{\text{masked}}, t) = -\log \operatorname{sigmoid}(w^T h_G(x^{\text{masked}})_t)$, where $h_G(x^{\text{masked}})$ are the outputs of the generator's Transformer encoder. The baseline is trained with a cross-entropy loss to match the reward for the corresponding position. We approximate the expectations with a single sample and learn $\theta_G$ with gradient ascent. Despite receiving no explicit feedback about which generated tokens are correct, we found that adversarial training resulted in a fairly accurate generator (for a 256-hidden-size generator, the adversarially trained one achieves 58% accuracy at masked language modeling while the same-sized MLE generator gets 65%). However, using this generator did not improve over the MLE-trained one on downstream tasks (see the right of Figure 3 in the main paper)." }, { "heading": "G EVALUATING ELECTRA AS A MASKED LANGUAGE MODEL", "text": "This section details some initial experiments in evaluating ELECTRA as a masked language model. Using slightly different notation from the main paper, given a context $c$ consisting of a text sequence with one token $x$ masked out, the discriminator loss can be written as\n$$\mathcal{L}_{\text{Disc}} = -\sum_{x \in \text{vocab}} \Big( (1 - p_{\text{mask}}) \, p_{\text{data}}(x|c) \log D(x, c) \; \text{(unmasked token)} \; + \; p_{\text{mask}} \, p_{\text{data}}(x|c) \, p_G(x|c) \log D(x, c) \; \text{(generator samples correct token)} \; + \; p_{\text{mask}} (1 - p_{\text{data}}(x|c)) \, p_G(x|c) \log(1 - D(x, c)) \Big) \; \text{(generator samples incorrect token)}$$\nFinding the critical points of this loss with respect to $D$ shows that for a fixed generator the optimal discriminator is\n$$D(x, c) = \frac{p_{\text{data}}(x|c)(a + p_G(x|c))}{a \, p_{\text{data}}(x|c) + p_G(x|c)}$$\nwhich means\n$$p_{\text{data}}(x|c) = \frac{D(x, c) \, p_G(x|c)}{a(1 - D(x, c)) + p_G(x|c)}$$\nwhere $a = (1 - p_{\text{mask}})/p_{\text{mask}}$ is the number of unmasked tokens for every masked token. We can use this expression to evaluate ELECTRA as a masked language model by selecting $\arg\max_{x \in \text{vocab}} D(x, c) \, p_G(x|c) / (a(1 - D(x, c)) + p_G(x|c))$ as the model's prediction for a given context. In practice, selecting over the whole vocabulary is very expensive, so we instead take the argmax over the top 100 predictions from the generator (see footnote 10). Using this method, we compared ELECTRA-Base and BERT-Base on the Wikipedia+BooksCorpus dataset. We found that BERT slightly outperformed ELECTRA at masked language modeling (77.9% vs 75.5% accuracy). It is possible that the assumption of an optimal discriminator, which is certainly far from correct, is harming ELECTRA's accuracy under this evaluation scheme. However, perhaps it is not too surprising that a model like BERT that is trained specifically for generation performs better at generation, while a model with a discriminative objective like ELECTRA is better at being fine-tuned on discriminative tasks.
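To make the scoring rule above concrete, the following is a minimal NumPy sketch of the combined generator-discriminator MLM prediction. The function and array names are ours, and we assume the generator probabilities and the discriminator scores for each candidate substitution have already been computed; this is an illustration of the formula, not the original implementation.

```python
import numpy as np

def electra_mlm_predict(gen_probs, disc_scores, p_mask=0.15, top_k=100):
    # gen_probs:   p_G(x|c) over the vocabulary, shape [vocab_size]
    # disc_scores: D(x, c) for each candidate token substituted into the
    #              masked position, shape [vocab_size] (assumed precomputed)
    a = (1.0 - p_mask) / p_mask  # unmasked tokens per masked token
    # As in the text, restrict scoring to the generator's top-k candidates,
    # since evaluating D over the whole vocabulary is expensive.
    candidates = np.argsort(gen_probs)[-top_k:]
    p_g = gen_probs[candidates]
    d = disc_scores[candidates]
    # p_data(x|c) = D(x,c) p_G(x|c) / (a (1 - D(x,c)) + p_G(x|c))
    scores = d * p_g / (a * (1.0 - d) + p_g)
    return candidates[np.argmax(scores)]
```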
We think comparisons of BERT's and ELECTRA's MLM predictions might be an interesting way to uncover more about the differences between the ELECTRA and BERT encoders in future work." }, { "heading": "H NEGATIVE RESULTS", "text": "We briefly describe a few ideas that did not look promising in our initial experiments:\n• We initially attempted to make BERT more efficient by strategically masking out tokens (e.g., masking out rarer tokens more frequently, or training a model to guess which tokens BERT would struggle to predict if they were masked out). This resulted in fairly minor speedups over regular BERT.\n• Given that ELECTRA seemed to benefit (up to a certain point) from having a weaker generator (see Section 3.2), we explored raising the temperature of the generator's output softmax or disallowing the generator from sampling the correct token. Neither of these improved results.\n• We tried adding a sentence-level contrastive objective. For this task, we kept 20% of input sentences unchanged rather than noising them with the generator. We then added a prediction head to the model that predicted whether the entire input was corrupted or not. Surprisingly, this slightly decreased scores on downstream tasks.\n(Footnote 10: For ELECTRA-Base, this means the upper bound for accuracy is around 95%.)" } ]
2020
ELECTRA: PRE-TRAINING TEXT ENCODERS AS DISCRIMINATORS RATHER THAN GENERATORS
SP:a7c5bc5a6764e8188597507fdde1cc3ad514d2ba
[ "This paper proposes two main changes to the End2End Memory Network (EMN) architecture: a separation between facts and the items that comprise these facts in the external memory, policy to learn the number of memory-hops to reason. The paper also introduces a new Paired Associative Inference (PAI) task inspired by neuroscience and shows that most of the existing models including transformers struggle to solve this task while the proposed architecture (called MEMO) solves it better. MEMO also works well in the shortest path finding tasks and bAbI tasks.", "This paper presents a new task (paired associate inference), drawn from cognitive psychology, which requires linking many pieces of information together to make inferences with long range dependencies. Experimental results show that standard memory architectures fail on these tasks. To redress this, the paper proposes a new memory architecture with several new features that allow for much better performance on the paired associate task. Finally, the paper undertakes systematic experiments on more traditional domains like shortest path problems, showing that the new architecture achieves modest improvements." ]
Recent research developing neural network architectures with external memory has often used the benchmark bAbI question-and-answering dataset, which provides a challenging number of tasks requiring reasoning. Here we employed a classic associative inference task from the memory-based reasoning neuroscience literature in order to more carefully probe the reasoning capacity of existing memory-augmented architectures. This task is thought to capture the essence of reasoning – the appreciation of distant relationships among elements distributed across multiple facts or memories. Surprisingly, we found that current architectures struggle to reason over long-distance associations. Similar results were obtained on a more complex task involving finding the shortest path between nodes in a graph. We therefore developed MEMO, an architecture endowed with the capacity to reason over longer distances. This was accomplished with the addition of two novel components. First, it introduces a separation between the facts stored in external memory and the items that comprise those facts. Second, it makes use of an adaptive retrieval mechanism, allowing a variable number of 'memory hops' before the answer is produced. MEMO is capable of solving our novel reasoning tasks, as well as matching state-of-the-art results on bAbI.
[ { "affiliations": [], "name": "EPISODIC MEMORIES" }, { "affiliations": [], "name": "Andrea Banino" }, { "affiliations": [], "name": "Adrià Puigdomènech Badia" }, { "affiliations": [], "name": "Raphael Köster" }, { "affiliations": [], "name": "Martin J. Chadwick" }, { "affiliations": [], "name": "Vinicius Zambaldi" }, { "affiliations": [], "name": "Demis Hassabis Caswell" }, { "affiliations": [], "name": "Barry Matthew Botvinick" }, { "affiliations": [], "name": "Dharshan Kumaran" }, { "affiliations": [], "name": "Charles Blundell" } ]
[ { "authors": [ "Andrea Banino", "Raphael Koster", "Demis Hassabis", "Dharshan Kumaran" ], "title": "Retrieval-based model accounts for striking profile of episodic memory and generalization", "venue": "Scientific Reports,", "year": 2016 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "A. Bhattacharyya" ], "title": "On a measure of divergence between two multinomial populations", "venue": "Sankhyā: The Indian Journal of Statistics (1933-1960),", "year": 1946 }, { "authors": [ "Tolga Bolukbasi", "Joseph Wang", "Ofer Dekel", "Venkatesh Saligrama" ], "title": "Adaptive neural networks for efficient inference", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "M Bunsey", "H Eichenbaum" ], "title": "Conservation of hippocampal memory function in rats and humans", "venue": null, "year": 1996 }, { "authors": [ "Victor Campos Camunez", "Brendan Jou", "Xavier Giró Nieto", "Jordi Torres Viñals", "Shih-Fu Chang" ], "title": "Skip rnn: learning to skip state updates in recurrent neural networks", "venue": "In Sixth International Conference on Learning Representations: Monday April 30-Thursday May", "year": 2018 }, { "authors": [ "Junyoung Chung", "Caglar Gulcehre", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Gated feedback recurrent neural networks", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Junyoung Chung", "Sungjin Ahn", "Yoshua Bengio" ], "title": "Hierarchical multiscale recurrent neural networks", "venue": "arXiv preprint arXiv:1609.01704,", "year": 2016 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Howard Eichenbaum", "Neal J Cohen" ], "title": "From conditioning to conscious recollection: Memory systems of the brain", "venue": "Number 35. 
Oxford University Press on Demand,", "year": 2004 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "arXiv preprint arXiv:1803.03635,", "year": 2018 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Marco Gori", "Gabriele Monfardini", "Franco Scarselli" ], "title": "A new model for learning in graph domains", "venue": "In Proceedings", "year": 2005 }, { "authors": [ "Alex Graves" ], "title": "Adaptive computation time for recurrent neural networks", "venue": "arXiv preprint arXiv:1603.08983,", "year": 2016 }, { "authors": [ "Alex Graves", "Greg Wayne", "Ivo Danihelka" ], "title": "Neural turing machines", "venue": "arXiv preprint arXiv:1410.5401,", "year": 2014 }, { "authors": [ "Alex Graves", "Greg Wayne", "Malcolm Reynolds", "Tim Harley", "Ivo Danihelka", "Agnieszka GrabskaBarwińska", "Sergio Gómez Colmenarejo", "Edward Grefenstette", "Tiago Ramalho", "John Agapiou" ], "title": "Hybrid computing using a neural network with dynamic external memory", "venue": null, "year": 2016 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Mikael Henaff", "Jason Weston", "Arthur Szlam", "Antoine Bordes", "Yann LeCun" ], "title": "Tracking the world state with recurrent entity networks", "venue": "arXiv preprint arXiv:1612.03969,", "year": 2016 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Raphael Koster", "Martin J Chadwick", "Yi Chen", "David Berron", "Andrea Banino", "Emrah Düzel", "Demis Hassabis", "Dharshan Kumaran" ], "title": "Big-loop recurrence within the hippocampal system supports integration of information across episodes", "venue": null, "year": 2018 }, { "authors": [ "Ankit Kumar", "Ozan Irsoy", "Peter Ondruska", "Mohit Iyyer", "James Bradbury", "Ishaan Gulrajani", "Victor Zhong", "Romain Paulus", "Richard Socher" ], "title": "Ask me anything: Dynamic memory networks for natural language processing", "venue": "In Proceedings of The 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Dharshan Kumaran", "James L McClelland" ], "title": "Generalization through the recurrent interaction of episodic memories: a model of the hippocampal system", "venue": "Psychological Review,", "year": 2012 }, { "authors": [ "Yujia Li", "Daniel Tarlow", "Marc Brockschmidt", "Richard Zemel" ], "title": "Gated graph sequence 
neural networks", "venue": "arXiv preprint arXiv:1511.05493,", "year": 2015 }, { "authors": [ "Yujia Li", "Oriol Vinyals", "Chris Dyer", "Razvan Pascanu", "Peter Battaglia" ], "title": "Learning deep generative models of graphs", "venue": "arXiv preprint arXiv:1803.03324,", "year": 2018 }, { "authors": [ "Christos Louizos", "Max Welling", "Diederik P Kingma" ], "title": "Learning sparse neural networks through l_0 regularization", "venue": "arXiv preprint arXiv:1712.01312,", "year": 2017 }, { "authors": [ "David Marr", "David Willshaw", "Bruce McNaughton" ], "title": "Simple memory: a theory for archicortex", "venue": "In From the Retina to the Neocortex,", "year": 1991 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy P Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In Proceedings of the 33nd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Juan Pavez", "Héctor Allende", "Héctor Allende-Cid" ], "title": "Working memory networks: Augmenting memory networks with a relational reasoning module", "venue": "In ACL,", "year": 2018 }, { "authors": [ "Jack Rae", "Jonathan J Hunt", "Ivo Danihelka", "Timothy Harley", "Andrew W Senior", "Gregory Wayne", "Alex Graves", "Timothy Lillicrap" ], "title": "Scaling memory-augmented neural networks with sparse reads and writes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Adam Santoro", "David Raposo", "David G Barrett", "Mateusz Malinowski", "Razvan Pascanu", "Peter Battaglia", "Timothy Lillicrap" ], "title": "A simple neural network module for relational reasoning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2008 }, { "authors": [ "Anna C Schapiro", "Nicholas B Turk-Browne", "Matthew M Botvinick", "Kenneth A Norman" ], "title": "Complementary learning systems within the hippocampus: a neural network modelling approach to reconciling episodic memory with statistical learning", "venue": "Philosophical Transactions of the Royal Society B: Biological Sciences,", "year": 2017 }, { "authors": [ "Minjoon Seo", "Sewon Min", "Ali Farhadi", "Hannaneh Hajishirzi" ], "title": "Query-reduction networks for question answering", "venue": "arXiv preprint arXiv:1606.04582,", "year": 2016 }, { "authors": [ "Yelong Shen", "Po-Sen Huang", "Jianfeng Gao", "Weizhu Chen" ], "title": "Reasonet: Learning to stop reading in machine comprehension", "venue": "In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2017 }, { "authors": [ "Larry R Squire", "Craig EL Stark", "Robert E Clark" ], "title": "The medial temporal lobe", "venue": "Annu. Rev. 
Neurosci.,", "year": 2004 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "The Journal of Machine Learning Research,", "year": 1929 }, { "authors": [ "Sainbayar Sukhbaatar", "Jason Weston", "Rob Fergus" ], "title": "End-to-end memory networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "T. Tieleman", "G. Hinton" ], "title": "Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural Networks for Machine Learning,", "year": 2012 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Jason Weston", "Antoine Bordes", "Sumit Chopra", "Alexander M Rush", "Bart van Merrinboer", "Armand Joulin", "Tomas Mikolov" ], "title": "Towards ai-complete question answering: A set of prerequisite toy tasks", "venue": "arXiv preprint arXiv:1502.05698,", "year": 2015 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Michael A Yassa", "Craig EL Stark" ], "title": "Pattern separation in the hippocampus", "venue": "Trends in neurosciences,", "year": 2011 }, { "authors": [ "Adams Wei Yu", "Hongrae Lee", "Quoc V Le" ], "title": "Learning to skim text", "venue": "arXiv preprint arXiv:1704.06877,", "year": 2017 }, { "authors": [ "Vinicius Zambaldi", "David Raposo", "Adam Santoro", "Victor Bapst", "Yujia Li", "Igor Babuschkin", "Karl Tuyls", "David Reichert", "Timothy Lillicrap", "Edward Lockhart" ], "title": "Relational deep reinforcement learning", "venue": "arXiv preprint arXiv:1806.01830,", "year": 2018 }, { "authors": [ "Dagmar Zeithamova", "Margaret L Schlichting", "Alison R Preston" ], "title": "The hippocampus and inferential reasoning: building memories to navigate future decisions", "venue": "Frontiers in human neuroscience,", "year": 2012 }, { "authors": [ "Vaswani" ], "title": "added a column vector to the memory store with a time encoding", "venue": null, "year": 2017 }, { "authors": [ "Graves" ], "title": "2016), with exact sizes of each layer described on Table 13", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "During our every day life we need to make several judgments that require connecting facts which were not experienced together, but acquired across experiences at different points in time. For instance, imagine walking your daughter to a coding summer camp and encountering another little girl with a woman. You might conclude that the woman is the mother of the little girl. Few weeks later, you are at a coffee shop near your house and you see the same little girl, this time with a man. Based on these two separated episodes you might infer that there is a relationship between the woman and the man. This flexible recombination of single experiences in novel ways to infer unobserved relationships is called inferential reasoning and is supported by the hippocampus (Zeithamova et al., 2012).\nInterestingly, it has been shown that the hippocampus is storing memories independently of each other through a process called pattern separation (Yassa & Stark, 2011; Marr et al., 1991). The reason hippocampal memories are kept separated is to minimize interference between experiences, which allows us to recall specific events in the form of ’episodic’ memories (Eichenbaum & Cohen, 2004; Squire et al., 2004). Clearly, this separation is in conflict with the above mentioned role of the hippocampus in generalisation – i.e. how can separated memories be chained together? Interestingly, a recent line of research (Kumaran & McClelland, 2012; Banino et al., 2016; Schapiro et al., 2017; Koster et al., 2018) sheds lights on this tension by showing that the integration of separated experiences emerges at the point of retrieval through a recurrent mechanism. This allows\n∗1DeepMind, London, UK. 2 CoMPLEX, University College London, London, UK. 3 Department of Cell and Developmental Biology, University College London, London, UK. Correspondece should be sent to abanino@google.com, adriap@google.com, cblundell@google.com †, ‡ These authors contributed equally.\nmultiple pattern separated codes to interact, and therefore support inference. In this paper we rely on these findings to investigate how we can take inspiration from neuroscience models to investigate and enhance inferential reasoning in neural networks.\nNeural networks augmented with external memory, like the Differential Neural Computer (Graves et al., 2016, DNC), and end to end memory networks (Sukhbaatar et al., 2015, EMN) have shown remarkable abilities to tackle difficult computational and reasoning tasks. Also, more powerful attention mechanisms (Vaswani et al., 2017; Dehghani et al., 2018) or the use of context (Seo et al., 2016) have recently allowed traditional neural networks to tackle the same set of tasks. However, some of these tasks – e.g. bAbI (Weston et al., 2015) – present repetitions and commonalities between the train and the test set that neural networks can exploit to come up with degenerate solutions. To overcome this limitation we introduced a new task, called Paired Associative Inference (PAI - see below), which is derived from the neuroscientific literature (Bunsey & Eichenbaum, 1996; Banino et al., 2016). This task is meant to capture the essence of inferential reasoning – i.e. the appreciation of distant relationships among elements distributed across multiple facts or memories. 
PAI is fully procedurally generated and is designed to force neural networks to learn abstractions to solve previously unseen associations.\nWe then use the PAI task, followed by a task involving finding the shortest path and finally bAbI, to investigate what kind of memory representations effectively support memory-based reasoning. The EMN and other similar models (Sukhbaatar et al., 2015; Santoro et al., 2017; Pavez et al., 2018) have used fixed memory representations based on combining word embeddings with a positional encoding transformation. A similar approach has recently been implemented by current state-of-the-art language models (Vaswani et al., 2017; Devlin et al., 2018). By contrast, our approach, called MEMO, retains the full set of facts in memory, and then learns a linear projection paired with a powerful recurrent attention mechanism that enables greater flexibility in the use of these memories. MEMO is based on the same basic structure of the external memory presented in EMN (Sukhbaatar et al., 2015), but its new architectural components can potentially allow for flexible weighting of individual elements in memory and so support the form of inferential reasoning outlined above.\nNext, we tackle the problem of prohibitive computation time. In standard neural networks, the computation grows as a function of the size of the input, instead of the complexity of the problem being learnt. Sometimes the input is padded with a fixed number of extra values to provide greater computation (Graves et al., 2016); in other cases, input values are systematically dropped to reduce the amount of computation (e.g., frame dropping in reinforcement learning (Mnih et al., 2016)). Critically, these values are normally hand-tuned by the experimenter; instead, here we are interested in adapting the amount of compute time to the complexity of the task. To do so we drew inspiration from a model of human associative memory called REMERGE (Kumaran & McClelland, 2012). In this model, the content retrieved from memory is recirculated back as the new query; then the difference between the content retrieved at different time steps in the recirculation process is used to determine whether the network has settled into a fixed point, and if so this process terminates.\nTo implement this principle in a neural network, we were inspired by techniques such as adaptive computation time (Graves, 2016). In our architecture, the network outputs an action (in the reinforcement learning sense) that indicates whether it wishes to continue computing and querying its memory, or whether it is able to answer the given task. We call this the halting policy, as the network learns the termination criterion of a fixed-point operator. Like ACT, the network outputs a probability of halting, but unlike ACT, the binary halting random variable is trained using REINFORCE (Williams, 1992). Thus we use reinforcement learning to adjust weights based upon the counterfactual problem: what would be the optimal number of steps of computation, given that a particular number of steps was taken this time? The use of REINFORCE to perform a variable amount of computation has been investigated before (e.g. Shen et al., 2017; Louizos et al., 2017); however, our approach differs in that we add an extra term to the REINFORCE loss that, by exploiting the mathematical properties of binary random variables, naturally minimizes the expected number of computation steps.
Thus we directly encourage our network to explicitly prefer representations and computation that minimize the amount of required computation.\nTo sum up, our contributions are:\n1. A new task that stresses the essence of reasoning — i.e. the appreciation of distant relationships among elements distributed across multiple facts.\n2. An in-depth investigation of the memory representations that support inferential reasoning, and extensions to existing memory architectures that show promising results on these reasoning tasks.\n3. A REINFORCE loss component that learns the optimal number of iterations required to solve a task.\n4. Significant empirical results on three tasks demonstrating the effectiveness of the above contributions: paired associative inference, shortest path finding, and bAbI (Weston et al., 2015)." }, { "heading": "2 METHODS", "text": "" }, { "heading": "2.1 RECAPITULATING END-TO-END MEMORY NETWORKS", "text": "We begin by describing End-to-End Memory Networks (Sukhbaatar et al., 2015, EMN), as a reminder of this architecture, to introduce notation and nomenclature, and also as a contrast to our work. We focus on the multilayer, tied-weight variant of EMN, as this most closely resembles our architecture.\nThe setup used for the rest of the paper is as follows: given a set of knowledge inputs $\{x_i\}_{i=1}^I = \{\{x_{i1}, x_{i2}, \ldots, x_{iS}\}\}_{i=1}^I$ and a query or question $q = \{q_1, q_2, \ldots, q_S\} \in \mathbb{R}^S$, the network must predict the answer $a$. Here $I$ is the length of the knowledge input sequence and $S$ is the length of each input sentence; for instance, in bAbI (Weston et al., 2015), $I$ is the number of stories and $S$ is the number of words in each sentence of a story. $x_{is}$ is the word in position $s$ of a sentence in the $i$-th story, and is an $O$-dimensional one-hot vector encoding one of $O$ possible input words.\nEMN embeds each word and sums the resulting vectors:\n$$k_i = \sum_s l_s \cdot W_k x_{is} \quad (1)$$\n$$v_i = \sum_s W_v x_{is} \quad (2)$$\n$$q_0 = W_q q \quad (3)$$\nwhere $W_k, W_v \in \mathbb{R}^{d \times O}$ and $W_q \in \mathbb{R}^{d \times S}$ are embedding matrices for the keys, values and query, respectively. Here $l_s$ is a positional-encoding column vector (as defined in Sukhbaatar et al. (2015)), $\cdot$ represents element-wise multiplication, and $O$ is the size of the vocabulary. At each step $t$, EMN calculates a vector of weights over the memory elements $k_i$ and produces the output. Let $K$ be the $I \times d$ matrix formed by taking each $k_i$ as a row, and similarly $V$ formed by taking each $v_i$ as a row; then:\n$$w_t = \text{softmax}(K q_t) \quad (4)$$\n$$q_{t+1} = w_t V + W_{qv} q_t \quad (5)$$\n$$a_t = \text{softmax}(W_a q_{t+1}) \quad (6)$$\nwhere $w_t \in \mathbb{R}^I$ are the weights over the memory slots, $W_{qv}, W_a \in \mathbb{R}^{d \times d}$ are linear mappings relating the query at the previous step to the current one, $q_{t+1}$ is the query to be used at the next step, and $a_t$ is the answer (usually only produced right at the end). EMN is trained via a cross-entropy loss on $a_t$ at the final step." }, { "heading": "2.2 MEMO", "text": "MEMO embeds the input differently. First, a common embedding $c_i$, of size $S \times d_c$, is derived for each input matrix $x_i \in \mathbb{R}^{S \times O}$:\n$$c_i = x_i W_c \quad (7)$$\nwhere $W_c \in \mathbb{R}^{O \times d_c}$. Then each of these embeddings is adapted to be either a key or a value. However, contrary to EMN, we do not use hand-coded positional embeddings; instead the words in each sentence and their one-hot encodings in $x_i$, embedded as $c_i$, are combined, and this vector is passed through a linear projection followed by an attention mechanism (explained in detail below). This allows $c_i$ to flexibly capture any part of the input sentence in $x_i$.
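Before turning to MEMO's multi-head attention, it may help to see EMN's read-out, Eqs. (4)-(6), as code. This is a minimal, batch-free NumPy sketch under our own shape assumptions (in particular, we map the answer logits to O classes rather than using the d x d notation above); it is an illustration, not the reference implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def emn_hops(K, V, q0, W_qv, W_a, n_hops=3):
    # K, V: [I, d] keys and values built from the embedded inputs (Eqs. 1-2)
    # q0:   [d] embedded query (Eq. 3)
    # W_qv: [d, d] query-update matrix; W_a: [O, d] answer projection
    q = q0
    for _ in range(n_hops):
        w = softmax(K @ q)       # Eq. 4: attention weights over memory slots
        q = w @ V + W_qv @ q     # Eq. 5: retrieved value plus mapped query
    return softmax(W_a @ q)      # Eq. 6: answer distribution at the final hop

# toy usage with I=32 memory slots, d=64, O=100 answer classes
rng = np.random.default_rng(0)
I, d, O = 32, 64, 100
answer = emn_hops(rng.normal(size=(I, d)), rng.normal(size=(I, d)),
                  rng.normal(size=d), 0.1 * rng.normal(size=(d, d)),
                  0.1 * rng.normal(size=(O, d)))
```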
MEMO uses multiple heads to attend to the memory, following Vaswani et al. (2017). Each head has a different view of the same common inputs $c_i$. Let $H$ denote the total number of heads, and $h$ index a particular head; then for each $h \in \{1, \ldots, H\}$ we have:\n$$k_i^{(h)} = W_k^{(h)} \text{vec}(c_i) \quad (8)$$\n$$v_i^{(h)} = W_v^{(h)} \text{vec}(c_i) \quad (9)$$\n$$q_0^{(h)} = W_q^{(h)} q \quad (10)$$\nwhere $W_k^{(h)}, W_v^{(h)} \in \mathbb{R}^{d \times S d_c}$ and $W_q^{(h)} \in \mathbb{R}^{d \times S}$ are embedding matrices for the keys, values and query respectively. $\text{vec}(c)$ means flattening the matrix $c$ into a vector with the same number of elements as the original matrix, and $\text{vec}^{-1}(v)$ is the reverse operation, turning a vector $v$ back into a matrix, such that $\text{vec}^{-1}(\text{vec}(c)) = c$. The result is three $d$-dimensional vectors $k_i^{(h)}$, $v_i^{(h)}$ and $q_0^{(h)}$. Keeping each item separated in memory allows us to learn how to weight each of these items when we perform a memory lookup. This contrasts with the hand-coded positional embeddings used in EMN (Sukhbaatar et al., 2015) and updated recently in Vaswani et al. (2017), and proved critical for enabling the flexible recombination of the stored items.\nThe attention mechanism used by MEMO also differs from that shown above for EMN. Firstly, it is adapted to use multi-head attention. Secondly, we use DropOut (Srivastava et al., 2014) and LayerNorm (Ba et al., 2016) to improve generalisation and learning dynamics. Let $K^{(h)} \in \mathbb{R}^{I \times d}$ denote the matrix formed by taking each $k_i^{(h)}$ as a row, and $V^{(h)}$ the matrix formed by taking each $v_i^{(h)}$ as a row. In contrast, let $Q_t \in \mathbb{R}^{H \times d}$ be the matrix formed by taking each $q_t^{(h)}$ as a row. The attention mechanism then becomes:\n$$h_t^{(h)} = \frac{1}{\sqrt{d}} W_h K^{(h)} q_t^{(h)} \quad (11)$$\n$$w_t^{(h)} = \text{DropOut}(\text{softmax}(h_t^{(h)})) \quad (12)$$\n$$q_{t+1}^{(h)} = w_t^{(h)} V^{(h)} \quad (13)$$\n$$Q_{t+1} = \text{LayerNorm}\left( \text{vec}^{-1}(W_q \text{vec}(Q_{t+1})) + Q_t \right) \quad (14)$$\n$$a_t = \text{softmax}\left( W_a \text{DropOut}(\text{relu}(W_{qa} \text{vec}(Q_{t+1}))) \right) \quad (15)$$\nwhere $W_h \in \mathbb{R}^{I \times I}$ and $W_q \in \mathbb{R}^{Hd \times Hd}$ are matrices for transforming the logits and queries respectively, and $W_a \in \mathbb{R}^{O \times d_a}$ and $W_{qa} \in \mathbb{R}^{d_a \times Hd}$ are the matrices of the output MLP that produces the answer $a_t$. It is worth noting that even though our attention mechanism uses some of the features implemented in Vaswani et al. (2017) – i.e. the normalization factor $\sqrt{d}$ and multi-head attention – it differs in that, rather than performing self-attention, it keeps the query separate from the keys and the values. This aspect is particularly important in terms of computational complexity, in that MEMO is linear with respect to the number of sentences in the input, whereas methods relying on self-attention (e.g. Dehghani et al., 2018) have quadratic complexity (see Appendix E)." }, { "heading": "2.3 THE HALTING POLICY", "text": "In the previous sections we described how MEMO can output a sequence of potential answers to an input query; here we describe how to learn the number of computational steps – hops – required to effectively answer $a$. To make this decision, we collect some information at every step and use it to create an observation $s_t$. That observation is then processed by gated recurrent units (GRUs) (Chung et al., 2015) followed by an MLP, which defines a binary policy $\pi(a|s_t, \theta)$ and approximates its value function $V(s_t, \theta)$. The input $s_t$ to this network is formed by the Bhattacharyya distance (Bhattacharyya, 1946) between the attention weights of the current time step, $W_t$, and those at the previous time step, $W_{t-1}$ (both $W_t$ and $W_{t-1}$ are taken after the softmax), together with the number of steps taken so far as a one-hot vector $t$. The idea behind the way we build $s_t$ is that if the attention is focused on the same slot of memory for too many consecutive steps then there is no reason to keep
The idea behind the way we build st is that if the attention is focused on the same slot of memory for too many consecutive steps than there is no reason to keep\nquerying the memory because the information retrieved will be the same - i.e. the network has settled into a fixed point.\nzt = GRUR(zt−1, d(Wt,Wt−1), t) (16) vt, πt =MLPR(zt) (17) ht = σ(πt) (18)\nThis network is trained using REINFORCE (Williams, 1992). More concretely, the parameters θ are adjusted using a n-step look ahead values, R̂t = ∑ i=0...n−1 γ irt+i + γ nV (st+n, θ), where γ is the discount factor. The objective function of this network is to minimize: LHop−Net =\nLπ + αLV + βLHop where: Lπ = −Est∼π [ R̂t ] , LV = Est∼π [( R̂t − V (st, θ) )2] and LHop =\n−Est∼π [π(·|st, θ)]. Interestingly, LHop is a term that directly follows from the fact that π is a binary policy. Specifically, the expectation of a binary random variable is its probability and the expectation of their sum is the sum of the expectation. Consequently, the new term that we introduce in the loss, LHop allows us to directly minimize the expected number of hops. This term, similar in motivation to (Louizos et al., 2017) (although differs mathematically), directly encourages our network to explicitly prefer representations and computation that minimise the amount of required computation. It is also worth noting that an argument against using REINFORCE when training discrete random variables is that the variance can be prohibitively high (Sutton & Barto, 2018). Interestingly, in the case of a binary halting random variable, the variance is just p(1− p) where p is the probability of halting and the variance is bounded by 1/4 which we find is not too large for learning to proceed successfully in practice.\nFinally, the reward structure is defined by the answer a:\nrt = { 1, if â = a 0, otherwise\nwhere a is the target answer associate with the input and â is the prediction from the network. The final layer of MLPR was initialized with biasinit, in order to increase the chances that π produces a probability of 1 (i.e. do one more hop). Finally, we set a maximum number of hops, N , that the network could take. If N was reached, the network stopped performing additional hops. Critically, there was no gradient sharing between the hop network and the main MEMO network explained above. All model hyperparameters are reported in appendix D." }, { "heading": "3 RELATED WORK", "text": "" }, { "heading": "3.1 MEMORY-AUGMENTED NEURAL NETWORKS", "text": "In recent years there has been increasing interest in the development of memory-augmented networks, due to their potential for solving abstract and relational reasoning tasks. Alongside the EMN (described in detail above), another influential early model deploying memory-augmentation was the Differential Neural Computer (Graves et al., 2014; 2016, DNC). The DNC operates sequentially on the inputs, and learns to read and write to a memory store. The model proved capable of solving a range of algorithmic problems, but was difficult to scale to higher-dimensional problem domains. A more recent extension incorporated sparsity into the DNC(Rae et al., 2016), allowing the model to perform well at larger-scale tasks, including the bAbI task suite (Weston et al., 2015). Since these initial models were published, a number of alternative memory-augmented architectures have been developed (Kumar et al., 2016; Henaff et al., 2016; Pavez et al., 2018). 
The Dynamic Memory Network (Kumar et al., 2016) shares some similarities with EMNs, but operates on sequential inputs rather than in parallel. The Recurrent Entity Network (Henaff et al., 2016) has similarities with the DNC, but uses a parallel architecture, enabling simultaneous updates across several memory locations. The Working Memory Network (Pavez et al., 2018) is closely based on EMNs, but additionally incorporates a working memory buffer and a RelationNet (Santoro et al., 2017). These components thereby enable relational reasoning over the retrieved contents of memory. Each of these new models has proven to perform well at various reference tasks, including the bAbI task suite (Weston et al., 2015)." }, { "heading": "3.2 ADAPTIVE COMPUTATION TIME", "text": "In general, the time required to solve a problem is expected to increase with the complexity of the task. However, most machine learning algorithms do not adapt their computational budget to the complexity of the task. One approach to this problem is represented by Adaptive Computation Time (ACT) (Graves, 2016). ACT is a mechanism for learning a scalar halting probability, called the "ponder time", to dynamically modulate the number of computational steps needed for each input. An alternative approach is represented by Adaptive Early Exit Networks (Bolukbasi et al., 2017), which give the network the ability to exit prematurely – i.e. without computing the whole hierarchy of layers – if no more computation is needed. Another approach to conditional computation is the use of REINFORCE (Williams, 1992) to learn a discrete latent variable which dynamically adjusts the number of computation steps. This has been applied to recurrent neural networks where each layer decides whether or not to activate the next one (Chung et al., 2016). REINFORCE has also been used to learn how many steps to "jump" in a sequence, thereby reducing the total number of processed inputs (Yu et al., 2017). This jump technique has also been applied to recurrent neural networks without the need for REINFORCE (Campos Camunez et al., 2018). A similar idea has also been applied to neural networks augmented with external memory (Shen et al., 2017), but it differs from ours both in the REINFORCE loss and in the fact that our method introduces the idea of using the distance between attention weights at different time steps (hops) as a proxy to check whether more information can be retrieved from memory – i.e. another hop is needed – or whether the network has settled and has enough information to correctly answer the query." }, { "heading": "3.3 GRAPH NEURAL NETWORKS", "text": "Graph Neural Networks (GNNs) (Scarselli et al., 2008; Gori et al., 2005) consist of an iterative message-passing process which propagates node and edge embeddings throughout a graph. Neural networks are then used to aggregate or learn functions over graph components to perform supervised, semi-supervised, representation and reinforcement learning tasks (Kipf & Welling, 2016; Gilmer et al., 2017; Hamilton et al., 2017; Zambaldi et al., 2018). The message-passing process implements similar computation to attention mechanisms (Veličković et al., 2017; Battaglia et al., 2018) and, as a limit case, self-attention can be viewed as a fully-connected GNN.
Our work differs from GNNs in two fundamental ways. First, even though GNNs may exhibit a recurrent component (Li et al., 2015; 2018), their implementation is based on unrolling the recurrence for a fixed number of steps and using backpropagation through time in the learning process, whereas our method performs adaptive computation to modulate the number of message-passing steps. Second, our model does not require message passing between memories – input queries attend directly to the memory slots." }, { "heading": "4 PAIRED ASSOCIATIVE INFERENCE TASK", "text": "One contribution of this paper is to introduce a task, derived from neuroscience, to carefully probe the reasoning capacity of neural networks. This task is thought to capture the essence of reasoning – the appreciation of distant relationships among elements distributed across multiple facts or memories. This process is formalized in a prototypical task widely used to study the role of the hippocampus in generalization – the paired associative inference (PAI) task (Bunsey & Eichenbaum, 1996; Banino
length 4 and 5 - MEMO was the only architecture which successfully answered the most complex inference queries.\nTo investigate how these results were achieved, we run further analysis on the length 3 PAI task. Interestingly, to solve this task DNC required 10 pondering steps to get the same accuracy as MEMO, which instead converged to 3 hops (see Fig. 3b in Appendix A.3). To analyze how MEMO approached this task we then analyzed the attention weights of an inference query, where the goal was to associate a CUE with the MATCH and avoid the interference of the LURE (see Appendix A.1 for task details). For clarity we report here the original sequence A − B − C, respectively composed by the following class IDs: 611 − 191 − 840 (however this sequence was not directly experienced together by the network, as the two associations A−B and B − C where stored in slot 10 and 25, respectively). As depicted in Figure 2, in the first hop MEMO retrieved the memory in slot 10, which contained the CUE, ID 611, and the associated item, ID 191, which form an A−B association. Then in the following hop this slot was partially active, but most of the mass was placed on slot 16, which contained the memory association B − C; that is, ID 191 and ID 840, the MATCH. Interestingly, slot 13 which was associated with the LURE, ID 943, got a bit of mass associated with it. So in this second hop MEMO assigned appropriate probability masses to all the slots needed to support a correct inference decision, which was then confirmed in the last hop. This sequence of memories activation is reminiscent of the one predicted by computational model of the hippocampus (Kumaran & McClelland, 2012; Schapiro et al., 2017) and observed in neural data (Koster et al., 2018). Moreover, another instance of MEMO which used 7 hops, to solve the task with the same level of accuracy, presented a very different pattern of memories activation (see Fig. 4 Appendix A.3.2). This is an indication of the fact that the the algorithm used to solve the inference problem depends on how many hops the network takes. This aspect could also be related to knowledge distillation in neural networks (Hinton et al., 2015; Frankle & Carbin, 2018), whereby many hops are used to initially solved the task (i.e. over-parametrization) and then these are automatically reduced to use less computation (see Fig. 3b in Appenix A.3).\nWe also ran a set of ablation experiments on MEMO (see Table 7 in Appendix A.3.1). This analysis confirmed that it is the combination of the specific memory representations (i.e. facts kept separated) and the recurrent attention mechanism that supports successful inference – i.e. employing these two components individually was not enough. Interestingly, this conclusion was valid only for inference queries, not for direct queries (see Fig. 3c,d in Appendix A.3). Indeed, by definition, direct queries are a pure test of episodic memory and so can be solved with a single memory look-up. Finally, we also compared our adaptive computation mechanism with ACT (Graves, 2016) and we found that, for this task, our method was more data efficient (see Fig. 5 in Appendix A.3.3)." }, { "heading": "5.3 SHORTEST PATH ON RANDOMLY GENERATED GRAPHS", "text": "We then turn to a set of synthetic reasoning experiments on randomly generated graphs (see Appendix B.1 for details). Table 2 shows the accuracy of the models on the task related to finding the shortest path between two nodes. 
On a small graph with 10 nodes, with a path length of 2 and 2 outgoing edges per node, DNC, the Universal Transformer, and MEMO had perfect accuracy in predicting the intermediate shortest path node. However, on more complicated graphs (20 nodes, 3 separated outgoing edges), with a path length of 3, MEMO outperformed EMN in predicting the first node of the path (by more than 50%), and, similarly to DNC, almost completely solved the task. Additionally, MEMO outperformed DNC in more complicated graphs with a high degree of connectivity (5 outdegree), being better by more than 20% at predicting both nodes in the shortest path. This showed the great scalability that MEMO holds; the model was able to iterate and considered more paths as the number of hops increases. Finally, Universal Transformer had a different performance in predicting the first node versus the second one of the shortest path. In the latter case, the results showed that\nUT achieved a slightly lower performance than MEMO, showing its great capability of computing operations that require direct reasoning." }, { "heading": "5.4 QUESTION ANSWERING ON THE BABI TASKS", "text": "Finally, we turn our attention to the bAbI question answering dataset (Weston et al., 2015), which consists of 20 different tasks. In particular we trained our model on the joint 10k training set (specifics of training are reported in the appendix C.1).\nTable 3 reports the averaged accuracy of our model (MEMO) and the other baselines on bAbI (the accuracy for each single task averaged across the best set of hyper-parameters is reported in Table 10 in the Appendix C.3). In the 10k training regime MEMO was able to solve all tasks, thereby matching the number of tasks solved by (Seo et al., 2016; Dehghani et al., 2018), but with a lower error (for single task results refer to Appendix C.3).\nWe also ran an extensive set of ablation experiments to understand the contribution of each architectural component to the final performance (see Appendix C.2 for details). As observed previously it was the combination of memory representations and the powerful recurrent attention that was critical to achieve state of the art performance on bAbI. Finally the use of layernorm (Ba et al., 2016) in recurrent attention mechanism was critical to achieve a more stable training regime and so better performance.\nTest results for the best run (chosen by validation loss) on the bAbI task. DNC results from (Graves et al., 2016), Universal Transformer results from (Dehghani et al., 2018). In parenthesis we report the\nnumber of tasks solved, with 20 being the maximum." }, { "heading": "6 CONCLUSIONS", "text": "In this paper we conducted an in-depth investigation of the memory representations that support inferential reasoning and we introduce MEMO, an extension to existing memory architectures, that shows promising results on these reasoning tasks. MEMO showed state-of-the-art results in a new proposed task, the paired associative inference, which had been used in the neuroscience literature to explicitly test the ability to perform inferential reasoning. On both this task, and a challenging graph traversal task, MEMO was the only architecture to solve long sequences. Also, MEMO was able to solve the 20 tasks of the bAbI dataset, thereby matching the performance of the current state-of-the-art results. 
Our analysis also supported the hypothesis that these results are achieved by the flexible weighting of individual elements in memory allowed by combining together the separated storage of single facts in memory with a powerful recurrent attention mechanism." }, { "heading": "ACKNOWLEDGMENTS", "text": "The authors would like to thank Adam Santoro, and many other colleagues at DeepMind for useful discussions and feedback on this work." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 PAIRED ASSOCIATIVE INFERENCE TASK", "text": "(Please refer to Figure 1 in the main text)\nTo make this task challenging for a neural network we started from the ImageNet dataset (Deng et al., 2009). We created three sets, training, validation and test which used the images from the respective three sets of ImageNet to avoid any overlapping. All images were embedded using a pre-trained ResNet (He et al., 2016). We generated 3 distinct datasets with sequences of length three (i.e. A−B−C), four (i.e. A−B−C −D) and five (i.e. A−B−C −D−E) items. Each dataset contains 1e6 training images, 1e5 evaluation images and 2e5 testing images. Each sequence was randomly generate with no repetition in each single dataset.\nTo explain how the batch was built let’s refer to sequences of length, S, being equal to 3. Each batch entry is composed by a memory, a query and a target. In order to create a single entry in the batch we selected N sequences from the pool, with N = 16.\nFirst, we created the memory content with all the possible pair wise association between the items in the sequence, e.g. A1B1 and B1C1, A2B2 and B2C2, ..., ANBN and BNCN . For S = 3, this resulted in a memory with 32 rows.\nThen we generated all the possible queries. Each query consist of 3 images: the cue, the match and the lure. The cue is an image from the sequence (e.g. A1), as is the match (e.g. C1). The lure is an image from the same memory set but from a different sequence (e.g. C7). There are two types of queries - ’direct’ and ’indirect’. In ’direct’ queries the cue and the match can be found in the same memory slot, so no inference is require. For example, the sequence A1 - B1 - C1 produces the pairs A1 - B1 and B1 - C1 which are stored different slots in memory. An example of a direct test trail would be A1 (cue) - B1 (match) - B3 (lure). Therefore, ’direct’ queries are a test of episodic memory as the answer relies on retrieving an episode that was experienced. In contrast, ’indirect’ queries require inference across multiple episodes. For the previous example sequence, the inference trail would be A1 (cue) - C1 (match) - C3 (lure). The queries are presented to the network as a concatenation of three image embedding vectors (the cue, the match and the lure). The cue is always in the first position in the concatenation, but to avoid any degenerate solution, the position of the match and lure are randomized. It is worth noting that the lure image always has the same position in the sequence (e.g. if the match image is a C the lure is also a C) but it is randomly drawn from a different sequence that is also present in the current memory. This way the task can only be solved by appreciating the correct connection between the images, and this need to be done by avoiding the interference coming for other items in memory. For each entry in the batch we generated all possible queries that the current memory store could support and then one was selected at random. Also the batch was balanced, i.e. 
It is worth mentioning that longer sequences provide more 'direct' queries, but also multiple 'indirect' queries that require different levels of inference; e.g. the sequence An - Bn - Cn - Dn - En produces the 'indirect' trial A1 (cue) - C1 (target) - C3 (lure) with 'distance' 1 (one pair apart) and A1 (cue) - E1 (target) - E3 (lure) with 'distance' 4 (4 pairs apart). The latter trial requires more inference steps and requires appreciating the overlapping images of the entire sequence.
Finally, we use the inputs as follows:
• For EMN and MEMO, memory and query are used as their naturally corresponding inputs in their architecture.
• In the case of DNC (Section G), we embed memory and query in the same way as is done for MEMO. Memory and query are presented in sequence to the model (in that order), followed by blank inputs as pondering steps to provide a final prediction.
• For UT, we embed memory and query in the same way as is done for MEMO. Then we use the encoder of UT with the architecture described in Section H. We use its output as the output of the model." }, { "heading": "A.2 PAI - QUERIES WISE RESULTS", "text": "The results reported below are from the evaluation set at the end of training. Each evaluation set contains 600 items.
For EMN and DNC, the results shown are from the hyper-parameter setting with the lowest loss on the validation set. For MEMO, the results are the average and relative standard deviation (reported in parentheses), obtained by averaging the 5 hyper-parameter settings with the lowest loss on the validation set." }, { "heading": "A.3 PAIRED ASSOCIATIVE INFERENCE FURTHER ANALYSIS ON LENGTH 3 EXPERIMENTS.", "text": "" }, { "heading": "A.3.1 ABLATIONS", "text": "Table 7: PAI - Ablations - sequence of length 3: A-B-C. Accuracy of MEMO architecture variants on the A-C inference trial (✓ = present, ✗ = not present):

Positional encoding as in (Vaswani et al., 2017) | Memories kept separated | Recurrent attention w/ Layernorm | Accuracy
✓ | ✗ | ✗ | 57.59 (10.11)
✓ | ✗ | ✓ | 52.79 (3.12)
✗ | ✓ | ✗ | 73.26 (15.86)
✗ | ✓ | ✓ | 97.59 (1.85)

Results for the best run (chosen by validation set) on the PAI task." }, { "heading": "A.3.2 ATTENTION WEIGHTS ANALYSIS", "text": "Figure 4: Attention weights analysis for the length-3 PAI task, in the case where the network converged to 7 hops. In this case the network uses the first two hops to retrieve the slot where the cue is present and hops 3, 4 and 5 to retrieve the slot with the match. The weights are sharp and they focus only on a single slot." }, { "heading": "A.3.3 ADAPTIVE COMPUTATION", "text": "" }, { "heading": "B SHORTEST PATH TASK", "text": "" }, { "heading": "B.1 TRAINING DETAILS", "text": "Graph generation In the shortest path experiments, we generate the graph in the same fashion as Graves et al. (2016): the graphs used to train the networks are generated by uniformly sampling a set of two-dimensional points from a unit square, each point corresponding to a node in the graph. For each node, the K nearest neighbours in the square are used as the K outbound connections, with K independently sampled from a uniform range for each node.
Graph representation We represent our task in three parts: a graph description, a query, and the target. The graph description is presented as a sequence of tuples of integers that represent connections between nodes, holding a token for the source node and another for the destination node.
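A minimal sketch of this generation and description procedure might look as follows; the node count, a fixed out-degree (the paper samples K per node from a uniform range), and the BFS helper are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_graph(num_nodes=20, out_degree=3):
    # Nodes are points in the unit square; each connects to its K nearest neighbours.
    pts = rng.uniform(size=(num_nodes, 2))
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :out_degree]
    # Graph description as (source, destination) integer tuples.
    return [(s, t) for s in range(num_nodes) for t in nbrs[s]]

def shortest_path(edges, src, dst, num_nodes=20):
    # Plain BFS; returns the node sequence from src to dst (or None if unreachable).
    adj = {n: [] for n in range(num_nodes)}
    for s, t in edges:
        adj[s].append(t)
    prev, frontier, seen = {src: None}, [src], {src}
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in seen:
                    seen.add(v); prev[v] = u; nxt.append(v)
        frontier = nxt
    if dst not in prev:
        return None
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

edges = make_graph()
print(shortest_path(edges, 0, 5))
```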
The query is also represented as a tuple of integers, although, in that case, source and destination are simply the beginning and end of the path to find. The target is the sequence of node IDs that constitute the path between the source and destination of the query.
When training, we sample a mini-batch of 64 graphs, with associated queries and target paths. Following our description above, queries are represented as a matrix of size 64 × 2, targets are of size 64 × (L − 1), and graph descriptions are of size 64 × M × 2, where L is the length of the shortest path and M is the maximum number of connection tuples we allow one graph description to have. In our experiments, we fix the upper bound M to be the maximum number of nodes multiplied by the out-degree of the nodes in the graph.
All networks were trained for 2e4 epochs, each one formed by 100 batch updates.
• For EMN and MEMO, we set the graph description to be the contents of their memory, and we use the query as input. In order to answer with the sequence of nodes that is used as the target, we keep the keys $k_i^{(h)}$ and values $v_i^{(h)}$ fixed, and we proceed to use our algorithm as described for each answer, with independent numbers of hops for each one. The model then predicts the answers for the nodes sequentially: the first node is predicted before the second. However, one important difference between MEMO and EMN is that for EMN we use the ground-truth answer for the first node as the query for the second node, whereas for MEMO we use the answer predicted by the model for the first node as the query for the second node. This was done to enhance the performance of EMN while testing the real capability of MEMO to reason sequentially over multi-step problems. The weights that are used for each answer are not shared.
• For the Universal Transformer, we also embed the query and graph description as done for EMN and MEMO. After that, we concatenate the embeddings of the query and graph description and use the encoder of the UT architecture (with a specific description in Section H). We use its output as the answer. After providing an answer, that answer is provided as the initial query for the following round of hops. The weights that are used for each answer are not shared.
• For DNC, we also embed the query and graph description as done for EMN and MEMO. Since it is naturally a sequential model, the information is presented differently: the tuples of the graph description are presented first, and after that the query tuple is presented. After that, the pondering steps are used to output the sequence of nodes that constitutes the proposed shortest path.
The output of the models is trained with Adam using a cross-entropy loss against all the sampled target sequences. Training is done for a fixed number of steps, detailed in Appendix Section D.2.
For evaluation, we sample a batch of 600 graph descriptions, queries, and targets. We evaluate the mean accuracy over all the nodes of the target path. We report average values and standard deviations over the best 5 hyper-parameters we used.
It is worth noting that, given this training regime:
• DNC and UT have a 'global view' of the problem when providing an answer for the second node. This means that, to answer the second node in the path, they can still reason and work backwards from the end node, while still having information about the initial node in the path.
This makes it intuitive for them to achieve better performance on the second node, as it is closest to the end node of the path, so less reasoning is needed to achieve good performance.
• On the contrary, MEMO has a 'local view' of the problem: the answer for the second node depends on the answer about the first node. Therefore, it cannot do better than chance if the answer for the first node is not correct." }, { "heading": "B.2 RESULTS", "text": "To better compare the performance of MEMO versus EMN, we also ran another experiment where we tested the models in two conditions:
• The ground-truth answer for the first node was used as the query for the second node.
• The answer predicted by the model for the first node was used as the query for the second node.
The results are summarized in Table 8. In the case of 20 nodes with 5 outbound edges, we can see that if we give MEMO the ground truth for node 1 as the query for node 2, the performance increases relative to that obtained when predicting the first node (85.38% (0.05) vs. 69.20% (0.07)). Interestingly, if we use for EMN the same training regime used for MEMO - i.e. the prediction is used to query the second node - then EMN performs almost at chance level (22.30%). The same results are also confirmed in the simpler scenario with 20 nodes and 3 outbound edges." }, { "heading": "C BABI", "text": "" }, { "heading": "C.1 TRAINING AND EVALUATION DETAILS", "text": "For this experiment we used the English Question Answer dataset (Weston et al., 2015). We use the training and test datasets that they provide, with the following pre-processing:
• All text is converted to lowercase.
• Periods and interrogation marks are ignored.
• Blank spaces are taken as word-separation tokens.
• Commas only appear in answers, and they are not ignored. This means that, e.g. for the path-finding task, the answer 'n,s' has its own label, independent from the answer 'n,w'. This also implies that every input (consisting of 'query' and 'stories') corresponds to a single answer throughout the whole dataset.
• All the questions are stripped out from the text and put separately (given as "queries" to our system).
At training time, we sample a mini-batch of 128 queries from the training dataset, as well as their corresponding stories (which consist of the text prior to the question). As a result, the queries are a matrix of 128 × 11 tokens, and sentences are of size 128 × 320 × 11, where 128 is the batch size, 320 is the maximum number of stories, and 11 is the maximum sentence size. We pad with zeros every query and group of stories that does not reach the maximum sentence and story sizes.
• For EMN and MEMO, stories and query are used as their naturally corresponding inputs in their architecture.
• In the case of DNC, we embed stories and query in the same way it is done for MEMO. Stories and query are presented in sequence to the model (in that order), followed by blank inputs as pondering steps to provide a final prediction.
• For UT, we embed stories and query in the same way it is done for MEMO. Then, we use the encoder of UT with the architecture described in Section H. We use its output as the output of the model.
After that mini-batch is sampled, we perform one optimization step using Adam for all the models that we have run in our experiments, with hyper-parameters detailed in Appendix Section D.2.
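A minimal sketch of the preprocessing and padding just described; the toy vocabulary and sentences are illustrative, and index 0 is reserved for padding:

```python
import numpy as np

def tokenize(line):
    # Lowercase, drop '.' and '?', split on whitespace; commas are kept (answers).
    return line.lower().replace(".", "").replace("?", "").split()

def pad_batch(stories, queries, vocab, max_stories=320, max_len=11):
    # stories: list (batch) of lists of sentences; queries: list (batch) of sentences.
    B = len(queries)
    S = np.zeros((B, max_stories, max_len), dtype=np.int64)
    Q = np.zeros((B, max_len), dtype=np.int64)
    for b, (st, q) in enumerate(zip(stories, queries)):
        for i, sent in enumerate(st[:max_stories]):
            toks = tokenize(sent)[:max_len]
            S[b, i, :len(toks)] = [vocab[w] for w in toks]
        toks = tokenize(q)[:max_len]
        Q[b, :len(toks)] = [vocab[w] for w in toks]
    return S, Q

vocab = {"mary": 1, "went": 2, "to": 3, "the": 4, "kitchen": 5, "where": 6, "is": 7}
S, Q = pad_batch([["Mary went to the kitchen."]], ["Where is Mary?"], vocab)
print(S.shape, Q.shape)  # (1, 320, 11) (1, 11)
```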
We stop after a fixed number of time-steps, as also detailed in D.2.
Many of the tasks in bAbI require some notion of temporal context; to account for this in MEMO, we added a column vector to the memory store with a time encoding derived from Vaswani et al. (2017).
All networks were trained for 2e4 epochs, each one formed by 100 batch updates.
For evaluation, we sample a batch of 10,000 elements from the dataset and compute the forward pass in the same fashion as done in training. With that, we compute the mean accuracy over those examples, as well as the per-task accuracy for each of the 20 tasks of bAbI. We report average values and standard deviations over the best 5 hyper-parameters we used." }, { "heading": "C.2 BABI ABLATIONS", "text": "Results for the best run (chosen by validation set) on the bAbI task. The model was trained and tested jointly on all tasks. All tasks received approximately equal training resources. ✗ = not present; ✓ = present." }, { "heading": "C.3 TASK-WISE RESULTS", "text": "" }, { "heading": "D MEMO TRAINING DETAILS AND HYPER-PARAMETERS", "text": "" }, { "heading": "D.1 TRAINING DETAILS", "text": "To train the MEMO network parameters we use Adam (Kingma & Ba, 2014) with polynomial learning-rate decay, starting at the value $l^{start}_{memo}$, with a batch size always equal to 64 for PAI and shortest path and 128 for bAbI. In all three tasks MEMO was trained using a cross-entropy loss, and the network had to predict the class ID in the paired associative inference task, the node ID in the shortest path problem, and the word ID in bAbI.
The halting policy network parameters were updated using RMSProp (Tieleman & Hinton, 2012), with learning rate $l_{halt}$.
The other parameters are reported in Tables 11 and 12." }, { "heading": "D.3 RANGE OF HYPER-PARAMETERS USED IN SWEEPS", "text": "" }, { "heading": "D.2 FIXED HYPER-PARAMETERS USED ACROSS TASKS", "text": "" }, { "heading": "E MEMO COMPLEXITY ANALYSIS", "text": "In terms of temporal complexity, MEMO has a complexity of $O(n_s \cdot A \cdot N \cdot H \cdot I \cdot S \cdot d)$, where $n_s$ is the number of samples we process with our network, $A$ is the number of answers, $N$ is the upper bound on the number of hops we can take, $H$ is the number of heads used, $I$ is the number of stories, and $S$ is the number of words in each sentence. This is due to the fact that, for each sample, we perform the hopping procedure for every answer, taking a number of hops. For each hop we query our memory by interacting with all of its $I$ slots over their full size $S \times d$. For all our experiments, the parameters $A$, $N$, $H$, $I$, $S$, $d$ are fixed to constants.
Further, it is worth noting that MEMO is linear with respect to the number of sentences in the input, whereas the Universal Transformer has quadratic complexity.
With respect to spatial complexity, MEMO keeps all weights constant, apart from the context information that needs to be used to answer a particular query. Since the context information is the only input-dependent part, the spatial complexity is in this case $O(I \cdot S \cdot d)$, which is the size of our memory. In all our experiments, this size is fixed." }, { "heading": "F ACT DESCRIPTION", "text": "We implement ACT as specified in Graves (2016). Based on our implementation of MEMO, we start by defining the halting unit $h$ as follows:
$$h_t = \sigma(\pi_t) \quad (19)$$
where $\pi_t$ is the binary policy of MEMO.
This is slightly different from the original ACT, which represents such a unit as:
$$h_t = \sigma(W_h s_t + b_h) \quad (20)$$
where $W_h$ and $b_h$ are trainable weights and biases, respectively, and $s_t$ is the previously observed state. We argue that this slight change increases the fairness of the comparison, for two reasons: firstly, $\pi_t(a|s_t, \theta)$ depends on $s_t$, but it uses several non-linearities to do so, rather than being a simple linear projection, so it should enable more powerful representations. Secondly, this makes it much more similar to our model while still being able to evaluate the feasibility of this halting mechanism.
From this point we proceed as in the original work by defining the halting probability:
$$p_t = \begin{cases} R & \text{if } t = T \\ h_t & \text{otherwise} \end{cases} \quad (21)$$
where
$$T = \min\Big\{t' : \sum_{t=1}^{t'} h_t \geq 1 - \epsilon\Big\} \quad (22)$$
where $\epsilon$, as in Graves (2016), is fixed to 0.01 in all experiments. The remainder $R$ is defined as:
$$R = 1 - \sum_{t=1}^{T-1} h_t \quad (23)$$
Finally, the answer provided by MEMO+ACT is defined as:
$$a = \sum_{t=1}^{T} p_t a_t \quad (24)$$
where $a_t$ corresponds to the answer that MEMO has provided at hop $t$." }, { "heading": "G DNC ARCHITECTURE AND HYPERPARAMETERS", "text": "We use the same architecture as described in Graves et al. (2016), with the exact size of each layer described in Table 13.
We also performed a hyperparameter search, with ranges reported in Table 14." }, { "heading": "H UNIVERSAL TRANSFORMER ARCHITECTURE AND HYPERPARAMETERS", "text": "We use the same architecture as described in Dehghani et al. (2018). More concretely, we use the implementation and hyperparameters described as 'universal_transformer_small' that is available at https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/models/research/universal_transformer.py. For completeness, we describe the hyperparameters used in Table 15.
We also performed a search on hyperparameters to train on our tasks, with ranges reported in Table 16." } ]
2020
MEMO: A DEEP NETWORK FOR FLEXIBLE COMBINATION OF EPISODIC MEMORIES
SP:b054b02760d839fe09152fbdc75e3090d147345b
[ "In this paper, the authors propose generalize the FM to consider both interaction between features and interaction between samples. For the interaction between features, the authors propose to use graph convolution to capture high-order feature interactions. Moreover, the authors construct a graph on the instances based on similarity. Then a GCN is applied to the sample graph where the feature embedding is shared between the two components. Experiments are carried out on four datasets with tasks of link prediction and regression. Comparison to several baselines demonstrate the superior performance of the proposed method.", "This paper proposes to combine the graph neural networks and factorization machines. First, the authors propose a relational feature interaction component (RFO) tp deal with the categorical features. This component first uses the factorization machine to project the features to h^FI(x), then it uses an aggregation operation to get the prediction y^RFI. To explore high-order correlations, the authors further propose to calculate a concurrence graph, on which RFI propagates the embedding vectors to get relational high-order correlations. To further model high-order sample interactions, this work then presents a special graph convolutional operation that considers the element-wise products of the encoded features. " ]
Factorization Machines (FMs) are an important supervised learning approach due to their unique ability to capture feature interactions when dealing with high-dimensional sparse data. However, FMs assume each sample is independently observed and are hence incapable of exploiting the interactions among samples. On the contrary, Graph Neural Networks (GNNs) have become increasingly popular due to their strength at capturing the dependencies among samples. But unfortunately, they cannot efficiently handle high-dimensional sparse data, which is quite common in modern machine learning tasks. In this work, to leverage their complementary advantages and yet overcome their issues, we propose a novel approach, namely Deep Relational Factorization Machines, which can capture both the feature interaction and the sample interaction. In particular, we disclose the relationship between the feature interaction and the graph, which opens a brand new avenue to deal with high-dimensional features. Finally, we demonstrate the effectiveness of the proposed approach with experiments on several real-world datasets.
[]
[ { "authors": [ "Mathieu Blondel", "Akinori Fujino", "Naonori Ueda", "Masakazu Ishihata" ], "title": "Higher-order factorization machines", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Xiaoshuang Chen", "Yin Zheng", "Jiaxing Wang", "Wenye Ma", "Junzhou Huang" ], "title": "Rafm: Rank-aware factorization machines", "venue": "arXiv preprint arXiv:1905.07570,", "year": 2019 }, { "authors": [ "Huifeng Guo", "Ruiming Tang", "Yunming Ye", "Zhenguo Li", "Xiuqiang He" ], "title": "Deepfm: a factorizationmachine based neural network for ctr prediction", "venue": "arXiv preprint arXiv:1703.04247,", "year": 2017 }, { "authors": [ "Wei Guo", "Ruiming Tang", "Huifeng Guo", "Jianhua Han", "Wen Yang", "Yuzhou Zhang" ], "title": "Order-aware embedding neural network for ctr prediction", "venue": "In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2019 }, { "authors": [ "Xiangnan He", "Tat-Seng Chua" ], "title": "Neural factorization machines for sparse predictive analytics", "venue": "In Proceedings of the 40th International ACM SIGIR conference on Research and Development in Information Retrieval,", "year": 2017 }, { "authors": [ "Yuchin Juan", "Yong Zhuang", "Wei-Sheng Chin", "Chih-Jen Lin" ], "title": "Field-aware factorization machines for ctr prediction", "venue": "In Proceedings of the 10th ACM Conference on Recommender Systems,", "year": 2016 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Jianxun Lian", "Xiaohuan Zhou", "Fuzheng Zhang", "Zhongxia Chen", "Xing Xie", "Guangzhong Sun" ], "title": "xdeepfm: Combining explicit and implicit feature interactions for recommender systems", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Yanru Qu", "Han Cai", "Kan Ren", "Weinan Zhang", "Yong Yu", "Ying Wen", "Jun Wang" ], "title": "Product-based neural networks for user response prediction", "venue": "IEEE 16th International Conference on Data Mining (ICDM),", "year": 2016 }, { "authors": [ "Steffen Rendle" ], "title": "Factorization machines", "venue": "IEEE International Conference on Data Mining,", "year": 2010 }, { "authors": [ "Steffen Rendle" ], "title": "Social network and click-through prediction with factorization machines", "venue": "In KDDCup Workshop,", "year": 2012 }, { "authors": [ "Steffen Rendle", "Zeno Gantner", "Christoph Freudenthaler", "Lars Schmidt-Thieme" ], "title": "Fast contextaware recommendations with factorization machines", "venue": "In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval,", "year": 2011 }, { "authors": [ "Anh-Phuong Ta" ], "title": "Factorization machines with follow-the-regularized-leader for ctr prediction in display advertising", "venue": "IEEE International Conference on Big Data (Big Data),", "year": 2015 }, { "authors": [ "Ruoxi Wang", "Bin Fu", "Gang Fu", "Mingliang Wang" ], "title": "Deep & cross network for ad click predictions", "venue": "In Proceedings of the ADKDD’17,", "year": 2017 }, { "authors": [ "Xiang Wang", "Xiangnan He", "Meng Wang", "Fuli Feng", "Tat-Seng Chua" ], "title": "Neural graph collaborative filtering", "venue": "arXiv preprint arXiv:1905.08108,", "year": 2019 }, { "authors": [ "Gang Wu", 
"Viswanathan Swaminathan", "Saayan Mitra", "Ratnesh Kumar" ], "title": "Context-aware video recommendation based on session progress prediction", "venue": "IEEE International Conference on Multimedia and Expo (ICME),", "year": 2017 }, { "authors": [ "Jun Xiao", "Hao Ye", "Xiangnan He", "Hanwang Zhang", "Fei Wu", "Tat-Seng Chua" ], "title": "Attentional factorization machines: Learning the weight of feature interactions via attention networks", "venue": "arXiv preprint arXiv:1708.04617,", "year": 2017 }, { "authors": [ "Rex Ying", "Ruining He", "Kaifeng Chen", "Pong Eksombatchai", "William L Hamilton", "Jure Leskovec" ], "title": "Graph convolutional neural networks for web-scale recommender systems", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Weinan Zhang", "Tianming Du", "Jun Wang" ], "title": "Deep learning over multi-field categorical data", "venue": "In European conference on information retrieval,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many supervised learning tasks need to model data with numerous categorical features, which is usually converted into a set of binary features using one-hot encoding. However, when the original categorical features have high cardinalities, such data becomes high-dimensional and sparse. The difficulty in modeling such data is that, most machine learning techniques rely on co-occurrence of features to model their interactions, while in sparse data such co-occurrences are relatively rare compared to the number of possible feature combinations, and hence over-fitting occurs. This is particularly common in web applications, which may involve high-cardinality categorical features such as user IDs, item IDs, and ZipCodes, etc. Factorization Machines (FMs) was introduced byRendle (2010) to model such high-dimensional sparse data. The key idea of FMs is learning a latent vector of each one-hot encoded feature, and capture an arbitrary pairwise (order-2) interaction by inner product of respective latent vectors. The success of FMs has been evidenced by applications such as click-through rate prediction (Rendle, 2012; Ta, 2015) and recommendation (Rendle et al., 2011; Wu et al., 2017).\nTo further improve the performance of FMs, numerous variants have been proposed (Blondel et al., 2016; Chen et al., 2019; Guo et al., 2017; He & Chua, 2017; Juan et al., 2016; Lian et al., 2018; Qu et al., 2016; Wang et al., 2017; Xiao et al., 2017). For instance, Field-aware Factorization Machine (FFM) (Juan et al., 2016) is proposed to conduct fine-grained feature interaction. With the development of deep neural networks in recent years, some deep variants have been proposed. For instance, DeepFM (Guo et al., 2017) combines FM and deep neural networks (DNN) to do both the second-order and high-order feature interaction. Deep & Cross Network (DCN) (Wang et al., 2017) stacks multiple interaction layer to learn the high-order feature interaction. Attentional Factorization Machines (Xiao et al., 2017) employs a neural attention network to additionally weight each interaction term in FMs. Order-aware Embedding Neural Network (Guo et al., 2019) learns an order-specific latent vector for each binary feature in Higher-order Factorization Machines.\nHowever, all the aforementioned FMs variants only focus on the feature interaction. In real-world applications, there also exists sample interaction. For instance, when predicting CTR in a social network, two users in the same basketball group have a large probability to click the same advertisement of basketball shoes since they are supposed to have similar hobbies for basketball. Thus, it is necessary and helpful to incorporate the sample interaction when conducting prediction. Graph Convolutional Networks (GCN) and the ilk (Kipf & Welling, 2016) use a graph convolution opera-\ntion to capture the correlation between nodes in a graph. Specifically, this operation aggregates all neighbors’ information when making prediction for each node (sample) in a graph. As a result, the prediction encodes the interaction between nodes. Recently, GCN has been widely used in numerous applications to capture the interaction in the sample level, such as recommendation (Wang et al., 2019; Ying et al., 2018). 
However, although GCN can incorporate the sample interaction, it is difficult for GCN to deal with sparse categorical features well.
In summary, FMs and GCNs are two general classes of techniques that are typically used for different applications. Both have been shown to be state-of-the-art in their own respective areas. Meanwhile, they also suffer from their intrinsic drawbacks. To overcome the disadvantages of using either approach independently while inheriting the complementary advantages of both approaches, a straightforward solution is to combine these two technologies. However, how to seamlessly unify these two independent models to capture both the feature interaction and the sample interaction is challenging. To address this issue, we propose a novel Deep Relational Factorization Machine (DRFM). In particular, to model the relationship between different features, we tackle it from the perspective of graphs. More specifically, feature interactions of different orders are reformulated as paths in a feature concurrence graph, with which our method can easily capture feature interactions of different orders by using a graph convolutional operation. As far as we know, this is the first work dealing with the feature interaction from the graph view. Moreover, to model the interaction between different samples, we propose a novel sample interaction component which can capture the high-order sample interaction both linearly and exponentially. Extensive experimental results confirm the effectiveness of our proposed method. Finally, we summarize the contributions of this work as follows:
• We disclose the relationship between the feature interaction and the graph, and propose a novel graph-based method to deal with the feature interaction. This opens a new avenue to deal with high-dimensional categorical features.
• We propose a general framework that fuses FMs and GCNs into a single unified learning approach, called DRFM. It overcomes the disadvantages of using either approach independently while inheriting the complementary advantages of both approaches.
• We demonstrate the effectiveness of DRFM for both link prediction and regression tasks." }, { "heading": "2 RELATED WORK", "text": "Factorization Machines (FM) were first proposed by Rendle (2010). With a factorized interaction term, FM is good at dealing with data with sparse categorical features. However, the standard FM can only capture the second-order feature interaction. To utilize the high-order feature interaction, Blondel et al. (2016) proposed the Higher-Order Factorization Machine (HOFM), which explicitly incorporates high-order feature combinations. In addition, to conduct fine-grained feature interaction, Juan et al. (2016) proposed the Field-aware Factorization Machine (FFM), which assigns multiple latent representations to each feature in terms of feature groups. Similarly, Chen et al. (2019) also proposed to represent each feature with multiple latent representations according to the frequency of feature occurrences.
Recently, deep neural networks (DNN) have shown promising performance on a wide variety of tasks, such as computer vision and natural language processing. Inspired by this, some researchers proposed to combine DNN with FM to fully utilize their advantages.
For instance, Factorization-machine supported Neural Networks (FNN) first pre-train a factorization machine to get the latent representations of features and then feed these representations to a DNN to learn high-order feature interactions implicitly (Zhang et al., 2016). To train FM and DNN in an end-to-end way, the Product-based Neural Network (PNN) proposed by Qu et al. (2016) introduced a product layer to connect the feature embedding layer and the DNN layers. However, both of these models focus only on the high-order feature interaction, ignoring the low-order interaction. To address this issue, Guo et al. (2017) proposed DeepFM, which models FM and DNN in two branches and trains them simultaneously. Wang et al. (2017) proposed the Deep & Cross Network (DCN) to explicitly capture feature interactions of different orders. Similarly, xDeepFM (Lian et al., 2018) also aims at capturing feature interactions of different orders, but it uses the inner product rather than the outer product like DCN.
As discussed earlier, FM and its variants aim at capturing the feature interaction. In some real-world applications, it is necessary to capture the sample interaction. To do that, Kipf & Welling (2016) proposed graph convolutional networks (GCN). Specifically, GCN employs the graph convolutional operation to capture the interaction between samples and their neighbors. Recently, GCN has been applied to various tasks to capture the sample interaction. For instance, Ying et al. (2018) proposed PinSage to explore the item-item interaction in recommender systems. Wang et al. (2019) proposed Neural Graph Collaborative Filtering (NGCF) to utilize the user-item interaction for recommendation." }, { "heading": "3 PRELIMINARIES", "text": "In this section, we present some preliminaries about factorization machines and graph convolutional neural networks.
Throughout this paper, a graph is represented by $G = (V, E)$, where $V = \{v_i\}$ represents the node set and $E = \{(v_i, v_j)\}$ represents the edge set. In this paper, we focus on attributed graphs, and the node feature matrix is represented by $X = [x_1, x_2, \cdots, x_n] \in \mathbb{R}^{d \times n}$.
In this work, we focus on high-dimensional categorical features, which are very common in real-world applications, such as recommendation and CTR prediction. Specifically, we assume the feature vector of each node is represented as $x_i = [0, 1, 0, \cdots, 1, 1, 0, 0]^T \in \mathbb{R}^d$, whose features are categorical and whose number of non-zero values is much less than $d$. In addition, if a node has a ground truth (e.g. in the regression task in our experiments), it is denoted by $Y = [y_i] \in \mathbb{R}^n$. Note that we will use samples and nodes interchangeably throughout this paper.
Based on the aforementioned terminology, the Factorization Machine (FM) is defined as follows:
$$\hat{y}_i = b + \mathbf{w}^T \mathbf{x}_i + \sum_{p<q} \langle \mathbf{v}_p, \mathbf{v}_q \rangle x_{i,p} x_{i,q} \quad (1)$$
where $\hat{y}_i \in \mathbb{R}$ denotes the prediction for node $v_i$, $\mathbf{x}_i = [x_{i,p}] \in \mathbb{R}^d$ represents the node features, where $x_{i,p}$ is the $p$-th feature of node $v_i$, and $\mathbf{v}_p \in \mathbb{R}^k$ stands for the embedding of the $p$-th feature. Compared with a regular linear model, FM can capture the interaction between different features. Specifically, in the non-linear term, the dot product $\langle \mathbf{v}_p, \mathbf{v}_q \rangle$ computes the interaction between features $x_{i,p}$ and $x_{i,q}$. However, FM can only capture the interaction inside each node, ignoring the interaction between different nodes.
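To make Eq. (1) concrete, the following is a minimal NumPy sketch of the second-order FM forward pass; it uses the standard O(dk) reformulation of the pairwise sum due to Rendle (2010), and all dimensions and random inputs are illustrative:

```python
import numpy as np

def fm_forward(x, b, w, V):
    """Second-order FM of Eq. (1).

    x: (d,) sparse binary features, w: (d,) linear weights, V: (d, k) latent vectors.
    Uses the identity:
      sum_{p<q} <v_p, v_q> x_p x_q = 0.5 * (||V^T x||^2 - sum_p ||v_p||^2 x_p^2).
    """
    linear = b + w @ x
    vx = V.T @ x                                          # (k,)
    pairwise = 0.5 * (vx @ vx - np.sum((V ** 2).T @ (x ** 2)))
    return linear + pairwise

rng = np.random.default_rng(0)
d, k = 10, 4
x = (rng.uniform(size=d) < 0.3).astype(float)             # sparse one-hot-style input
print(fm_forward(x, 0.1, rng.normal(size=d), rng.normal(size=(d, k))))
```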
Convolution is an effective operator to capture local correlation. The regular convolutional operator is used to extract features by exploring the feature correlation. On the contrary, the graph convolutional operator is proposed to explore the sample correlation. Specifically, the graph convolutional operation in the $l$-th hidden layer of a Graph Convolutional Neural Network (GCN) is defined as follows:
$$z_i^{l+1} = \frac{1}{\sqrt{|\mathcal{N}(i)|}} \sum_{i' \in \mathcal{N}(i)} \frac{1}{\sqrt{|\mathcal{N}(i')|}} W^{l+1} h_{i'}^{l}, \qquad h_i^{l+1} = f(z_i^{l+1}) \quad (2)$$
where $h_i^l \in \mathbb{R}^{d_l}$ denotes the hidden representation of node $v_i$ in the $l$-th layer, $W^{l+1} \in \mathbb{R}^{d_l \times d_{l+1}}$ represents the model parameters, $\mathcal{N}(i)$ indicates the neighbors of node $v_i$, and $f(\cdot)$ stands for the non-linear activation function. It can be seen that the representation $z_i^{l+1}$ of the $i$-th sample is constructed by aggregating its neighbors. In this way, GCN can capture the sample interaction. However, although GCN can explore the interaction between different samples, it is not good at exploring the feature interaction. In the following section, we will propose a new model to address these two issues of FM and GCN.
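Analogously, a single graph-convolution step in the spirit of Eq. (2) can be sketched as follows; the symmetric 1/sqrt(degree) normalization and ReLU follow the equation, while the random graph and sizes are illustrative:

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution step following Eq. (2).

    H: (n, d_l) node representations, A: (n, n) binary adjacency, W: (d_l, d_out).
    Aggregates neighbours with symmetric normalization, then applies a linear
    map and a ReLU non-linearity.
    """
    deg = np.maximum(A.sum(axis=1), 1.0)
    deg_inv_sqrt = 1.0 / np.sqrt(deg)
    A_norm = deg_inv_sqrt[:, None] * A * deg_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

rng = np.random.default_rng(0)
n, d_in, d_out = 5, 8, 16
A = (rng.uniform(size=(n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)                       # symmetrize for this illustration
H = rng.normal(size=(n, d_in))
print(gcn_layer(H, A, rng.normal(size=(d_in, d_out))).shape)   # (5, 16)
```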
" }, { "heading": "4 DEEP RELATIONAL FACTORIZATION MACHINE", "text": "As shown in Eq. (1), the regular FM (Rendle, 2010) only considers the interaction between different features, ignoring the interaction between different samples. In many real-world applications, the sample interaction might be important for prediction. For instance, in a recommender system, the interaction between users and items is important for making accurate recommendations. If we ignore this kind of interaction, it will be difficult to find potential items to recommend. Another good example is campaign performance prediction in online advertising. Specifically, an advertiser may change only a small part of its original campaign, such as the targeting countries, and launch a new campaign. As a result, these two campaigns share a lot of common information. If we can capture the correlation between these two campaigns, it will be beneficial for getting better prediction results.
Based on the aforementioned intuition, a natural question is: how do we capture the feature interaction and the sample interaction simultaneously? A straightforward method is to combine them directly as follows:
$$y_i = y_i^{FM} + y_i^{GCN}, \qquad y_i^{FM} = \mathbf{w}^T \mathbf{x}_i + \sum_{p<q} \langle \mathbf{v}_p, \mathbf{v}_q \rangle x_{i,p} x_{i,q}, \qquad y_i^{GCN} = g\Big(f\Big(\frac{1}{\sqrt{|\mathcal{N}(i)|}} \sum_{i' \in \mathcal{N}(i)} \frac{1}{\sqrt{|\mathcal{N}(i')|}} W^{l+1} x_{i'}^{l}\Big)\Big) \quad (3)$$
where $g(\cdot)$ denotes the prediction function based on node features. Although this straightforward method can explore the feature interaction and sample interaction simultaneously, it can be seen that the GCN and FM parts are almost independent. Specifically, the prediction from FM does not use the sample interaction, and that from GCN does not involve the feature interaction either.
To address this issue, we propose the Deep Relational Factorization Machine (DRFM). In detail, our proposed DRFM has two components: the sample interaction component and the relational feature interaction component. As for the sample interaction component, we propose a novel sample interaction layer which acts on the sample graph. As for the relational feature interaction component, we propose to capture both the high-order feature interaction based on the feature graph and the sample interaction based on the sample graph." }, { "heading": "4.1 RELATIONAL FEATURE INTERACTION", "text": "The relational feature interaction (RFI) component aims at dealing with the categorical features to capture the feature interaction. At the same time, it should capture the sample interaction. Based on these goals, the prototype of our relational feature interaction component is defined as follows:
$$h^{FI}(\mathbf{x}_i) = \mathbf{w}^T \mathbf{x}_i + \sum_{p<q} \langle \mathbf{v}_p, \mathbf{v}_q \rangle x_{i,p} x_{i,q}, \qquad y_i^{RFI} = \frac{1}{\sqrt{|\mathcal{N}(i)|}} \sum_{i' \in \mathcal{N}(i)} \frac{1}{\sqrt{|\mathcal{N}(i')|}} h^{FI}(\mathbf{x}_{i'}) \quad (4)$$
where $h^{FI}(\mathbf{x}_{i'})$ denotes the feature-interaction kernel, which is used to deal with categorical features to capture the feature interaction, and $y_i^{RFI}$ represents the prediction for node $v_i$ from the RFI component, which considers both the feature interaction and the sample interaction. Compared with the naive method, we can see that our proposed RFI can capture the sample interaction when dealing with categorical features, while the naive method cannot.
However, Eq. (4) can only capture the second-order interaction between different features, ignoring high-order interactions. Second order may not be enough due to the complexity of real-world datasets; thus, it is necessary to capture the high-order feature interaction. In addition, the relationship between different features might be highly non-linear, so it is important to explore the non-linearity between different features. To address these issues, we further propose a novel high-order feature-interaction kernel. Specifically, we deal with this issue from a totally new perspective. In particular, given a sample with categorical features, we can construct a feature concurrence graph in terms of the concurrence between different features. For instance, given $\mathbf{x} = [0, 1, 0, 1, 1, 0, 0]$, where the second, fourth, and fifth features appear simultaneously, there should be a link between them in the concurrence graph due to their concurrence, which is shown as follows:
$$G = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \quad (5)$$
For the concurrence graph, each feature is viewed as a node in the graph. As a result, a path in this graph indicates the concurrence of the features along it. A long path then corresponds to a high-order feature interaction, while a short path corresponds to a low-order feature interaction. Consequently, many graph operations can be used to deal with high-dimensional categorical features. In particular, inspired by the graph convolutional operation, we propose the following model to capture the high-order interaction between different features layer by layer:
$$v_p^{l+1} = \text{graph\_conv}(v_p^0, v_q^l), \qquad v_p^0 = \sigma(W v_p^0), \qquad v_p^{l+1} = \sigma(W v_p^{l+1}), \qquad h_i^{l+1} = \sum_{p: x_{i,p}=1} v_p^{l+1} \quad (6)$$
where $v_p^l$ denotes the embedding of the $p$-th feature in the $l$-th layer; $v_p^l$ encodes the high-order interaction in the higher layers. Here, we use graph\_conv to capture the interaction between different features. Unlike the standard graph convolutional operation, we propose the following interaction kernel:
$$\text{graph\_conv}(v_p^0, v_q^l) = v_p^0 \circ \sum_{q: G_{pq}=1} v_q^l \quad (7)$$
where $\circ$ denotes the element-wise product. It can be seen that we always use the embedding in the input layer, $v_p^0$, to interact with that in the higher layer. In this way, the first layer captures the second-order interaction, the second layer captures the third-order interaction, and the higher layers capture higher-order interactions. In other words, our method can capture the high-order interaction linearly. On the contrary, if we used $v_p^l$ to do the product, the order would increase in an exponential way, which might be too aggressive.
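To illustrate Eqs. (5)-(7), the sketch below builds the concurrence graph for the example x above and runs a few interaction layers. Sharing one weight matrix W across layers mirrors the notation of Eq. (6), though per-layer weights would also be consistent with the text; row-vector conventions and the layer count are assumptions:

```python
import numpy as np

def concurrence_graph(x):
    # G[p, q] = 1 iff features p and q co-occur in x (self-loops kept), as in Eq. (5).
    d = len(x)
    active = np.flatnonzero(x)
    G = np.eye(d)
    for p in active:
        G[p, active] = 1.0
    return G

def graph_conv(V0, Vl, G):
    # Interaction kernel of Eq. (7): v_p^{l+1} = v_p^0 o sum_{q: G_pq = 1} v_q^l.
    return V0 * (G @ Vl)

rng = np.random.default_rng(0)
x = np.array([0, 1, 0, 1, 1, 0, 0])
G = concurrence_graph(x)
d, k = len(x), 4
W = rng.normal(size=(k, k))
V = np.maximum(rng.normal(size=(d, k)) @ W, 0.0)       # sigma(W v_p^0), Eq. (6)
layers, Vl = [], V
for _ in range(3):                                      # three interaction layers
    Vl = np.maximum(graph_conv(V, Vl, G) @ W, 0.0)      # kernel then sigma(W .)
    layers.append(Vl[x == 1].sum(axis=0))               # h_i^{l+1}: sum over active p
h_FI = np.concatenate(layers)   # concatenation across orders; the neighbour
print(h_FI.shape)               # averaging of Eq. (8) over samples is omitted here
```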
Moreover, unlike existing high-order methods (Wang et al., 2017; Lian et al., 2018), we conduct a non-linear transformation of the feature embedding in each layer, as shown in the second and third formulations of Eq. (6), to handle highly non-linear relationships. Finally, to capture feature interactions of different orders, we concatenate $h_i^{l+1}$ across all layers as follows:
$$h_i^{FI} = [(h_i^1)^T, (h_i^2)^T, \cdots, (h_i^L)^T]^T, \qquad h_i^{RFI} = \frac{1}{\sqrt{|\mathcal{N}(i)|}} \sum_{i' \in \mathcal{N}(i)} \frac{1}{\sqrt{|\mathcal{N}(i')|}} h_{i'}^{FI} \quad (8)$$
In summary, we proposed a novel feature interaction component based on the feature concurrence graph, which can capture the high-order interaction and explore the non-linearity. As far as we know, this is the first work trying to deal with feature interaction from the graph perspective. This will open a new avenue to tackle high-dimensional categorical features by using graph operations." }, { "heading": "4.2 SAMPLE INTERACTION", "text": "With our proposed RFI component, our model can capture the high-order feature interaction. However, the RFI component cannot capture the high-order sample interaction. To address this issue, we develop a novel sample-interaction (SI) component, which is used to further explore the interaction between different samples. Moreover, the sample-interaction component should benefit from the RFI component. Therefore, we enforce that the SI component shares the same feature embedding with the RFI component. In this way, the feature embedding is updated by both components. On the contrary, in the naive method, the two components only share the raw input features: the high-order sample interaction information cannot be used to update the feature embedding. As a result, in our method, the two components can benefit from each other.
Specifically, the SI component is defined as follows:
$$\hat{h}_i^l = h_i^l + \frac{1}{\sqrt{|\mathcal{N}(i)|}} \sum_{i' \in \mathcal{N}(i)} \frac{1}{\sqrt{|\mathcal{N}(i')|}} h_{i'}^l \circ h_i^l, \qquad h_i^{l+1} = \sigma(W^{l+1} \hat{h}_i^l) \quad (9)$$
where $h_i^0 = \sum_{p: x_{i,p}=1} \mathbf{v}_p$. Compared with the regular graph convolutional operation, our method conducts an explicit sample interaction via $h_{i'}^l \circ h_i^l$. In addition, by using a residual connection, our method can capture the sample interaction both linearly and exponentially.
Similar to the feature interaction component, to get different orders of sample interaction, we keep all the intermediate $h_i^l$. Then, we concatenate all of them as the node representation:
$$h_i^{SI} = [(h_i^1)^T, (h_i^2)^T, \cdots, (h_i^L)^T]^T \quad (10)$$
Finally, after obtaining the representations from these two components, we concatenate them together for prediction as follows:
$$\hat{y} = [(h_i^{RFI})^T, (h_i^{SI})^T] W \quad (11)$$
With this prediction, we can use our model for both classification and regression tasks.
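A minimal sketch of the sample-interaction step of Eqs. (9)-(10); the random adjacency, sizes, two-layer depth, and row-vector convention are illustrative assumptions:

```python
import numpy as np

def si_layer(H, A, W):
    """One sample-interaction step following Eq. (9).

    H: (n, d) sample representations, A: (n, n) sample adjacency, W: (d, d_out).
    Residual term plus a degree-normalized sum of element-wise products h_i' o h_i.
    """
    deg = np.maximum(A.sum(axis=1), 1.0)
    A_norm = A / np.sqrt(deg[:, None] * deg[None, :])
    H_hat = H + (A_norm @ H) * H          # explicit sample interaction, with residual
    return np.maximum(H_hat @ W, 0.0)     # sigma(W h_hat), ReLU assumed

rng = np.random.default_rng(0)
n, d = 6, 8
A = (rng.uniform(size=(n, n)) < 0.5).astype(float)
A = np.maximum(A, A.T)
H = rng.normal(size=(n, d))               # h_i^0: sum of active feature embeddings
reps = []
for W in (rng.normal(size=(d, d)), rng.normal(size=(d, d))):
    H = si_layer(H, A, W)
    reps.append(H)
h_SI = np.concatenate(reps, axis=1)       # Eq. (10): concatenate all layer outputs
print(h_SI.shape)                          # (6, 16)
```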
" }, { "heading": "5 EXPERIMENTS", "text": "In this section, we design experiments to verify the performance of the proposed approach." }, { "heading": "5.1 DATASETS", "text": "• Modcloth is a dataset where users rate the clothes they bought. Each user has five attributes: [user id, bra size, cup size, hips, height]. Each cloth has three attributes: [item id, size, category]. There are 47,185 users and 1,364 clothes.
• Renttherunway is a dataset where 88,178 users rate 5,795 clothes they rented. Here, each user has six attributes: [user id, weight, body type, age, bust size, height], while each cloth has three attributes: [item id, size, category].
• Book-crossing contains historical rating information for books by users. Users have three attributes: [user id, location, age]. Books also have three attributes: [isbn, yearofpublication, publisher]. In addition, the number of users is 278,858 and the number of books is 271,360.
• Company-X-CTR is an advertiser-level CTR dataset which includes the winning rate of each campaign bidding for exposure. Each campaign has 63 categorical attributes.
In our experiments, all attributes of these datasets are transformed into categorical features. Then, we use one-hot encoding to represent the categorical features. Moreover, each of the first three datasets forms a bipartite graph. As for the last dataset, the campaigns form a general graph. Finally, we summarize the statistics of these datasets in Table 1." }, { "heading": "5.2 EXPERIMENTAL SETTINGS", "text": "Throughout our experiments, we evaluate our method on two tasks: link prediction (Section 5.3) and regression (Section 5.4). For the link prediction task, we use the first three datasets. In particular, all the existing links in a graph are viewed as positive links, while non-existing links are treated as negative ones. We randomly select 10% of the positive links for the training set and 10% of the positive links for the validation set. The remaining positive links are used for the testing set. In addition, we randomly select the negative links for these three sets, where the number of negative links is the same as that of positive links in each set. As for the regression task, we use the Company-X-CTR data; campaigns with the same placement id are connected to construct the graph. The winning rate of each campaign is normalized to [0, 1].
To evaluate the performance of our proposed method, we compare it with 5 baseline methods, which are described as follows:
• DeepFM (Guo et al., 2017) combines the standard FM and a multi-layer perceptron (MLP) network, where the input of the MLP is the feature embedding from FM. DeepFM can capture the second-order feature interaction explicitly via FM and the high-order feature interaction implicitly via the MLP.
• DCN (Wang et al., 2017) is proposed to capture the high-order feature interaction explicitly by stacking multiple interaction layers together.
• PNN (Qu et al., 2016) uses a product layer to capture the feature interaction explicitly and then stacks an MLP over the product layer to capture the high-order interaction implicitly.
• xDeepFM (Lian et al., 2018) also aims at capturing the high-order feature interaction explicitly layer by layer like DCN, but it uses a different feature interaction layer from DCN.
• GCN (Kipf & Welling, 2016) is a graph convolutional neural network whose goal is to capture the correlation among samples for prediction.
Throughout our experiments, we set the embedding size of each feature to 10 for all methods. As for the sample interaction component of our method, the dimensions are set to [16, 16]. To make a fair comparison, GCN and the baseline methods with an MLP component are set in the same way. As for the feature interaction component of our method, the dimensions are set to [10, 10, 10]. Similarly, baseline methods with a high-order feature interaction component are set to the same dimensions. Moreover, the batch size is set to 1024 and the learning rate is set to 0.0001.
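A sketch of the link-prediction data protocol described above (10%/10%/80% positive split with an equal number of sampled negatives); the toy bipartite links and sizes are illustrative stand-ins for the real datasets:

```python
import numpy as np

rng = np.random.default_rng(0)

def split_links(pos_links, n_users, n_items, train_frac=0.1, val_frac=0.1):
    # pos_links: array of (user, item) pairs that exist in the bipartite graph.
    pos = set(map(tuple, pos_links))
    perm = rng.permutation(len(pos_links))
    n_tr = int(train_frac * len(perm)); n_va = int(val_frac * len(perm))
    parts = np.split(pos_links[perm], [n_tr, n_tr + n_va])  # train / val / test
    out = []
    for part in parts:
        negs = []
        while len(negs) < len(part):                         # one negative per positive
            u, i = rng.integers(n_users), rng.integers(n_items)
            if (u, i) not in pos:
                negs.append((u, i))
        X = np.vstack([part, np.array(negs)])
        y = np.concatenate([np.ones(len(part)), np.zeros(len(negs))])
        out.append((X, y))
    return out  # [(X_train, y_train), (X_val, y_val), (X_test, y_test)]

links = np.array([(u, u % 7) for u in range(100)])
train, val, test = split_links(links, n_users=100, n_items=7)
print(train[0].shape, val[0].shape, test[0].shape)
```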
" }, { "heading": "5.3 LINK PREDICTION", "text": "In Table 2, we show the results for link prediction across a number of graphs from different domains. In all cases, DRFM outperforms the baseline methods across all the different real-world datasets. Compared to the best baseline method, DRFM achieves a gain in AUC of 1.6% for Modcloth, 4.9% for Renttherunway, and 0.4% for Book-crossing. From Table 2, we also observe that by combining FMs and GCNs, the standard deviation in AUC is much smaller than for other FM-based methods and typically less than for GCN as well. These results indicate the utility of combining FMs and GCNs for learning a more accurate and robust model for prediction.
To further verify the effectiveness of our proposed high-order feature interaction component, we compare DRFM with a variant which replaces the high-order feature interaction component with the regular FM component. We call it DRFM-second, since it can only capture the second-order feature interaction. Due to space limitations, we only report the result on Modcloth in Figure 1. It can be seen that DRFM with our high-order feature interaction component performs much better than the variant with only the regular FM component, which confirms the effectiveness of our high-order feature interaction component." }, { "heading": "5.4 REGRESSION", "text": "To further verify the performance of our proposed method, we use DRFM to predict the winning rate of a campaign bidding for exposure. Here, each campaign is configured with different attributes, such as targeting countries, targeting device types, user segment rules, etc. Advertisers might change only one or two attributes and launch another campaign. This new campaign and its original campaign share a lot of common information, so they are highly correlated. Thus, it is beneficial to capture the relationship between different campaigns when making predictions. To this end, we construct a graph over campaigns. Specifically, in this dataset, such new campaigns and their original campaigns share the same ID. Thus, we can construct the graph in terms of the shared ID. More specifically, two campaigns are connected if they share the same ID.
The result is shown in Figure 2. Here, to measure the performance of different methods, we use Root Mean Squared Error (RMSE) as the metric. A smaller value indicates a better result. It can be seen that the regular GCN performs worse than all the other baseline methods. The possible reason is that GCN does not utilize the feature interaction for prediction, while feature interaction is confirmed to be a powerful technique in high-dimensional CTR prediction. Moreover, our method DRFM outperforms all state-of-the-art FMs, which confirms the effectiveness of incorporating the relational information for prediction." }, { "heading": "6 CONCLUSION", "text": "In this work, we described a new class of models that combine Factorization Machines (FMs) and Graph Neural Networks (GNNs) into a unified learning approach. By seamlessly combining FMs and GNNs, we obtain the unique advantages offered by each while overcoming the issues that arise when either is used independently. Using real-world data from different domains, we demonstrated the effectiveness of combining GNNs and FMs for both link prediction and regression tasks. While this work demonstrated the utility of combining FMs and GNNs in a single unified learning framework, there remain many open research problems to investigate in future work. One important future direction is to explore other FM and GNN variants (besides the vanilla ones used in this work) and systematically investigate the utility and effectiveness of these different combinations." } ]
2019
null
SP:c3d608213089ac61f4887e18c5c1e58363c78a09
[ "This paper introduces a new kind of algorithmic fairness framework where the focus is on first finding a fair classifier that does \"no harm\" and then in a subsequent step potentially allow doing harm in order to achieve even fairer outcomes. Fairness is here understood as risk disparity: how different are the risks achieved by our model in the various subgroups. The risk is task-dependent and can be something like a cross-entropy loss for classification problems. The goal is to have similar risks in the subgroups that correspond to sensitive attributes.", "This paper considers the notion of \"no-harm\" group fairness, i.e. trying to reduce the risk gap between minority and majority groups without excessive reduction in performance on the majority groups. Authors formalize the problem by defining a Pareto fair classifier, i.e. one that minimizes the risk gaps between groups and belongs to the family of Pareto classifiers containing the classifier minimizing the empirical risk. Authors suggest an optimization procedure for finding the Pareto fair classifier and demonstrate its performance on multiple datasets." ]
Common fairness definitions in machine learning focus on balancing various notions of disparity and utility. In this work we study fairness in the context of risk disparity among sub-populations. We introduce the framework of Pareto-optimal fairness, where the goal of reducing risk disparity gaps is secondary only to the principle of not doing unnecessary harm, a concept that is especially applicable to high-stakes domains such as healthcare. We provide analysis and methodology to obtain maximally-fair no-unnecessary-harm classifiers on finite datasets. We argue that even in domains where fairness at cost is required, no-unnecessary-harm fairness can prove to be the optimal first step. This same methodology can also be applied to any unbalanced classification task, where we want to dynamically equalize the misclassification risks across outcomes without degrading overall performance any more than strictly necessary. We test the proposed methodology on real case-studies of predicting income, ICU patient mortality, classifying skin lesions from images, and assessing credit risk, demonstrating how the proposed framework compares favorably to other traditional approaches.
[ { "affiliations": [], "name": "NO-HARM FAIRNESS" } ]
[ { "authors": [ "Alekh Agarwal", "Alina Beygelzimer", "Miroslav Dudı́k", "John Langford", "Hanna Wallach" ], "title": "A reductions approach to fair classification", "venue": "arXiv preprint arXiv:1803.02453,", "year": 2018 }, { "authors": [ "Solon Barocas", "Andrew D Selbst" ], "title": "Big data’s disparate impact", "venue": "Calif. L. Rev.,", "year": 2016 }, { "authors": [ "Toon Calders", "Sicco Verwer" ], "title": "Three naive bayes approaches for discrimination-free classification", "venue": "Data Mining and Knowledge Discovery,", "year": 2010 }, { "authors": [ "Irene Chen", "Fredrik D Johansson", "David Sontag" ], "title": "Why is my classifier discriminatory", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Elliot Creager", "David Madras", "Jörn-Henrik Jacobsen", "Marissa A Weis", "Kevin Swersky", "Toniann Pitassi", "Richard Zemel" ], "title": "Flexibly fair representation learning by disentanglement", "venue": null, "year": 1906 }, { "authors": [ "Pedro Domingos" ], "title": "A unified bias-variance decomposition", "venue": "In Proceedings of 17th International Conference on Machine Learning,", "year": 2000 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017a. URL http://archive. ics.uci.edu/ml", "venue": null, "year": 2017 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017b. URL https:// archive.ics.uci.edu/ml/datasets/statlog+(german+credit+data)", "venue": null, "year": 2017 }, { "authors": [ "Cynthia Dwork", "Moritz Hardt", "Toniann Pitassi", "Omer Reingold", "Richard Zemel" ], "title": "Fairness through awareness", "venue": "In Proceedings of the 3rd innovations in theoretical computer science conference,", "year": 2012 }, { "authors": [ "Michael Feldman", "Sorelle A Friedler", "John Moeller", "Carlos Scheidegger", "Suresh Venkatasubramanian" ], "title": "Certifying and removing disparate impact", "venue": "In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2015 }, { "authors": [ "Sorelle A Friedler", "Carlos Scheidegger", "Suresh Venkatasubramanian", "Sonam Choudhary", "Evan P Hamilton", "Derek Roth" ], "title": "A comparative study of fairness-enhancing interventions in machine learning", "venue": "In Proceedings of the Conference on Fairness, Accountability, and Transparency,", "year": 2019 }, { "authors": [ "Arthur M Geoffrion" ], "title": "Proper efficiency and the theory of vector maximization", "venue": "Journal of mathematical analysis and applications,", "year": 1968 }, { "authors": [ "Moritz Hardt", "Eric Price", "Nathan Srebro" ], "title": "Equality of opportunity in supervised learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Tatsunori B Hashimoto", "Megha Srivastava", "Hongseok Namkoong", "Percy Liang" ], "title": "Fairness without demographics in repeated loss minimization", "venue": "arXiv preprint arXiv:1806.08010,", "year": 2018 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Alistair EW Johnson", "Tom J Pollard", "Lu Shen", "H Lehman Li-wei", "Mengling Feng", "Mohammad Ghassemi", "Benjamin Moody", "Peter Szolovits", "Leo Anthony Celi", "Roger G Mark" ], "title": "Mimic-iii, a 
freely accessible critical care database", "venue": "Scientific data,", "year": 2016 }, { "authors": [ "Matthew Joseph", "Michael Kearns", "Jamie Morgenstern", "Seth Neel", "Aaron Roth" ], "title": "Rawlsian fairness for machine learning", "venue": "arXiv preprint arXiv:1610.09559,", "year": 2016 }, { "authors": [ "Toshihiro Kamishima", "Shotaro Akaho", "Hideki Asoh", "Jun Sakuma" ], "title": "Fairness-aware classifier with prejudice remover regularizer", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2012 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Juhani Koski" ], "title": "Defectiveness of weighting method in multicriterion optimization of structures", "venue": "Communications in applied numerical methods,", "year": 1985 }, { "authors": [ "Christos Louizos", "Kevin Swersky", "Yujia Li", "Max Welling", "Richard Zemel" ], "title": "The variational fair autoencoder", "venue": "arXiv preprint arXiv:1511.00830,", "year": 2015 }, { "authors": [ "Kaisa Miettinen" ], "title": "Nonlinear multiobjective optimization, volume 12", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Philipp Tschandl", "Cliff Rosendahl", "Harald Kittler" ], "title": "The ham10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions", "venue": "Scientific data,", "year": 2018 }, { "authors": [ "Berk Ustun", "Yang Liu", "David Parkes" ], "title": "Fairness without harm: Decoupled classifiers with preference guarantees", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Blake Woodworth", "Suriya Gunasekar", "Mesrob I Ohannessian", "Nathan Srebro" ], "title": "Learning nondiscriminatory predictors", "venue": "arXiv preprint arXiv:1702.06081,", "year": 2017 }, { "authors": [ "Muhammad Bilal Zafar", "Isabel Valera", "Manuel Gomez Rodriguez", "Krishna P Gummadi" ], "title": "Fairness constraints: Mechanisms for fair classification", "venue": "arXiv preprint arXiv:1507.05259,", "year": 2015 }, { "authors": [ "Muhammad Bilal Zafar", "Isabel Valera", "Manuel Rodriguez", "Krishna Gummadi", "Adrian Weller" ], "title": "From parity to preference-based notions of fairness in classification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Rich Zemel", "Yu Wu", "Kevin Swersky", "Toni Pitassi", "Cynthia Dwork" ], "title": "Learning fair representations", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Feldman. Feldman" ], "title": "2015) provides a preprocessing algorithm to sanitize input observations", "venue": null, "year": 2015 }, { "authors": [ "Zafar. Zafar" ], "title": "2017) Addresses disparate mistreatment via a convex relaxation", "venue": null, "year": 2017 }, { "authors": [ "Friedler" ], "title": "2019), they train a logistic regression classifier", "venue": null, "year": 2019 }, { "authors": [ "Hardt. Hardt" ], "title": "2016) proposes a post-processing algorithm that takes in an arbitrary predictor", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Machine learning algorithms play an important role in decision making in society. When these algorithms are used to make high-impact decisions such as hiring, credit-lending, predicting mortality for intensive care unit patients, or classifying benign/malign skin lesions, it is paramount to guarantee that these decisions are both accurate and unbiased with respect to sensitive attributes such as gender or ethnicity. A model that is trained naively may not have these properties by default; see, for example Barocas & Selbst (2016).\nIn these critical applications, it is desirable to impose some fairness criteria. Much of the fairness in machine learning literature attempts to produce algorithms that satisfy Demographic Parity, which aims to make algorithm’s predictions independent of the sensitive populations (Louizos et al. (2015); Zemel et al. (2013); Feldman et al. (2015)); or Equality of Odds or Equality of Opportunity, which aims to produce predictions that are independent of the sensitive attributes given the ground truth (Hardt et al. (2016); Woodworth et al. (2017)). Notions of Individual Fairness have also been advanced (Dwork et al. (2012); Joseph et al. (2016); Zemel et al. (2013)). These notions of fairness can be appropriate in many scenarios, but in domains where quality of service is paramount, such as healthcare, we argue that it is necessary to strive for models that are as close to fair as possible without introducing any unnecessary harm to any subgroup (Ustun et al. (2019)). Even if the overall fairness goal is a potentially harmful, zero-gap classifier, pursuing first a Pareto-fair classifier and later applying other harmful methodologies ensures that all possible non-harmful trade-offs are covered before explicitly degrading performance, therefore minimal harm is introduced to the decision.\nIn this work we make use of the concept of Pareto optimality to measure and analyze discrimination (unfairness) in terms of the difference in predictive risks across sub-populations defined by our sensitive attributes, a fairness metric that has been explored in other recent works such as Calders & Verwer (2010); Dwork et al. (2012); Feldman et al. (2015); Chen et al. (2018); Ustun et al. (2019). We examine the subset of models from our hypothesis class that have the best trade-offs between sub-population risks, and select from this set the one with the smallest risk disparity gap. This is in direct contrast to common post-hoc correction methods like the ones proposed in Hardt et al. (2016); Woodworth et al. (2017), where noise is potentially added to the decisions of the best performing sub-population. While this latter type of approach diminishes the risk-disparity gap, it does so by degrading performance on advantaged groups, the previously disadvantaged groups do not directly benefit from this treatment. Since our proposed methodology does not require test-time\naccess to sensitive attributes, and can be applied to any standard classification or regression task, it can also be used to reduce risk disparity between outcomes, acting as an adaptive risk equalization loss compatible with unbalanced classification scenarios.\nMain Contributions. We formalize the notion of no-harm risk fairness1 using Pareto optimality (Mas-Colell et al. (1995)), a state of resource allocations from which it is impossible to reallocate without making one subgroup worse. 
We show that finding a Pareto-fair classifier is equivalent to finding a model in our hypothesis class that belongs to the Pareto front (the set of all Pareto optimal allocations) with respect to the sub-population risks with the smallest possible risk disparity. This general notion is already amenable to non-binary sensitive attributes. We analyze the fairness performance trade-offs that can be expected from different approaches with an illustrative example. We provide a concrete algorithm that promotes fair solutions belonging to the Pareto front; this algorithm can be applied to any standard classifier or regression task that can be trained using (Stochastic) Gradient Descent. We show that if the goal is to obtain a zero-gap classifier, first recovering the fairest Pareto optimal solution and then adding harmful post-hoc corrections ensures the lowest risk levels across all subgroups. We demonstrate how our methodology performs on synthetic and real tasks such as inferring income status in the Adult dataset Dua & Graff (2017a) irrespective of their ethnicity or gender, predicting ICU mortality rates in the MIMIC-III dataset from hospital notes Johnson et al. (2016), classifying skin lesions in the HAM10000 dataset Tschandl et al. (2018), and assessing credit risk on the German Credit dataset Dua & Graff (2017b)." }, { "heading": "2 RELATED WORK", "text": "There is an extensive body of work on fairness in machine learning. Following Friedler et al. (2019), we compare our methodology against the works of Feldman et al. (2015); Kamishima et al. (2012); Zafar et al. (2017). Our method shares conceptual similarities with Zafar et al. (2017); Woodworth et al. (2017); Agarwal et al. (2018), with differences on how we define our fairness objective and adapt it to work with standard neural networks. Although optimality is often discussed in the fairness literature, it is usually in the context of utility-discrimination tradeoffs. To the best of our knowledge, this is the first work to discuss optimality with respect to subgroup risks on a unified classifier, a distinction that disallows extreme performance degradation in the pursuit of fairness.\nThe work presented in Hashimoto et al. (2018) discusses decoupled classifiers as a way of minimizing group-risk disparity, but simultaneously cautions against this methodology when presented with insufficiently large datasets. The works of Chen et al. (2018); Ustun et al. (2019) also empirically report the disadvantages of decoupled classifiers as a way to mitigate risk disparity. Here we argue for the use of a single classifier because it allows transfer learning between diverse sub-populations. We do not need access to the sensitive attribute during test time, but in cases where this is possible, we instead choose to incorporate it as part of our observation features.\nThe work of Chen et al. (2018) uses the unified bias-variance decomposition advanced in Domingos (2000) to identify that noise levels across different sub-populations may differ, making perfect fairness parity impossible without explicitly degrading performance on one subclass. Their methodology attempts to bridge the disparity gap by collecting additional samples from high-risk sub-populations. Here we modify our classifier loss to bridge the disparity gap without inducing unnecessary harm, which could prove to be synergistic with their methodology." 
}, { "heading": "3 PROBLEM STATEMENT", "text": "Consider we have access to a datasetD = {(xi, yi, ai)}ni=1 containing n independent triplet samples drawn from a joint distribution (xi, yi, ai) ∼ P (X,Y,A) where xi ∈ X are our input features (e.g., images, tabular data, etc.), yi ∈ Y is our target variable, and ai ∈ A indicates group membership or sensitive status (e.g., ethnicity, gender); our input features X may or may not explicitly contain A.\nLet h ∈ H be a classifier from a compact hypothesis class H trained to infer y from x, h : X → Y; and a loss function ` : Y ×Y → R. We define the class-specific risk of classifier h on subgroup a as Ra(h) = EX,Y |A=a[`(h(X), Y )]. The risk discrimination gap between two subgroups a, a′ ∈ A is\n1In this paper we use “no-harm” to mean “no-unnecessary-harm,” in other words, the system doesn’t degrade performance on any class unless it is strictly necessary to improve performance in a disadvantaged class.\nmeasured as Γa,a′(h) = |Ra(h)−Ra′(h)|, and we define the pairwise discrimination gap vector as ~ΓA(h) = {Γa,a′(h)}a,a′∈A. Our goal is to obtain a classifier h ∈ H that minimizes this gap without causing unnecessary harm to any particular group in A. To formalize this notion, we define: Definition 3.1. Dominant risk vector: A vector r′ ∈ Rk is said to dominate r ∈ Rk, noted as r r′, if ri ≥ r′i,∀i = 1, ..., k and ∃j : rj > r′j (i.e., strict inequality on at least one component). Definition 3.2. Dominant risk classifier: Classifier h′ is said to dominate h, noted as h h′, if the risks vector r′ = {Ra(h′)}|A|a=1 dominates r = {Ra(h)} |A| a=1. Definition 3.3. Pareto front: We define the Pareto front as P(H,A) = {h ∈ H : @h′ ∈ H | h h′}. This means that there is no other classifier in H that is at least as good in all risks and strictly better in at least one of them. It is the set of classifiers such that improving one group’s risk comes at the cost of increasing other’s.\nThe Pareto front defines the best achievable trade-offs between population risks Ra(h). This definition is already suited for classification and regression tasks where the sensitive attributes are categorical. Constraining the classifier to be in the Pareto front disallows laziness, there exists no other classifier in the hypothesis classH that is at least as good on all class-specific risks and strictly better in one of them. As shown in Chen et al. (2018); Domingos (2000), the risk can be decomposed in bias, variance and noise for some loss functions, where the noise is the smallest achievable risk for infinitely large datasets (Bayes-optimal risk). If the noise differs between sensitive groups, zero-discrimination (perfect fairness) can only be achieved by introducing bias or variance.\nLiterature on fairness has focused on putting constraints on the norm of discrimination gaps (Zafar et al. (2017; 2015); Creager et al. (2019); Woodworth et al. (2017)). We follow a similar criteria in Definition 3.4 and define the Pareto-fair classifier as the classifier in the Pareto front that minimizes ||~ΓA(h)||∞ (the maximum risk discrimination gap). Note that one could alternatively choose to find the Pareto classifier that minimizes the maximum subgroup risk. Definition 3.4. Pareto-fair classifier and Pareto-fair vector: A classifier h∗ is an optimal Paretofair classifier if it minimizes the discrimination gap among all Pareto front classifiers, h∗ = arg min h∈P(H,A) ||~ΓA(h)||∞. 
The Pareto-fair vector $r^* \in \mathbb{R}^{|\mathcal{A}|}$ is defined as $r^* = \{R_a(h^*)\}_{a \in \mathcal{A}}$.
Even when perfect equality of risk is desirable, Pareto classifiers still serve as useful intermediaries. To this end, Lemma 3.1 shows that applying a mechanism for reaching equality of risk to a dominated classifier $h \in \mathcal{H}$ leads to equal or worse risks than applying it to a Pareto classifier $h_p \in \mathcal{P}(\mathcal{H}, \mathcal{A})$ that dominates $h$.
Lemma 3.1. If $h \notin \mathcal{P}(\mathcal{H}, \mathcal{A})$, then there exists $h_p \in \mathcal{P}(\mathcal{H}, \mathcal{A})$ with $h_p \succ h$ such that $R_a(h^{ER}_p) \leq R_a(h^{ER})$ for all $a$, where $h^{ER}$ is an equal-risk classifier with $R_a(h^{ER}) = \max_{a' \in \mathcal{A}} R_{a'}(h)$ for all $a$, and $h^{ER}_p$ satisfies $R_a(h^{ER}_p) = \max_{a' \in \mathcal{A}} R_{a'}(h_p)$ for all $a$.
To exemplify these notions graphically, Figure 1 shows a scenario with a binary sensitive attribute $a$ and binary output variable $y$ where none of the Pareto front classifiers achieve equality of risk. Here the noise level differs between subgroups, and the Pareto-fair vector $r^*$ is achieved by neither a Naive classifier (which minimizes expected global risk) nor a classifier where subgroups are re-sampled to appear with equal probability (a rebalanced Naive classifier). Note that the amount of performance degradation required to enforce perfect fairness starting from the Naive classifier is higher than when starting from the Pareto-fair vector.
Our objective is to find the Pareto-fair classifier $h^*$ as in Definition 3.4. In particular, we will give regularity conditions on the hypothesis class $\mathcal{H}$ and the loss function $\ell(h(X), Y)$. It can be shown that for a sufficiently rich class of hypothesis functions $\mathcal{H}$ and for risk functions $R_a(h)$ that are convex with respect to $h$, the space of risk vectors is convex (see Geoffrion (1968); Koski (1985); Miettinen (2012)). Under these conditions, we can find an auxiliary loss function $\phi : \mathbb{R}^{|\mathcal{A}|} \rightarrow \mathbb{R}$, defined in terms of the subgroup risks $\{R_a(h)\}_{a \in \mathcal{A}}$ (denoted from here on as $r \in \mathbb{R}^{|\mathcal{A}|}$), that has a global minimum at $r^*$.
To prove the existence of a loss function $\phi$ with the desired global minimum $r^*$, it is convenient to think of the convex set of risk vectors as the intersection of a convex Pareto set (defined by a convex Pareto front and all risks that are dominated by it) and an additional convex set $\Omega$. We further require both this Pareto set and $\Omega$ to be smooth, so that we can apply standard tools from smooth convex optimization and prove that for every risk vector $r'$ in the Pareto front there exists a function $\phi$ that has $r'$ as its global optimum. This is formalized in Lemma 3.3.
Definition 3.5. Pareto field: A function $P : \Omega \rightarrow \mathbb{R}$ is a Pareto field over a convex set $\Omega \subset \mathbb{R}^k$ if $P \in C^1$ is a continuously differentiable function such that $\nabla_i P(r) > 0$ for all $r \in \Omega$ and all $i = 1, \ldots, k$.
Lemma 3.2. Let $\Omega \subset \mathbb{R}^k$ be a convex set, and $P : \Omega \rightarrow \mathbb{R}$ a Pareto field. Then the set $D = \{r \in \Omega : P(r) = 0\}$ is a Pareto set in $\Omega$, and the set $D^+ = \{r \in \Omega : P(r) > 0\}$ is the set of dominated points, i.e., $D^+ = \{r \in \Omega : \exists r' \in D \text{ with } r' \succ r\}$.
Lemma 3.3. Let $\Omega \subset \mathbb{R}^k$ be a convex set defined by $\Omega = \{r \in \mathbb{R}^k : g_c(r) \geq 0 \ \forall c \in \{1, \ldots, C\},\ g_c \text{ continuously differentiable}\}$; let $P : \Omega \rightarrow \mathbb{R}$ be a convex Pareto field with corresponding proper Pareto set $D = \{r \in \Omega : P(r) = 0\}$. Let $\hat{r} \in D$ and $\phi(r, \mu) = \sum_{i=1}^{k} r_i + \mu_i (r_i - c)_+^2$, with $c < \hat{r}_i$ for all $i$, where we denote $(x)_+^2 = \max(x, 0)^2$. There exists a vector $\hat{\mu} \succ 0$ (elementwise positive) such that $\hat{r} = \arg\min_{r \in \mathbb{R}^k} \phi(r, \hat{\mu})$ subject to $P(r) \geq 0$ and $g_c(r) \geq 0$ for all $c \in \{1, \ldots, C\}$, where $C$ is the number of constraints that characterize $\Omega$.
Proofs for all lemmas are given in the supplementary material, Section A.1. Lemma 3.3 motivates our loss function to take the form $\phi(r, \mu) = \sum_{i=1}^{|\mathcal{A}|} r_i + \mu_i (r_i - c)_+^2$, with $|\mathcal{A}|$ the number of sensitive groups. 
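To make the shape of this objective concrete, the following is a minimal numerical sketch of $\phi$ (our illustration; the function and variable names are ours, not from the paper's code release):

```python
import numpy as np

def pareto_fair_loss(risks, mu, c):
    """Penalized objective phi(r; mu, c) = sum_i r_i + mu_i * max(r_i - c, 0)^2.

    risks: per-group risks R_a(h), shape (|A|,)
    mu:    positive penalty coefficients, shape (|A|,)
    c:     scalar target risk level, chosen below every Pareto-front risk
    """
    risks = np.asarray(risks, dtype=float)
    mu = np.asarray(mu, dtype=float)
    hinge = np.maximum(risks - c, 0.0)  # (r_i - c)_+
    return risks.sum() + (mu * hinge ** 2).sum()

# Example with three subgroups: raising mu for the worst-off group
# makes minimizers of phi trade risk from the other groups toward it.
print(pareto_fair_loss([0.30, 0.12, 0.08], mu=[1.0, 1.0, 1.0], c=0.05))
print(pareto_fair_loss([0.30, 0.12, 0.08], mu=[5.0, 1.0, 1.0], c=0.05))
```

Increasing the penalty coefficient of the worst-off group is exactly the mechanism that the algorithm below exploits to move along the Pareto front toward the fairest point.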
The challenge is to find $\mu^*$ such that our Pareto-fair vector $r^*$ minimizes $\phi(r, \mu^*)$. In Section 4 we provide an algorithm that approximately recovers the Pareto-fair classifier $h^*$ by adaptively searching for $\mu^*$ and optimizing a classifier on $\phi(\cdot, \mu)$ using standard stochastic gradient descent." }, { "heading": "4 OPTIMIZATION METHODS", "text": "Recall that we wish to recover the Pareto-fair classifier $h^*$ within our hypothesis class. From Lemma 3.3, there exists a loss function of the form
$\phi(r; \mu, c) = \sum_{i=1}^{|\mathcal{A}|} r_i + \mu_i (r_i - c)_+^2$,
$\phi(h; \mu, c) = \sum_{a \in \mathcal{A}} R_a(h) + \mu_a (R_a(h) - c)_+^2$,   (1)
such that $h^* = \arg\min_{h \in \mathcal{H}} \phi(h; \mu, c)$. Note that we can state the loss function directly on the risk vectors $r$, or implicitly on the classifier $h$. Since $R_a(h)$ is differentiable with respect to $h$, $\phi(h; \mu, c)$ can be directly minimized using gradient descent on $h$ for any choice of values $\mu, c$. Let $D_{Tr} = \{x_i, y_i, a_i\}_{i=1}^{N_T}$ and $D_{Val} = \{x_i, y_i, a_i\}_{i=1}^{N_V}$ be our training and validation datasets, respectively. The proposed implementation of the Pareto-fair framework is formalized in Algorithm 1, where we specify how to update the penalty coefficients $\vec{\mu}, c$.
Algorithm 1: ParetoFairOptimization
Given: $h_\theta$, $\ell$, $D_{Tr}$, $D_{Val}$, $n_\mu$, $n_p$, $n_{max}$, $\gamma > 0$, $k \geq 1$, $\xi, \zeta \in (0, 1)$, lr, $B$
  $\mu \leftarrow 1$, $\mu^* \leftarrow 1$, $\mu_{count} \leftarrow 0$, $e_{count} \leftarrow 0$, $c \leftarrow 0$, $\Gamma^* \leftarrow \infty$, $h^* \leftarrow h_\theta$
  while $e_{count} \leq n_{max}$ and $\mu_{count} \leq n_\mu$ do
    $\mu_{count} \leftarrow \mu_{count} + 1$, $e_{count} \leftarrow e_{count} + 1$
    $h_\theta, r^{Val} \leftarrow$ AdaptiveOptimize($h_\theta$, $\ell$, $\mu$, $c$, $D_{Tr}$, $D_{Val}$, $n_p$, lr, $B$)  // Optimize current loss
    // Check that the solution is Pareto efficient and reduces the fairness gap
    if $||\vec{\Gamma}_{\mathcal{A}}(h)||_\infty < \Gamma^*$ and $r^{Val}$ is not dominated by previous validation risks then
      $h^* \leftarrow h_\theta$, $\Gamma^* \leftarrow ||\vec{\Gamma}_{\mathcal{A}}(h)||_\infty$, $c_{old} \leftarrow c$, $c \leftarrow \min_a r^{Val}_a / k$
      $\mu^* \leftarrow \mu \cdot (r^{Val} - c_{old})_+ / (r^{Val} - c)_+$, $a' \leftarrow \arg\max_a r^{Val}_a$, $\mu_{count} \leftarrow 0$
    else
      lr $\leftarrow \zeta \cdot$ lr, $\mu \leftarrow \mu^*$, $\gamma \leftarrow \gamma \xi$, $h_\theta \leftarrow h^*$
    end
    $\mu_{a'} \leftarrow (1 + \gamma) \mu_{a'}$
  end
  // Exit loop due to excessive iterations or no improvement in fairness
  Return: $h^*$
We regularly check that reductions in the fairness gap generalize to the validation set; we additionally check that the trade-offs are in the non-dominated solution set (i.e., that we have not observed a universally better classifier during training). Algorithm 2 (AdaptiveOptimize, shown in Section A.2) summarizes how we perform stochastic gradient descent steps with early stopping between $\mu, c$ updates. Lemma A.5 (shown in the supplementary material, Section A.1) shows that this algorithm is convergent for $|\mathcal{A}| = 2$ when the minimization step for fixed $\mu, c$ is performed exactly. To conclude this section, we stress that the proposed framework is independent of the desired algorithm class $\mathcal{H}$ and loss function $\ell$; these are kept from the original application. The Pareto-fair classifier uses the same inputs as the Naive classifier, with parameters that have been optimized towards Pareto fairness. Code will be made available." }, { "heading": "5 EXPERIMENTS AND RESULTS", "text": "We applied the methodology described in Section 4 to learn a Pareto-fair classifier (a classifier in the group-risk Pareto front with the smallest risk disparity). We first validate our methodology on synthetic data with known Pareto-fair classifiers. Observations are drawn from a Gaussian mixture model where each sensitive attribute is encoded by a corresponding Gaussian mode, and target attributes are binary. We demonstrate our methodology on publicly available fairness datasets, and show how the risk disparity gaps are subsequently reduced. Where applicable, we compare our results against the methodologies proposed in Zafar et al. (2017); Kamishima et al. (2012); Hardt et al. 
(2016); Feldman et al. (2015)." }, { "heading": "5.1 SYNTHETIC DATA", "text": "We tested our approach on synthetic data where the observations are drawn from conditional Gaussian distributions X|A = a ∼ N(µA, 1), the target variable y is a conditional Bernoulli variable with distribution Y |X = x,A = a ∼ Ber ( fa(x) ) with fa(x) = ρlowa 1[x ≤ ca] + ρhigha 1[x > ca] ) , and |A| = 3. We used Brier Score (BS, refer to Section A.4) as our loss function. Under these conditions, the Bayes-optimal classifier for each subgroup is h(x) = fa(x). The subgroup risk function for any classifier h can be computed as Ra(h) = EX|A=a[Y − h(X))2] = EX|A=a[fa(X)(1 − h(X))2 + (1 − fa(X))h(X)2]. Network details are given in the supplementary material, Section A.6. Figure 2 shows the analytically-derived Pareto-fair classifier, as well as the trade-off obtained by the proposed algorithm. We remark that the Pareto-fair classifier in this scenario cannot be achieved using linear logistic regression, something that can be seen from the functional form of the Bayes Optimal classifier derived in Section A.8 in supplementary material. From the center graph in Figure 2 we can see that the approximated Pareto-Fair risks (dashed\nlines) are close to the theoretical best (solid lines).The achieved maximum risk discrimination is also close to the theoretical best. The algorithm is able to correctly trade risks from the two advantaged subgroups to reduce the risk of the worst performing subgroup." }, { "heading": "5.2 REAL DATASETS", "text": "We evaluate the performace of our algorithm on mortality prediction (MIMIC-III), skin lesion classification (HAM10000), income prediction (Adult), and credit lending (German). We also compared with several state of the art methods (Zafar et al. (2017); Kamishima et al. (2012); Hardt et al. (2016); Feldman et al. (2015)) and with a model trained to minimize the average risk (Naive) and one that undersamples the minority class (Naive Balanced or NaiveB). A description of these methods is provided in Section A.3 of Supplementary material. Metrics used for evaluation include accuracy (Acc), confidence (Conf), expected and maximum calibration error (ECE and MCE respectively), Brier score (BS) and cross-entropy (CE). Values for every metric are reported per sensitive attribute. The dataset average is reported as (sample mean), and the group-normalized average as (group mean). The best and worst performing groups are also shown as (group max) and (group min), and the worst case difference between subgroups is reported as (disc). For an in-depth description of the the metrics and datasets used for evaluation, refer to Sections A.4 and A.5 of Supplementary material. Here we present a comparison of the performance in terms of accuracy and ECE but an extensive comparison is available in additional results Section A.7 in supplementary material.\nSimilarly to Chen et al. (2018); Zafar et al. (2017); Hardt et al. (2016); Woodworth et al. (2017), we omit the sensitive attribute a ∈ A from our observation features. Our method trains a single classifier for the entire dataset to avoid needing test-time access to sensitive attributes whenever possible. All classifiers shown in this section are implemented using neural networks and/or logistic regression, we used either cross-entropy or Brier score as our training loss depending on the dataset. For details on the architecture and hyperparameters used on each dataset, refer to the supplementary material, Section A.6. 
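Before turning to the individual datasets, here is a PyTorch-style sketch of the inner minibatch step described by Algorithm 2 (our paraphrase, assuming a generic `model`, an `optimizer`, and a per-sample `criterion` built with reduction='none'; these names are ours):

```python
import torch

def pareto_fair_batch_step(model, criterion, optimizer, x, y, a, mu, c):
    """One SGD step on phi_B(h) = sum_a R_a^B(h) + mu_a * (R_a^B(h) - c)_+^2,
    where R_a^B is the empirical risk of subgroup a within the minibatch.

    a:  LongTensor of group ids per sample; mu: sequence indexed by group id.
    """
    losses = criterion(model(x), y)            # per-sample losses (reduction='none')
    phi = losses.new_zeros(())                 # scalar accumulator on the right device
    for g in torch.unique(a):
        group_risk = losses[a == g].mean()     # empirical subgroup risk R_a^B
        hinge = torch.clamp(group_risk - c, min=0.0)
        phi = phi + group_risk + mu[int(g)] * hinge ** 2
    optimizer.zero_grad()
    phi.backward()                             # gradient step theta <- theta - lr * grad phi
    optimizer.step()
    return float(phi)
```

AdaptiveOptimize repeats such steps over training epochs, with early stopping on the validation loss in-between updates of $\mu$ and $c$.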
We show how the proposed Pareto-fair approach produces well calibrated models that reduce group disparities across several metrics." }, { "heading": "5.2.1 PREDICTING MORTALITY IN INTENSIVE CARE PATIENTS", "text": "Medical decisions in general and mortality prediction in particular are examples where notions of fairness among sub-populations are of paramount importance, and where ethical considerations make no-harm fairness a very attractive paradigm. To that end, we used clinical notes collected from adult ICU patients at the Beth Israel Deaconess Medical Center (MIMIC-III dataset) Johnson et al. (2016) to predict patient mortality. We study fairness with respect to age (adult or senior), ethnicity (white or nonwhite), and outcome (alive/deceased). This leads to a total of 8 sensitive groups. We included outcome (alive/deceased) as a sensitive sub-population criteria to demonstrate a case where sensitive attributes would not be available at test-time, and because in our experiments patients who ultimately passed away on ICU were under-served by a Naive classifier. Table 1 shows empirical accuracies and expected calibration errors of all tested methodologies. It is important to note how imbalanced these groups are by looking at the ’ratio’ column. Here one can see that 56.7% of the samples correspond to the majority class (alive, white, senior) and only 0.4% to the minority\nclass (deceased, nonwhite, adult). Our methodology produces low discrimination classifiers with high group accuracies, for example, ParetoFair trained with brier score loss (BS) increased the classification accuracy of the most under-served group by over 50% while reducing the accuracy of the best-served group by 12% when compared to the Naive classifier." }, { "heading": "5.2.2 SKIN LESION CLASSIFICATION", "text": "The HAM10000 dataset Tschandl et al. (2018) collects over 10, 000 dermatoscopic images of skin lesions over a diverse population. Lesions are classified according to diagnostic categories. We found that a Naive classifier exhibited almost no measurable discrimination based on age or race on this dataset. We instead chose to use the diagnosis class as both the target and sensitive variable, casting balanced risk minimization as a particular use-case for Pareto fairness. This is possible since our methodology does not require test-time access to sensitive labels. It was not possible to show comparisons against Hardt et al. (2016) since the sensitive attribute is perfectly predictive of the outcome. Table 3 shows accuracies and ECE for all tested methodologies. The Pareto-fair classifier has the overall best calibration results and smallest accuracy disparities across methodologies. Comparisons against other methods were not possible because the target labels are non-binary." }, { "heading": "5.2.3 INCOME PREDICTION AND CREDIT RISK ASSESMENT", "text": "We tested the proposed method on the Adult UCI dataset Dua & Graff (2017a) and on the German Credit dataset Dua & Graff (2017b). In the Adult UCI dataset the goal is to predict a person’s\nincome, which can be an important factor on meaningful decisions such as credit lending. In the German Credit dataset the goal is predicting credit risk. We select gender and ethnicity as our sensitive attributes. To compare ourselves against state of the art methods Zafar et al. (2017); Feldman et al. (2015); Kamishima et al. 
(2012) we binarize the sensitive attributes into White-Male and Other when dealing with ethnicity and gender simultaneously, or Male and Female when dealing with gender, and use the unified testbed provided in Friedler et al. (2019). We limit our hypothesis class H to linear logistic regression to compare evenly against these standard baselines. Results on both datasets are shown in Table 5." }, { "heading": "6 DISCUSSION", "text": "There exists a rich literature of fairness in machine learning in general, and risk-based fairness in particular. Here we explore a relatively untapped sub-problem where the goal is to reduce risk disparity gaps in the most ethical way possible (i.e., minimizing unnecessary harm). Unlike other works in the area, our problem investigates on how to reduce this disparity gap without collecting additional data samples, using the entirety of the available training data to produce an algorithm that is maximally fair with respect to sub-populations, and that does not necessarily require test-time access to sensitive attributes.\nWe provide a concrete algorithmic adaptation to any standard classification or regression loss to bridge this disparity gap at no unnecessary harm, and demonstrate its performance on several realworld case studies. Even for applications where the need for strict fairness outweighs the need for no-harm classifiers, this methodology can be applied before any post-hoc corrections to ensure that the risk disparity gap is closed in the most risk-efficient way for all involved sub-populations. The proposed algorithm does not sweep through different disparity constraint values, as previously done in related works, making it a simpler alternative.\nAs an avenue of future research, it could be of interest to analyze if we can automatically identify high-risk sub-populations as part of the learning process and attack risk disparities as they arise, rather than relying on preexisting notions of disadvantaged groups or populations. We strongly believe that no-unnecessary-harm notions of fairness are of great interest for several applications, especially so on domains such as healthcare and lending, where decisions are highly impactful." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 PROOFS", "text": "Here we restate the Lemmas shown in Section 3 along with a sketch of the proofs.\nLemma 3.1 If h 6∈ P(H,A) → ∃hp h ∈ P(H,A) : Ra(hERp ) ≤ Ra(hER),∀a, where hER is a equality of risks classifier in H such that Ra(hER) = max\na′∈A Ra′ (h),∀a and hERp : Ra(hERp ) =\nmax a′∈A Ra′ (hp),∀a.\nProof. If hp dominates h→ Ra(hERp ) = max a′∈A Ra′ (hp) ≤ max a′∈A Ra′ (h) = Ra(h ER) ∀a ∈ A.\nDefinition A.1. Dominant vector: A vector r′ ∈ Rk is said to dominate r ∈ Rk, noted as r′ r, if ri ≥ r′i,∀i = 1, ..., k and ∃j : rj > r′j (i.e., strict inequality on at least one component). Definition A.2. Pareto field: A function P : Ω → R is defined as a Pareto field over a convex set Ω ⊂ Rk if P ∈ C1 is a continuously differentiable function such that ∇iP (r) > 0∀r ∈ Ω,∀i = 1, . . . , k. Lemma A.1. Let Ω ⊂ Rk be a convex set, and P : Ω → R a Pareto field. Then the set D = {r ∈ Ω : P (r) = 0} is a proper Pareto set in Ω, and the set D+ = {r ∈ Ω : P (r) > 0} is the set of dominated points, i.e., D+ = {r ∈ Ω : ∃r′ ∈ D | r r′}.\nProof. First we prove that D = {r ∈ Ω : P (r) = 0} is a Pareto set. Assume by contradiction that there exists r, r′ ∈ D | r′ r, then r′ − r = c = ∑ i δiei δi ≥ 0∀i ∈ {1, . . . , k}, ∃j ∈ {1, . . . , k} | δj > 0. ei is a standard basis vector. 
Using the Gradient theorem we have\nP (r′) = P (r) + ∫ r→r′ ∇P (λ)dλ,\n= P (r) + ∫ 1 0 〈(δ1, . . . , δk), (∇P0(r + λc), . . . ,∇Pk(r + λc))〉dλ,\n= P (r) + ∑ i δi ∫ 1 0 ∇Pi(r + λc)︸ ︷︷ ︸ >0 dλ,\n> P (r),\nwhich directly contradicts r′, r ∈ D. No point in set D is dominated by another point in set D, making set D a proper Pareto set.\nTo show that the set D+ is the set of dominated points, we first note that for all r ∈ Ω | r′ ∈ D, r r′ we have P (r) > P (r′) using the same arguments as above, meaning that the set of dominated points is included in D+. Similarly, for all r ∈ Ω | r′ ∈ D, r′ r we have P (r′) ≥ P (r), meaning that non-dominated points are not a part of set D+.\nLemma A.2. Let Ω ⊂ R|A| be a convex set defined by Ω = {r ∈ R|A| : gc(r) ≥ 0 ∀ c ∈ {1, . . . , C}, gc continuously differentiable}; let P : Ω → R be a convex Pareto field with corresponding Pareto set D = {r ∈ Ω : P (r) = 0}. Let r∗ ∈ D and φ(r, µ) = ∑|A| i=1 ri+µi(ri− c)2+, with c < r∗. There exists a set of µ∗ 0 such that: r∗ = arg min\nr∈R|A| φ(r, µ∗) s.t. : P (r) ≥ 0, gc(r) ≥ 0∀c ∈ {1, . . . , C}\n.\nProof. By hypothesis, both Ω and {r ∈ Ω : P (r) ≥ 0} are convex, so the intersection is also convex. For µ 0, φ(r, µ) is a convex function of r. Under these hypothesis, the Karush-KuhnTucker (KKT) conditions are necessary and sufficient to recover a global minimizer. So it suffices to find a set µ∗ such that the KKT conditions are exactly satisfied at r∗.\nLet J∗ = {j : gj(r∗) = 0} be the set of indices of active gc constraints at r∗, note that by hypothesis, r∗ ∈ D and therefore P (r∗) is always active. Focusing on the first KKT condition on the active set we get∇φ(r∗, ~µ) = λ∇P (r∗) + ∑ j∈J∗ ηj∇gj(r∗), in matrix form:1...\n1\n+ 2(r ∗ 1 − c)+ 0\n. . . 0 2(r∗A − c)+\nµ = [ | | |∇P ∇gj1 . . . ∇gj|J∗| | | | ] λ η1 ... ηc∗ . By hypothesis, we have ri − c > 0 and therefore\nµi = λ∇Pi(r∗) +\n∑ j∈J∗ ηj∇g j i (r ∗)− 1 2(r∗i − c) .\nIn particular, for ηj = 0 ∀j and µ1 > [∇Pi(r ∗)\n∇P1(r∗) − 1] 1 2(r∗1−c)+ ∀i 6= 1 we get\nµi = [(µ12(r ∗ 1 − c)+ + 1) ∇Pi(r∗) ∇P1(r∗) − 1] 1 2(r∗i − c)+ > 0∀i 6= j,\nµ1 > 0,\nλ = µ12(r ∗ 1 − c)+ + 1 ∇P1(r∗) > 0,\nwhich satisfies the KKT conditions and produces the required minimizer r∗ for all points r∗ ∈ D.\nLemma A.3. Let Ω ⊂ R2 be a convex set defined by Ω = {r ∈ R2 : gc(r) ≥ 0 ∀ c ∈ {1, . . . , C}, gc continuously differentiable}; let P : Ω → R be a convex Pareto field with corresponding Pareto set D = {r ∈ Ω : P (r) = 0}. Let > 0, r∗ = arg min\nr∈D [maxi ri − minj rj ]\nthen c = min(r1, r2)− < min(r∗1 , r∗2) ∀(r1, r2) ∈ D.\nProof. case r∗1 = r∗2: Assume r ∈ D : mini ri > r∗1 . Then r r∗, which contradicts the hypothesis r ∈ D. case r∗1 6= r∗2: Without loss of generality, we can take r∗1 > r∗2 . We can then prove by couterpositive that r ∈ D implies r1 > r2. Assume by contrapositive that there exists r′ ∈ D : r′2 > r′1. By hypothesis D is a continuous curve, and we can find a path R(t) : R(0) = r∗, R(1) = r′, R(t) ∈ D∀t ∈ [0, 1], which would imply there exists t̂ : R(t̂) = (r̂, r̂), in which case r∗1 = r∗2 , which contradicts r∗1 6= r∗2 . We then have that for r ∈ D, r1 > r2. Assume by counterpositive that r2 = min(r1, r2) > min(r∗1 , r ∗ 2) = r ∗ 2 then r1 < r ∗ 1 (since both r, r\n∗ ∈ D), but then |r1 − r2| < |r∗1 − r∗2 | which contradicts the hypothesis.\nPutting both statements together, we proved that if r∗1 > r ∗ 2 , then ∀r ∈ D,min(r1, r2) = r2 ≤ min(r∗1 , r ∗ 2) = r ∗ 2 .\nLemma A.4. Let Ω ⊂ R2 be a convex set defined by Ω = {r ∈ R2 : gc(r) ≥ 0 ∀ c ∈ {1, . . . 
, C}, gc continuously differentiable}; let P : Ω → R be a convex Pareto field with corresponding Pareto set D = {r ∈ Ω : P (r) = 0}. Let µ = (µ1, µ2) > 0, µ′ = (µ1, µ′2); µ′2 > µ2. Define φ(r, µ, c) = ∑2 i=1 ri + µi(ri − c)2+ and\nr = arg min s. t.P (r)≥0\nr∈Ω\nφ(r, µ, c),\nr′ = arg min s. t.P (r)≥0\nr∈Ω\nφ(r, µ′, c).\nWith c < r2, then, r′1 ≥ r1, r′2 ≤ r2, with equality if and only if r′ = r̂ = arg min s. t. r∈{P (r)≥0}∩Ω r2\nProof. By hypothesis, we have φ(r, µ′, c) ≥ φ(r′, µ′, c)\nφ(r, µ, c) + (µ′2 − µ2)(r2 − c)2+ ≥ φ(r′, µ, c) + (µ′2 − µ2)(r′2 − c)2+\nφ(r, µ, c) ≥ φ(r′, µ, c) + (µ′2 − µ2)[(r′2 − c)2+ − (r2 − c)2+] since by hypothesis φ(r, µ, c) ≤ φ(r′, µ, c), it follows r′2 ≤ r2, and therefore r′1 ≥ r1, because they both belong to the Pareto set.\nCase r′ = r. To analyze when the equality arises, note that the tangent to D at r is τ(r) = (∇2P (r),−∇1P (r)) and that r + δτ(r), δ > 0 is a valid search direction if r 6= r̂ (r does not minimize r2). Also note that∇φ(r, µ′, c) = ∇φ(r, µ, c) + (0, 2(µ′2 − µ2)(r2 − c)+) By contradiction, assume r 6= r̂, then r + δτ(r), δ > 0 is a valid search direction and 〈(δτ(r)),∇φ(r, µ′, c〉 > 0, which implies r′ 6= arg min\ns. t.P (r)≥0 r∈Ω\nφ(r, µ′, c) and contradicts the hypoth-\nesis.\nTherefore, if r′ = r it follows r′ = r = r̂\nLemma A.5. Let Ω ⊂ R|A| be a convex set defined by Ω = {r ∈ R|A| : gc(r) ≥ 0 ∀ c ∈ {1, . . . , C}, gc continuously differentiable}; let P : Ω → R be a convex Pareto field with corresponding Pareto set D = {r ∈ Ω : P (r) = 0}. Let r∗ = arg min\nr∈D (maxi ri − minj rj) and φ(r, µ, c) = ∑|A| i=1 ri + µi(ri − c)2+. Let µ0 > 0 be an initial penalty vector, r0 = arg min\nr∈D\n∑ i ri,\n> 0, c0 = mini r0i − ,γ > 0, ξ ∈ (0, 1). Define the following auxiliary variables ck+1 = min\ni rki −\nΓk = max i rki −min i rki a† = arg max a (rka)\nµ† a† = (γ + 1)µka†\nµ†\\a† = µ k \\a†\nr† = arg min s. t.P (r)≥0\nr∈Ω\nφ(r, µ†, ck+1)\nc† = min i r†i − Γ† = max i r†i −min j r†j\n(2)\nWe define the following iteration procedure compute Eq 2\nIf Γk = Γ†\nterminate\nWhile Γ† > Γk\nγ ← ξγ Recompute Eq 2\nEnd While\nµk+1 = µ† · (r † − ck)+\n(r† − c†)+\nrk+1 = r†\nThe procedure is convergent to r∗ for |A| = 2.\nProof. From Lemma A.3 we have ck < mini r∗i ∀k, we can then apply Lemma A.2 and state that there exists a set µ∗k such that r∗ = arg min\ns. t.P (r)≥0 r∈Ω\nφ(r, µ∗k, ck).\nWe split the proof in two scenarios, one where Γ∗ = 0 and then Γ∗ > 0, since they show different convergence properties.\ncase Γ∗ = 0. Assume without loss of generality that rk1 = arg maxi rki , from Lemma A.4 we have µk1 < µ k∗ 1 , and further, for all µ † 1 ∈ (µk1 , µk∗1 ), we have rk1 < r † 1 < r ∗ 1 . Since |A| = 2 and rk, r†, r∗ all belong to D, we also have rk2 > r † 2 > r ∗ 2 , which implies Γ\nk > Γ† > Γ∗ = 0. The subiteration procedure is always successful, and we get a sequence of iterates such that Γk > Γk+1, a strictly decreasing sequence bounded below by 0, which implies asymptotic convergence to r∗. Procedure never terminates with probability 1 (event µk ∈ µ∗k has probability 0). case Γ∗ > 0 Assume without loss of generality that r∗1 < r∗2 . Following the arguments from Lemma A.3 we observe that r1 < r2 ∀(r1, r2) ∈ D. In these conditions, we have that µ k 2\nµk1 < µk+12 µk+11 and\nck+1 > ck. 
We will re-derive the KKT conditions in this scenario to show that convergence is guaranteed for µk2 > µ2∗, and that this µ2∗ is independent of ck, meaning that convergence to the optimal r∗ can be achieved in a finite number of steps.\nLet ∇P (r∗) = (dP1, dP2) > 0, with corresponding tangent vector τ∗ = (dP2,−dP1), for r∗ to be the risk with minimal gap, r∗ + δτ∗ must be an infeasible descent direction for δ > 0. Therefore, there is an active constraint from Ω, g(r), with ∇g(r∗) = (dg1, dg2) such that 〈(dg1, dg2), (dP2,−dP1)〉 < 0. Deriving the first KKT conditions, we get\n1 + µ12(r ∗ 1 − c) = λdP1 + ηdg1, 1 + µ22(r ∗ 2 − c) = λdP2 + ηdg2,\nand in terms of λ, η\nλ = 1 + µ12(r ∗ 1 − c)− ηdg1 dP1 ,\nη = dP1(1 + µ22(r ∗ 2 − c)− dP2(1 + µ12(r∗1 − c)) dg2dP1 − dP2dg1 .\nNote that dg2dP1 − dP2dg1 > 0 from 〈(dg1, dg2), (dP2,−dP1)〉 < 0. To recover a valid set of Lagrange multipliers η, λ > 0 , it suffices to have\ndP1(1 + µ22(r ∗ 2 − c)) > dP2(1 + µ12(r∗1 − c)),\nµ2 > (1 + µ12(r\n∗ 1 − c))dP2dP1 − 1\n2(r∗2 − c) ,\nµ2 > (1 + µ12(r\n∗ 1 − c0))dP2dP1 − 1\n2 ,\nwhere we used that c0 < ck < ∀k. Therefore, for µ k 2\nµk1 large enough, rk = r∗ and the algorithm\nterminates." }, { "heading": "A.2 ALGORITHMIC DETAILS", "text": "Here we provide details on how we optimize our adaptive loss in-between penalty update (µa) steps, shown in Algorithm 2." }, { "heading": "A.3 METHODS", "text": "We compare the performance of the following methods:\nAlgorithm 2: AdaptiveOptimize Given: hθ, L, ~µ, c,DTr,DVal, np, lr,B p← 0, φ∗ ←∞ h∗ ← hθ while p ≤ np // Making loss progress do\nfor {(xi, yi, ai}Bi=1 ∈ DTr // Run one epoch of SGD on training set do\nIa = {i ∈ {1, . . . , B} ∧ ai = a}, R̄Ba(hθ) = 1|Ia| ∑ i∈Ia L(hθ(xi), yi) // Empirical risks\nφB(hθ) = ∑ a∈A R̄ B a(hθ) + µa(R̄ B a(hθ)− c)+2\nθ ← θ − lr∇θφB(hθ) // Gradient step end if φVal(hθ) < φ∗ // Evaluate improvement on Val and update target risks then\nh∗ ← hθ; φ∗ ← φVal(hθ); p← 0 else\np← p+ 1; end RVal ← R̄Val(h)\nend Return: h∗, RVal\nKamishima. Kamishima et al. (2012) uses logistic regression as a baseline classifier, and requires numerical input (observations), and binary sensitive attribute and target variable. Fairness is controlled via a regularization term with a tuning parameter η that controls trade-off between fairness and overall accuracy. η is optimized via grid search with η ∈ (0, 300) as in the original paper. We report results on the hyperparameter configuration that produces the smallest accuracy disparity between sensitive subgroups.\nFeldman. Feldman et al. (2015) provides a preprocessing algorithm to sanitize input observations. It modifies each input attribute so that the marginal distribution of each coordinate is independent on the sensitive attribute. The degree to which these marginal distributions match is controlled by a λ parameter between 0 and 1. It can handle numerical and categorical observations, as well as non-binary sensitive attributes, and arbitrary target variables. Following Friedler et al. (2019), we train a linear logistic regressor on top of the sanitized attributes. λ is optimized via grid search with increments of 0.05. We report results on the hyperparameter configuration that produces the smallest accuracy disparity between sensitive subgroups.\nZafar. Zafar et al. (2017) Addresses disparate mistreatment via a convex relaxation. Specifically, in the implementation provided in Friedler et al. 
(2019), they train a logistic regression classifier with a fairness constraint that minimizes the covariance between the sensitive attribute and the classifier decision boundary. This algorithm can handle categorical sensitive attributes, binary target variables, and numerical observations. The maximum admissible covariance is controlled by a hyperparameter c, tuned by logarithmic grid search with values between 0.001 and 1. We report results on the hyperparameter configuration that produces the smallest accuracy disparity between sensitive subgroups.
Hardt. Hardt et al. (2016) proposes a post-processing algorithm that takes an arbitrary predictor and the sensitive attribute as input, and produces a new, fair predictor that satisfies equalized odds. This algorithm can handle binary target variables, an arbitrary number of sensitive attributes, and any baseline predictor, but requires test-time access to sensitive attributes. It does not contain any tuning parameter. We apply this method on top of both the Naive classifier and our Pareto-fair classifier.
Naive Classifier (Naive). A standard classifier, trained to minimize the expected risk $h = \arg\min_{h \in \mathcal{H}} \mathbb{E}_{X,A,Y}[L(h(X), Y)]$. The baseline classifier class $\mathcal{H}$ is implemented as a neural network and varies by experiment as described in Section A.6; the loss function also varies by experiment and is likewise described in Section A.6. Optimization is done via stochastic gradient descent.
Naive Balanced (NaiveB). A baseline classifier designed to address undersampling of minority classes, trained to minimize a class-rebalanced expected risk $h = \arg\min_{h \in \mathcal{H}} \mathbb{E}_{A \sim U[1, \ldots, |\mathcal{A}|],\, (X,Y) \sim P(X,Y|A)}[L(h(X), Y)]$. Like the Naive classifier, it is implemented as a neural network and optimized via stochastic gradient descent. The sole difference from the Naive classifier is that, during training, samples are drawn from the modified input distribution $A \sim U[1, \ldots, |\mathcal{A}|]$; $X, Y \,|\, A \sim P(X, Y | A)$, which is achieved by re-weighted sampling of the original training dataset.
Pareto Fair. Our proposed methodology, trained to minimize an adaptive loss function using the procedure described in Algorithm 1. It addresses risk disparity minimization without introducing unnecessary harm to any subgroup. The baseline classifier class $\mathcal{H}$ is implemented as a neural network and varies by experiment as described in Section A.6; the loss function also varies by experiment and is likewise described in Section A.6." }, { "heading": "A.4 EVALUATION METRICS", "text": "Here we describe the metrics used to evaluate the performance of all tested methods. We are given a set of test samples $D_t = \{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in \mathcal{X}$ is a realization of our model input and $y_i \in \mathcal{Y}$ the corresponding target. We assume that $\mathcal{Y}$ is a finite alphabet, as in a classification problem, and we represent the one-hot encoding of $y_i$ as $\vec{e}_i$. Given a trained model $h : \mathcal{X} \rightarrow [0, 1]^{|\mathcal{Y}|}$, the predicted output for an input $x_i$ is a vector $h(x_i) = \vec{p}_i$ such that $(\vec{p}_i)_j \in [0, 1]$ for all $j \in \{1, \ldots, |\mathcal{Y}|\}$ and $\sum_{j=1}^{|\mathcal{Y}|} (\vec{p}_i)_j = 1$ (e.g., the output of a softmax layer). The predicted class is $\hat{y}_i = \arg\max_j (\vec{p}_i)_j$ and its associated confidence is $\hat{p}_i = \max_j (\vec{p}_i)_j$. Ideally $\hat{y}_i$ should be the same as $y_i$. Using these definitions, we compute the following metrics.
Accuracy (AC): $\frac{1}{N} \sum_{i=1}^{N} \mathbf{1}(y_i = \hat{y}_i)$. Fraction of correct classifications in the dataset.
Confidence (CO): $\frac{1}{N} \sum_{i=1}^{N} \hat{p}_i$. 
Average magnitude of the predicted class probability.
Brier Score (BS): $\frac{1}{N} \sum_{i=1}^{N} \|\vec{e}_i - \vec{p}_i\|_2^2$, where $\vec{e}_i$ is the one-hot representation of the categorical ground-truth value $y_i$. This quantity is also known as the mean square error (MSE).
Cross-Entropy (CE): $-\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{|\mathcal{Y}|} (\vec{e}_i)_j \log (\vec{p}_i)_j$, also known as the negative log-likelihood (NLL) of the multinomial distribution.
Expected Calibration Error (ECE): $\frac{1}{N} \sum_{m=1}^{M} \left| \sum_{i \in B_m} [\mathbf{1}(y_i = \hat{y}_i) - \hat{p}_i] \right|$, where $M$ is the number of bins dividing the interval $[0, 1]$ and $B_m = \{i \in \{1, \ldots, N\} : \hat{p}_i \in (\frac{m-1}{M}, \frac{m}{M}]\}$ is the group of samples to which our model assigns a confidence $\hat{p}_i$ in the interval $(\frac{m-1}{M}, \frac{m}{M}]$. This measures how closely the predicted probabilities match the true base rates.
Maximum Calibration Error (MCE): $\max_{m \in \{1, \ldots, M\}} \left| \frac{1}{|B_m|} \sum_{i \in B_m} [\mathbf{1}(y_i = \hat{y}_i) - \hat{p}_i] \right|$. This measures worst-case miscalibration errors.
These metrics are computed independently for each sensitive subgroup on the test set and reported in Section A.7." }, { "heading": "A.5 EXPERIMENTS ON REAL DATA", "text": "The following is a description of the data and experiments for each of the real datasets. The information presented here is summarized in Table 7.
MIMIC-III. This dataset consists of clinical records collected from adult ICU patients at the Beth Israel Deaconess Medical Center (MIMIC-III dataset) Johnson et al. (2016). The goal is predicting patient mortality from clinical notes. We follow the pre-processing methodology outlined in Chen et al. (2018), where we analyze clinical notes acquired during the first 48 hours of ICU admission; discharge notes were excluded, as were ICU stays under 48 hours. Tf-idf statistics on the 10,000 most frequent words in clinical notes are taken as input features.
We identify 8 sensitive groups as the combination of age (under/over 55 years old), ethnicity as determined by the majority group (white/nonwhite), and outcome (alive/deceased). Here we use the term adult to refer to people under 55 years old and senior otherwise. This dataset shows large sample disparities, since 56.7% of samples correspond to the overall majority group (alive-senior-white) and only 0.4% to the overall minority group (deceased-adult-nonwhite).
We used a fully connected neural network as described in Table 8 as the baseline classifier for our proposed Pareto-fair algorithm. We compare our results against both the Naive and Naive Balanced algorithms using the same neural network architecture and cross-entropy (CE) as our training loss. We also evaluate the performance of Zafar applied on the feature embeddings learned by the Naive Balanced classifier (results for Zafar over the original input features were not promising).
We report the performance across a 5-fold split of the data; we used a 60/20/20 train-validation-test partition as described in Table 7 and report results over the test set. We denote the overall sensitive attribute as the combination of outcome (A: alive / D: deceased), age (A: adult / S: senior), and ethnicity (W: white / NW: nonwhite), with shorthand notation of the form D/A/W denoting, for example, a deceased white adult. 
We also note that results on both Zafar and Hardt were done over only the sensitive attributes Adult/Senior and White/Nonwhite, outcome was not considered as a sensitive attribute for both methods since Hardt requires test-time access to sensitive attributes, which would not be possible in this case, and Zafar attempts to decorrelate sensitive attributes and classification decision boundaries, which is counterproductive when the sensitive attribute includes the correct decision outcome.\nHAM10000. This dataset contains over 10, 000 dermatoscopic images of skin lesions over a diverse population Tschandl et al. (2018). Lesions are classified in 7 diagnostic categories, and the goal is to learn a model capable of identifying the category from the lesion image. The dataset is highly unbalanced since 81% of the samples correspond to a melanocytic nevi lesion (nv), and 0.5% to dermatofibroma (df).\nHere we chose to use the diagnosis class as both the target and sensitive variable, casting balanced risk minimization as a particular use-case for Pareto fairness.\nWe load a pre-trained DenseNet121 network Huang et al. (2017) and train it to classify skin lesions from dermatoscopic images using our Pareto fairness algorithm. We compared against the Naive and the Naive balanced training setup. Note that in the naive balance approach we use a batch sampler where images from each class have the same probability, this can be seen as a naive oversampling technique. Table 8 shows the details of the implementation.\nWe used the original train-validation-test (80/10/10) split, and report results on the test set. Notation for each group follows the original notation: Actinic keratoses and intraepithelial carcinoma / Bowen’s disease (akiec), basal cell carcinoma (bcc), benign keratosis-like lesions (bkl), dermatofibroma (df), melanoma (mel), melanocytic nevi (nv) and vascular lesions (vasc).\nAdult. The Adult UCI dataset Dua & Graff (2017a) is based on the 1994 U.S. Census and contains data on 32, 561 adults. The data contains 105 binarized observations representing education status, age, ethnicity, gender, and marital status and a target variable indicating income status (binary attribute representing over or under $50, 000). Following Friedler et al. (2019), we take ethnicity and gender as our target sensitive attributes, defining two subgroups (White Males and Other). We also present results taking just the gender as sensitive attribute (Male/Female). To compare our Pareto Fair algorithm evenly against the other methods, we limit our hypothesis class to linear logistic regression.\nGerman. The German credit dataset Dua & Graff (2017a) contains 20 observations collected across 1000 individuals, and a binary target variable assessing the individual’s credit score as good or bad. We take gender (Male/Female) as the sensitive attribute, which is not included in the data but can be inferred. As in the Adult dataset, we limit our hypothesis class to linear logistic regression to compare evenly across methodologies." }, { "heading": "A.6 NEURAL ARCHITECTURES AND PARAMETERS", "text": "Table 8 summarizes network architectures and loss functions for all experiments in Section 5. Note that all networks have a standard dense softmax as their final layer. The training optimizer is standard ADAM Kingma & Ba (2014), loss functions were either crossentropy (CE) or brier score (BS), also known as categorical mean square error (MSE)." 
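To complement the metric definitions in Section A.4, the following is a small self-contained sketch of how ECE can be computed from model outputs (our illustration, not the authors' evaluation code); per-group values as reported in the tables follow by applying it to the subset of test samples with $A = a$:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE as in Section A.4: (1/N) * sum_m |sum_{i in B_m} [1(y_i = yhat_i) - phat_i]|,
    with B_m the samples whose confidence falls in bin m of [0, 1]."""
    probs = np.asarray(probs)                 # shape (N, |Y|), rows sum to 1
    labels = np.asarray(labels)               # shape (N,)
    conf = probs.max(axis=1)                  # phat_i
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for m in range(n_bins):
        in_bin = (conf > edges[m]) & (conf <= edges[m + 1])  # ((m-1)/M, m/M]
        if in_bin.any():
            ece += abs((correct[in_bin] - conf[in_bin]).sum())
    return ece / len(labels)
```

Replacing the outer sum with a maximum over the per-bin averages yields MCE in the same fashion.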
}, { "heading": "A.7 SUPPLEMENTARY RESULTS", "text": "The following tables show a quantitative performance comparisons between the various fairness methods described in Section A.3 over the datasets described in Section A.5. All metrics presented are described in Section A.4 and computed per sensitive attribute. We additionally provide results for the overall mean across the dataset (sample mean), as well as results on the group-balanced mean across groups (group mean). We use group max and group min to denote the maximum and minimum metric values attained over all sensitive groups. We denote group max - group min as Discrimination, which is the worst case difference in performance across groups. When the dataset contains multiple splits, we report both split mean and split standard deviation." }, { "heading": "A.8 ANALYSIS OF PARETO-OPTIMAL CLASSIFIERS IN THE INFINITE DATA AND MODEL CAPACITY REGIME", "text": "In this section we analyze the form of Pareto-optimal solutions to classification (and regression) tasks in the asymptotically ideal case where we have infinite capacity hypothesis classes, that is, our hypothesis classH contains every function mapping points from the observation space X to the classification or regression space R|Y|.\nWe additionally assume that the joint distributions between target variables Y and observation variables X given every sensitive attribute are known (i.e., P (Y,X|A) is known), and that the loss function L(h(x), y) is convex with respect to h(x).\nAny joint loss of the form φPA({Ra(h)}) = EX,Y,A[L(h(X), Y ] = ∑ a∈A PaRa(h), (3)\nproduces solutions which are already in the Pareto front of {Ra(h)} for any distribution ofA ∼ PA. Since the loss function is convex with respect to h(X) and the hypothesis class is complete, it can also be shown that for any point in the Pareto front, there exists a value of PA such that that point is reached by minimizing φPA({Ra(h)}) Geoffrion (1968); Koski (1985); Miettinen (2012) We can therefore analyze the Bayes-optimal classifier hPA which minimizes the Naive risk φPA({Ra(h)}), and analytically compute the sub-population risks Ra(hPA) induced. In general, we can write\nE[Y |x] = ∑ a∈A E[Y |x, a]P (a|x),\n=\n∑ a∈A\nE[Y |x,a]P (x|a)Pa∑ a∈A P (x|a)Pa ,\nhPA(X) = arg min η\nE[L(η, Y )|X],\nRa(hPA) = EX,Y [L(hPA(X), Y )|A = a],\n(4)\nand in the particular case where target variable Y is categorical, and the classifier loss is an L2 loss against the one-hot encoding of variable Y the equations reduce to\nE[Y |x] = ∑\na∈A ~P [Y |x,a]P (x|a)Pa∑ a∈A P (x|a)Pa ,\nhPA(X) = E[Y |x], Ra(hPA) = EX [ ∑ y P (y|a,X) ∑ y′(h y′ PA (X)− δ[y − y′])2].\n(5)" } ]
2019
null
SP:b9eff5f0e2d89e5074e564fcbe7b0183c8c4818b
[ "Overview: This paper describes a shortfall with the DDPG algorithm on a continuous state action space with sparse rewards. To first prove the existence of this shortfall, the authors demonstrate its theoretical possibility by reviewing the behavior of DDPG actor critic equations and the “two-regimes” proofs in the appendices. They then demonstrate the occurrence of the critic being updated faster than the actor, leading to a sub-optimal convergence from which the model can never recover. In this demonstration, they use a very simple environment they created, “1D-Toy”. The 1D-Toy environment is a one-dimensional, discrete-time, continuous state and action problem. Moving to the left at all in 1D-Toy results in a reward and episode end. Episode length was set at 50, as the agent could move to the right forever and never stop the episode. The authors demonstrate how the failure of the agent to obtain 100% success in this simple environment was, in fact, due to the phenomenon mentioned earlier. If the agent managed to obtain a reward very early on in training, it was highly likely the agent would converge on an optimal solution. If not, the actor would drift to a state were it no longer updates, and the critic would similarly no longer update either, resulting in a deadlock and suboptimal policy. The authors then generalize their findings using a helpful figure (Figure 7) which describes the cyclical nature of the phenomenon and how it can happen in any environment. Finally, the authors mention potential solutions to prevent the training failure from occurring, such as avoiding sparse rewards, replacing the critic update to avoid loss, etc.", "The paper investigates why DDPG can sometimes fail in environments with sparse rewards. It presents a simple environment that helps the reader build intuition and supports the paper's empirical investigation. First, the paper shows that DDPG fails on the simple environment in ~6% of cases, despite the solution being trivial—go left from the start state. The paper then augments DDPG with epsilon-greedy-style exploration to see if the cause of these failures is simply inadequate exploration. Surprisingly, in 1% of cases DDPG still fails. The paper shows that even in these failure cases there were still rewarded transitions that could have been learned from, and investigates relationships between properties of individual runs and the likelihood of failure. The paper then explains how these failures occur: the policy drifts to always going right, and the critic converges to a piecewise constant function whose gradient goes to zero and prevents further updates to the policy. The paper then generalizes this deadlock mechanism to other continuous-action actor-critic algorithms like TD3 and discusses how function approximation helps mitigate this issue. Finally, the paper gives a brief overview of some existing potential solutions." ]
In environments with continuous state and action spaces, state-of-the-art actor-critic reinforcement learning algorithms can solve very complex problems, yet they can also fail in environments that seem trivial, and the reason for such failures is still poorly understood. In this paper, we contribute a formal explanation of these failures in the particular case of sparse-reward, deterministic environments. First, using a very elementary control problem, we illustrate that the learning process can get stuck at a fixed point corresponding to a poor solution. Then, generalizing from the studied example, we provide a detailed analysis of the underlying mechanisms, which results in a new understanding of one of the convergence regimes of these algorithms. The resulting perspective casts new light on already existing solutions to the issues we have highlighted, and suggests other potential approaches.
[]
[ { "authors": [ "Joshua Achiam", "Ethan Knight", "Pieter Abbeel" ], "title": "Towards Characterizing Divergence in Deep Q-Learning. arXiv:1903.08894 [cs], March 2019", "venue": "URL http://arxiv.org/abs/1903", "year": 1903 }, { "authors": [ "Zafarali Ahmed", "Nicolas Le Roux", "Mohammad Norouzi", "Dale Schuurmans" ], "title": "Understanding the impact of entropy on policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "L.C. Baird", "A.H. Klopf" ], "title": "Reinforcement learning with high-dimensional, continuous actions. Technical report, Wright-Patterson Air Force Base Ohio: Wright Laboratory. (Available from the Defense Technical Information", "venue": null, "year": 1993 }, { "authors": [ "Justin A. Boyan", "Andrew W. Moore" ], "title": "Generalization in reinforcement learning: Safely approximating the value function", "venue": "In Advances in neural information processing systems,", "year": 1995 }, { "authors": [ "Cédric Colas", "Olivier Sigaud", "Pierre-Yves Oudeyer" ], "title": "GEP-PG: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms", "venue": "In International Conference in Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Meire Fortunato", "Mohammad Gheshlaghi Azar", "Bilal Piot", "Jacob Menick", "Ian Osband", "Alex Graves", "Vlad Mnih", "Remi Munos", "Demis Hassabis", "Olivier Pietquin", "Charles Blundell", "Shane Legg" ], "title": "Noisy Networks for Exploration", "venue": "URL http://arxiv. org/abs/1706.10295", "year": 2017 }, { "authors": [ "Scott Fujimoto", "David Meger", "Doina Precup" ], "title": "Off-Policy Deep Reinforcement Learning without Exploration", "venue": "[cs, stat], December 2018a. URL http://arxiv.org/abs/ 1812.02900", "year": 2018 }, { "authors": [ "Scott Fujimoto", "Herke van Hoof", "David Meger" ], "title": "Addressing Function Approximation Error in Actor-Critic Methods. https://arxiv.org/abs/1802.09477, February 2018b", "venue": null, "year": 2018 }, { "authors": [ "M. Geist", "O. Pietquin" ], "title": "Parametric value function approximation: A unified view", "venue": "IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL),", "year": 2011 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. arXiv:1801.01290 [cs, stat", "venue": "January 2018a. 
URL http://arxiv.org/abs/1801.01290", "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Kristian Hartikainen", "George Tucker", "Sehoon Ha", "Jie Tan", "Vikash Kumar", "Henry Zhu", "Abhishek Gupta", "Pieter Abbeel" ], "title": "Soft actor-critic algorithms and applications", "venue": "arXiv preprint arXiv:1812.05905,", "year": 2018 }, { "authors": [ "Max Jaderberg", "Volodymyr Mnih", "Wojciech Marian Czarnecki", "Tom Schaul", "Joel Z Leibo", "David Silver", "Koray Kavukcuoglu" ], "title": "Reinforcement learning with unsupervised auxiliary tasks", "venue": "arXiv preprint arXiv:1611.05397,", "year": 2016 }, { "authors": [ "Dmitry Kalashnikov", "Alex Irpan", "Peter Pastor", "Julian Ibarz", "Alexander Herzog", "Eric Jang", "Deirdre Quillen", "Ethan Holly", "Mrinal Kalakrishnan", "Vincent Vanhoucke" ], "title": "Qt-Opt: Scalable deep reinforcement learning for vision-based robotic manipulation", "venue": "arXiv preprint arXiv:1806.10293,", "year": 2018 }, { "authors": [ "George Konidaris", "Andrew Barto" ], "title": "Autonomous shaping: Knowledge transfer in reinforcement learning", "venue": "In Proceedings of the 23rd international conference on Machine learning,", "year": 2006 }, { "authors": [ "Timothy P. Lillicrap", "Jonathan J. Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "URL http://arxiv.org/abs/1509", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K. Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Matthias Plappert", "Rein Houthooft", "Prafulla Dhariwal", "Szymon Sidor", "Richard Y. Chen", "Xi Chen", "Tamim Asfour", "Pieter Abbeel", "Marcin Andrychowicz" ], "title": "Parameter space noise for exploration", "venue": null, "year": 2017 }, { "authors": [ "Martin Riedmiller", "Roland Hafner", "Thomas Lampe", "Michael Neunert", "Jonas Degrave", "Tom Van de Wiele", "Volodymyr Mnih", "Nicolas Heess", "Jost Tobias Springenberg" ], "title": "Learning by Playing - Solving Sparse Reward Tasks from Scratch", "venue": "URL http://arxiv.org/abs/1802.10567", "year": 2018 }, { "authors": [ "David Silver", "Guy Lever", "Nicolas Heess", "Thomas Degris", "Daan Wierstra", "Martin Riedmiller" ], "title": "Deterministic Policy Gradient Algorithms", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Riley Simmons-Edler", "Ben Eisner", "Eric Mitchell", "Sebastian Seung", "Daniel Lee" ], "title": "Q-learning for continuous actions with cross-entropy guided policies", "venue": null, "year": 2019 }, { "authors": [ "Richard S. Sutton", "Andrew G. Barto" ], "title": "Reinforcement Learning: An Introduction", "venue": "ISBN 978-0-262-03924-6. Google-Books-ID: 6DKPtQEACAAJ", "year": 2018 }, { "authors": [ "John N. Tsitsiklis", "Benjamin Van Roy" ], "title": "Analysis of temporal-difference learning with function approximation", "venue": "In Advances in neural information processing systems,", "year": 1997 }, { "authors": [ "Hado Van Hasselt", "Marco A. 
Wiering" ], "title": "Reinforcement learning in continuous action spaces", "venue": "In IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning (ADPRL),", "year": 2007 }, { "authors": [ "Hado van Hasselt", "Yotam Doron", "Florian Strub", "Matteo Hessel", "Nicolas Sonnerat", "Joseph Modayil" ], "title": "Deep Reinforcement Learning and the Deadly Triad", "venue": "[cs],", "year": 2018 }, { "authors": [ "Christopher J.C.H. Watkins" ], "title": "Learning with Delayed Rewards", "venue": "PhD thesis, Psychology Department,", "year": 1989 }, { "authors": [ "Matthieu Zimmer", "Paul Weng" ], "title": "Exploiting the sign of the advantage function to learn deterministic policies in continuous domains", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "The Deep Deterministic Policy Gradient (DDPG) algorithm (Lillicrap et al., 2015) is one of the earliest deep Reinforcement Learning (RL) algorithms designed to operate on potentially large continuous state and action spaces with a deterministic policy, and it is still one of the most widely used. However, it is often reported that DDPG suffers from instability, in the form of sensitivity to hyper-parameters and a propensity to converge to very poor solutions or even diverge. Various algorithms have improved stability by addressing well-identified issues, such as the over-estimation bias in TD3 (Fujimoto et al., 2018b), but because a fundamental understanding of the phenomena underlying these instabilities is still missing, it is unclear whether these ad hoc remedies truly address the source of the problem. Thus, better understanding why these algorithms can fail even in very simple environments is a pressing question.

To investigate this question, we introduce in Section 4 a very simple one-dimensional environment with a sparse reward function where DDPG sometimes fails. Analyzing this example allows us to provide a detailed account of these failures. We then reveal the existence of a cycle of mechanisms operating in the sparse reward and deterministic case, leading to quick convergence to a poor policy. In particular, we show that, when the reward is not discovered early enough, these mechanisms can lead to a deadlock situation where neither the actor nor the critic can evolve anymore. Critically, this deadlock persists even when the agent is subsequently trained with rewarded samples.

The study of these mechanisms is backed up with formal proofs in a simplified context where the effects of function approximation are ignored. Nevertheless, the resulting understanding helps analyze the practical phenomena encountered when using actors and critics represented as neural networks. In this new light, we revisit in Section 5 a few existing algorithms whose components provide an alternative to the building blocks involved in the undesirable cyclic convergence process, and we suggest alternative solutions to these issues." }, { "heading": "2 RELATED WORK", "text": "Issues when combining RL with function approximation have been studied for a long time (Baird & Klopf, 1993; Boyan & Moore, 1995; Tsitsiklis & Van Roy, 1997). In particular, it is well known that deep RL algorithms can diverge when they meet the three conditions coined as the “deadly triad” (Sutton & Barto, 2018), that is, when they use (1) function approximation, (2) bootstrapping updates and (3) off-policy learning. However, these questions are mostly studied in the continuous state, discrete action case. For instance, several recent papers have studied the mechanism of this instability using DQN (Mnih et al., 2013). In this context, four failure modes have been identified from a theoretical point of view by considering the effect of a linear approximation of the deep-Q updates and by identifying conditions under which the approximate updates of the critic are contraction maps for some distance over Q-functions (Achiam et al., 2019). Meanwhile, van Hasselt et al. (2018) show that, due to its stabilizing heuristics, DQN does not diverge much in practice when applied to the ATARI domain.

In contrast to these papers, here we study a failure mode specific to continuous action actor-critic algorithms. 
It hinges on the fact that one cannot take the maximum over actions, and must rely on the actor as a proxy for providing the optimal action instead. Therefore, the failure mode identified in this paper cannot be reduced to any of the ones that affect DQN. Besides, the theoretical derivations provided in the appendices show that the failure mode we are investigating does not depend on function approximation errors, thus it cannot be directly related to the deadly triad.

More related to our work, several papers have studied failure to gather rewarded experience from the environment due to poor exploration (Colas et al., 2018; Fortunato et al., 2017; Plappert et al., 2017), but we go beyond this issue by studying a case where the reward is actually found but not properly exploited. Finally, like us, the authors of Fujimoto et al. (2018a) study a failure mode specific to DDPG-like algorithms, but it is a different one. They show, under a batch learning regime, that DDPG suffers from an extrapolation error phenomenon, whereas we work in the more standard incremental learning setting and focus on a deadlock resulting from the shape of the Q-function in the sparse reward case." }, { "heading": "3 BACKGROUND: DEEP DETERMINISTIC POLICY GRADIENT", "text": "The DDPG algorithm (Lillicrap et al., 2015) is a deep RL algorithm based on the Deterministic Policy Gradient theorem (Silver et al., 2014). It borrows the use of a replay buffer and target networks from DQN (Mnih et al., 2015). DDPG is an instance of the actor-critic model. It learns both an actor function π_ψ (also called the policy) and a critic function Q_θ, represented as neural networks whose parameters are respectively noted ψ and θ.

The deterministic actor takes a state s ∈ S as input and outputs an action a ∈ A. The critic maps each state-action pair (s, a) to a value in ℝ. The reward r : S × A → ℝ, the termination function t : S × A → {0, 1} and the discount factor γ < 1 are also specified as part of the environment. The actor and critic are updated using stochastic gradient descent on two losses L_ψ and L_θ. These losses are computed from mini-batches of samples (s_i, a_i, r_i, t_i, s_{i+1}), where each sample corresponds to a transition s_i → s_{i+1} resulting from performing action a_i in state s_i, with subsequent reward r_i = r(s_i, a_i) and termination index t_i = t(s_i, a_i).

Two target networks π_{ψ′} and Q_{θ′} are also used in DDPG. Their parameters ψ′ and θ′ respectively track ψ and θ using exponential smoothing. They are mostly useful to stabilize function approximation when learning the critic and actor networks. Since they do not play a significant role in the phenomena studied in this paper, we ignore them in the formal proofs given in the appendices.

Equations (1) and (2) define L_ψ and L_θ:

$$L_\psi = -\sum_i Q_\theta\big(s_i, \pi_\psi(s_i)\big) \qquad (1)$$

$$\forall i,\; y_i = r_i + \gamma (1 - t_i)\, Q_{\theta'}\big(s_{i+1}, \pi_{\psi'}(s_{i+1})\big), \qquad L_\theta = \sum_i \big[ Q_\theta(s_i, a_i) - y_i \big]^2. \qquad (2)$$

Training on the loss given in (1) yields the parameter update in (3), with α the learning rate:

$$\psi \leftarrow \psi + \alpha \sum_i \frac{\partial \pi_\psi(s_i)}{\partial \psi}^{\!T} \nabla_a Q_\theta(s_i, a)\big|_{a = \pi_\psi(s_i)}. \qquad (3)$$

As DDPG uses a replay buffer, the mini-batch samples are acquired using a behaviour policy β which may be different from the actor π. 
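As a concrete illustration of the updates in Equations (1)–(3), here is a minimal sketch of one DDPG training step. It assumes PyTorch, and it assumes that `actor(s)` and `critic(s, a)` (and their target copies) are nn.Module networks with these signatures; all names and hyper-parameters are illustrative, not the paper's code.

```python
# Minimal sketch of one DDPG update step implementing Equations (1)-(3).
# Assumptions: PyTorch; critic(s, a) and actor(s) are nn.Module instances;
# batch tensors share the same leading (batch) dimension.
import torch

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, gamma=0.99):
    s, a, r, t, s_next = batch
    # Critic target and loss, Equation (2)
    with torch.no_grad():
        y = r + gamma * (1.0 - t) * target_critic(s_next, target_actor(s_next))
    critic_loss = ((critic(s, a) - y) ** 2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor loss, Equation (1); its gradient is the update of Equation (3).
    # (The critic parameters also receive gradients here; they are zeroed
    # again by the next critic update.)
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```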
In practice, β is defined as π plus exploration noise, which in the case of DDPG is either Gaussian noise or the more sophisticated Ornstein-Uhlenbeck noise.

Importantly for this paper, the behaviour of DDPG can be characterized as an intermediate between two extreme regimes:

• When the actor is updated much faster than the critic, the policy becomes greedy with respect to this critic, resulting in a behaviour closely resembling that of the Q-LEARNING algorithm. When it is close to this regime, DDPG can be characterized as off-policy.

• When the critic is updated much faster than the actor, the critic tends towards Q^π(s, a). The problems studied in this paper directly come from this second regime.

A more detailed characterization of these two regimes is given in Appendix A." }, { "heading": "4 A NEW FAILURE MODE", "text": "In this section, we introduce a simplistic environment which we call 1D-TOY. It is a one-dimensional, discrete-time, continuous state and action problem, depicted in Figure 1.

Despite its simplicity, DDPG can fail on 1D-TOY. We first show that DDPG fails to reach 100% success. We then show that if learning a policy does not succeed soon enough, the learning process can get stuck. Besides, we show that the initial actor can be significantly modified in the initial stages, before the first reward is found. We explain how the combination of these phenomena can result in a deadlock situation. We generalize this explanation to any deterministic and sparse reward environment by revealing and formally studying an undesirable cyclic process which arises in such cases. Finally, we explore the consequences of getting into this cyclic process." }, { "heading": "4.1 EMPIRICAL STUDY", "text": "In all experiments, we set the maximum episode length N to 50, but the observed phenomena persist with other values.

Residual failure to converge using different noise processes We start by running DDPG on the 1D-TOY environment. This environment is trivial, as one infinitesimal step to the left is enough to obtain the reward, end the episode and succeed, thus we might expect a quick 100% success rate. However, a first attempt using an Ornstein-Uhlenbeck (OU) noise process shows that DDPG succeeds in only 94% of cases, see Figure 2a.

These failures might come from an exploration problem. Indeed, at the start of each episode the OU noise process is reset to zero and gives little noise in the first steps of the episode. In order to remove this potential source of failure, we replace the OU noise process with an exploration strategy similar to ε-greedy, which we call “probabilistic noise”. For some 0 < p < 1, with probability p, the action is randomly sampled (and the actor is ignored), and with probability 1 − p no noise is used and the raw action is returned.

[Figure 2: Success rate of variants of DDPG on 1D-TOY over learning steps (y-axis: % of successful runs; x-axis: simulation steps from 0 to 100k), averaged over 10k seeds. (a) Success rate of DDPG with Ornstein-Uhlenbeck (OU) and probabilistic noise: even with probabilistic noise, DDPG fails on about 1% of the seeds. (b) Comparison between DDPG with probabilistic noise and a variant in which the behaviour policy is set to the optimal policy π* after 20k steps. More details on the learning algorithm and success evaluation are given in Appendix E.]
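As an illustration, the "probabilistic noise" behaviour policy can be sketched as follows; the deterministic actor `pi` and the 1D-TOY action bounds [-0.1, 0.1] are assumed:

```python
# Sketch of the "probabilistic noise" behaviour policy: with probability p
# the action is sampled uniformly in the action space, otherwise the raw
# actor action is returned.
import random

def behaviour_action(pi, s, p=0.1, a_max=0.1):
    if random.random() < p:
        return random.uniform(-a_max, a_max)  # actor ignored with probability p
    return pi(s)                              # raw actor action otherwise
```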
In our tests, we used p = 0.1. This guarantees at least a 5% chance of success at the first step of each episode, for any policy. Nevertheless, Figure 2a shows that even with probabilistic noise, about 1% of seeds still fail to converge to a successful policy in 1D-TOY, even after 100k training steps. All the following tests are performed using probabilistic noise.

We now focus on these failures. On all failing seeds, we observe that the actor has converged to a saturated policy that always goes to the right (∀s, π(s) = 0.1). However, some mini-batch samples have non-zero rewards because the agent still occasionally moves to the left, due to the probabilistic noise applied during rollouts. The expected fraction of non-zero rewards is slightly more than 0.1%.¹ Figure 3a shows the occurrence of rewards in minibatches taken from the replay buffer when training DDPG on 1D-TOY. After each rollout (episode) of n steps, the critic and actor networks are trained n times on minibatches of size 100. So, for instance, a failed episode of size 50 is followed by training on a total of 5000 samples, out of which we expect, on average, more than 5 to be rewarded transitions. More details about the implementation are available in Appendix E.

¹ 10% of steps are governed by probabilistic noise, of which at least 2% are the first episode step, of which 50% are steps going to the left and leading to the reward; hence 0.1 × 0.02 × 0.5 = 0.1%.

The constant presence of rewarded transitions in the minibatches suggests that the failures of DDPG on this environment are not due to insufficient exploration by the behaviour policy.

Correlation between finding the reward early and finding the optimal policy We have shown that DDPG can get stuck in 1D-TOY despite finding the reward regularly. Now we show that when DDPG finds the reward early in the training session, it is also more successful in converging to the optimal policy. On the other hand, when the first reward is found late, the learning process more often gets stuck with a sub-optimal policy.

From Figure 3b, the early steps appear to have a high influence on whether the training will be successful or not. For instance, if the reward is found in the first 50 steps by the exploration noise (which happens in 63% of cases), then the success rate of DDPG is 100%. However, if the reward is first found after more than 50 steps, then the success rate drops to 96%. Figure 3b shows that finding the reward later results in lower success rates, down to 87% for runs in which the reward was not found in the first 1600 steps. Therefore, we claim that there exists a critical time frame for finding the reward in the very early stages of training.

Spontaneous actor drift At the beginning of each training session, the actor and critic of DDPG are initialized to represent respectively close-to-zero state-action values and close-to-zero actions. Besides, as long as the agent does not find a reward, it does not benefit from any utility gradient. Thus we might expect the actor and critic to remain constant until the first reward is found. Actually, we show that even in the absence of reward, training the actor and critic triggers non-negligible updates that cause the actor to reach a saturated state very quickly.

To investigate this, we use a variant of 1D-TOY called DRIFT, where the only difference is that no rewarded or terminal transitions are present in the environment. 
We also use a stripped-down version of DDPG, removing rollouts and using random sampling of states and actions as minibatches for training.

Figure 4b shows that even in the absence of reward, the actor function drifts rapidly (notice the horizontal scale in steps) to a saturated policy, in a number of steps comparable to the “critical time frame” identified above. The critic also goes through a transient phase before stabilizing.

In Figure 4a, the fact that max_{s,a} |Q(s, a)| can increase in the absence of reward can seem counterintuitive, since in the loss function presented in Equation (2), |y_i| can never be greater than max_{s,a} |Q(s, a)|. However, it should be noted that the changes made to Q are not local to the minibatch points, and increasing the value of Q for one input (s, a) may cause its value to increase for other inputs too, which may cause an increase in the global maximum of Q. This phenomenon is at the heart of the over-estimation bias when learning a critic (Fujimoto et al., 2018b), but this bias does not play a key role here." }, { "heading": "4.2 EXPLAINING THE DEADLOCK SITUATION FOR DDPG ON 1D-TOY", "text": "Up to now, we have shown that DDPG fails about 1% of the time on 1D-TOY, despite the simplicity of this environment. We have now collected the necessary elements to explain the mechanisms of this deadlock in 1D-TOY.

Figure 5 shows the value of the critic in a failed run of DDPG on 1D-TOY. We see that the value of the reward is not propagated correctly outside of the region in which the reward is found in a single step, {(s, a) | s + a < 0}. The key of the deadlock is that once the actor has drifted to ∀s, π(s) = 0.1, it is updated according to ∇_a Q_θ(s, a)|_{a=π_ψ(s)} (Equation (3)). Figure 5b shows that for a = π(s) = 0.1, this gradient is zero, therefore the actor is not updated. Besides, the critic is updated using y_i = r(s_i, a_i) + γ Q(s′_i, π(s′_i)) as a target. Since Q(s′_i, 0.1) is zero, the critic only needs to be non-zero for directly rewarded actions, and for all other samples the target value remains zero. In this state, the critic loss given in Equation (2) is minimal, so there is no further update of the critic and no further propagation of the state-action values. The combination of the above two facts clearly results in a deadlock.

Importantly, the constitutive elements of this deadlock do not depend on the batches used to perform the update, and therefore do not depend on the experience selection method. We tested this experimentally by substituting the optimal policy for the behaviour policy after 20k training steps. Results are presented in Figure 2b and show that, once stuck, even when it is given ideal samples, DDPG stays stuck in the deadlock configuration. This also explains why finding the reward early results in better performance. When the reward is found early enough, π(s_0) has not drifted too far, and the gradient of Q(s_0, a) at a = π(s_0) drives the actor back in the correct direction.

Note however that even when the actor drifts to the right, DDPG does not always fail. Indeed, because of function approximators, the shape of the critic when finding the reward for the first time varies, and it sometimes converges slowly enough for the actor to be updated before the convergence of the critic.
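The deadlock can also be checked numerically. The following sketch (NumPy; clipping the next state to [0, 1] is our assumption about the 1D-TOY state space) verifies that with the drifted actor and the piecewise-constant critic, every Bellman target already equals the current critic value:

```python
# Numerical check of the deadlock: with pi(s) = 0.1 and Q(s, a) = 1_{s+a<0},
# the targets y_i of Equation (2) coincide with Q(s_i, a_i) on every sample,
# so the critic loss is already minimal and the actor gradient is zero.
import numpy as np

rng = np.random.default_rng(0)
Q = lambda s, a: (s + a < 0).astype(float)       # critic of the failed runs
pi = lambda s: np.full_like(s, 0.1)              # drifted, saturated actor
gamma = 0.99

s = rng.uniform(0.0, 1.0, 100_000)
a = rng.uniform(-0.1, 0.1, 100_000)
r = t = (s + a < 0).astype(float)                # 1D-TOY: reward = termination
s_next = np.clip(s + a, 0.0, 1.0)                # assumed state space [0, 1]

y = r + gamma * (1 - t) * Q(s_next, pi(s_next))  # critic target, Equation (2)
print(np.abs(y - Q(s, a)).max())                 # prints 0.0: loss (2) is minimal
```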
Figure 6 summarizes the above process. The entry point is represented using a green dot. First, the actor drifts to ∀s, π(s) = 0.1; then the critic converges to Q^π, which is a piecewise-constant function (experiment in Figure 5, proof in Theorem 1 in Appendix B), which in turn means that the critic provides no gradient, therefore the actor is not updated (as seen in Equation (3); more details in Theorem 2).²

² Note that Figure 5 shows a critic state which is slightly different from the one presented in Figure 6, due to the limitations of function approximators." }, { "heading": "4.3 GENERALIZATION", "text": "Our study of 1D-TOY revealed how DDPG can get stuck in this simplistic environment. We now generalize to the broader context of more general continuous action actor-critic algorithms, including at least DDPG and TD3, acting in any deterministic and sparse reward environment. The generalized deadlock mechanism is illustrated in Figure 7 and explained hereafter in the idealized context of perfect approximators, with formal proofs deferred to the appendices.

Entry point: As shown in the previous section, before the behaviour policy finds any reward, training the actor and critic can still trigger non-negligible updates that may cause the actor to quickly reach a poor state and stabilize. This defines our entry point into the process.

Q tends towards Q^π: A first step into the cycle is that, if the critic is updated faster than the policy, the update rule of the critic Q given in Equation (2) makes Q converge to Q^π. This is presented in detail in Appendix C.

Q^π is piecewise-constant: In Appendix D, we then show that, in a deterministic environment with sparse terminal rewards, Q^π is piecewise-constant, because V^π(s′) only depends on two things: the (integer) number of steps required to reach a rewarded state from s′, and the value of this rewarded state, which is itself piecewise-constant. Note that we can reach the same conclusion with non-terminal rewards, by making the stronger hypothesis on the actor that ∀s, r(s, π(s)) = 0. Notably, this is the case for the actor ∀s, π(s) = 0.1 on 1D-TOY.

Q is approximately piecewise-constant and ∇_a Q(s, a)|_{a=π(s)} ≈ 0: Quite obviously, from the facts that Q^π is piecewise-constant and that Q tends towards Q^π, we can infer that Q progressively becomes almost piecewise-constant as the cyclic process unfolds. Actually, the Q function is estimated by a function approximator, which is never truly discontinuous. The impact of this fact is studied in Section 4.5. However, we can expect Q to have mostly flat gradients since it is trained to match a piecewise-constant function. We can thus infer that, globally, ∇_a Q(s, a)|_{a=π(s)} ≈ 0. And critically, the gradients in the flat regions far from the discontinuities give little information as to how to reach regions of higher values.

π barely changes: DDPG uses the deterministic policy gradient update, as seen in Equation (3). This is an analytical gradient that does not incorporate any stochasticity, because Q is always differentiated exactly at (s, π(s)). Thus the actor update is stalled, even when the reward is regularly found by the behaviour policy. This closes the loop of our process." }, { "heading": "4.4 CONSEQUENCES OF THE CONVERGENCE CYCLE", "text": "As illustrated with the red arrows in Figure 7, the more loops performed in the convergence process, the more the critic tends to be piecewise-constant and the less the actor tends to change. Importantly, this cyclic convergence process is triggered as soon as the changes to the policy drastically slow down or stop. What matters for the final performance is the quality of the policy reached before
What matters for the final performance is the quality of the policy reached before\nthis convergence loop is triggered. Quite obviously, if the loop is triggered before the policy gets consistently rewarded, the final performance is deemed to be poor.\nThe key of this undesirable convergence cycle lies in the use of the deterministic policy gradient update given in Equation (3). Actually, rewarded samples found by the exploratory behaviour policy β tend to be ignored by the conjunction of two reasons. First, the critic is updated using Q(s′, π(s′)) and not Q(s, β(s)), thus if π differs too much from β, the values brought by β are not properly propagated. Second, the actor being updated through (3), i.e. using the analytical gradient of the critic with respect to the actions of π, there is no room for considering other actions than that of π. Besides, the actor update involves only the state s of the sample taken from the replay buffer, and not the reward found from this sample r(s, a) or the action performed. For each sample state s, the actor update is intended to make π(s) converge to argmaxa π(s, a) but the experience of different actions performed for identical or similar states is only available through Q(s, ·), and in DDPG it is only exploited through the gradient of Q(s, ·) at π(s), so the process can easily get stuck in a local optimum, especially if the critic tends towards a piecewise-constant function, which as we have shown happens when the reward is sparse. Besides, since TD3 also updates the actor according to (3) and the critic according to (2), it is susceptible to the same failures as DDPG." }, { "heading": "4.5 IMPACT OF FUNCTION APPROXIMATION", "text": "We have just explained that when the actor has drifted to an incorrect policy before finding the reward, an undesirable convergence process should result in DDPG getting stuck to this policy. However, in 1D-TOY, we measured that the actor drifts to a policy moving to the right in 50% of cases, but the learning process only fails 1% of times. More generally, despite the issues discussed in this paper, DDPG has been shown to be efficient in many problems. This better-than-predicted success can be attributed to the impact of function approximation.\nFigure 8a shows a case in which the critic approximates Qπ while keeping a monotonous slope between the current policy value and the reward. In this case, the actor is correctly updated towards the reward (if it is close enough to the discontinuity). This is the most often observed case, and naturally we expect approximators to smooth out discontinuities in target functions in a monotonous way, which facilitates gradient ascent. However, the critic is updated not only in state-action pairs where Qπ(s, a) is positive, but also at points where Qπ(s, a) = 0, which means that the bottom part of the curve also tends to flatten. As this happens, we can imagine phenomena that are common when trying to approximate discontinuous functions, such as the overshoot observed in Figure 8b. In this case, the gradient prevents the actor from improving." 
}, { "heading": "5 POTENTIAL SOLUTIONS", "text": "In the previous section, we have shown that actor-critic algorithms such as DDPG and TD3 cannot recover from early convergence to a poor policy, due to the combination of three factors whose dependence is highlighted in Figure 7: the use of the deterministic policy gradient update, the use of Q(s′, π(s′)) in the critic update, and the attempt to address sparse rewards in deterministic environments. In this section, we categorize existing or potential solutions to the above issue in terms of which of the above factors they remove.

Avoiding sparse rewards: Transforming a sparse reward problem into a dense one can solve the above issue, as the critic should not converge to a piecewise-constant function anymore. This can be achieved for instance by using various forms of shaping (Konidaris & Barto, 2006) or by adding auxiliary tasks (Jaderberg et al., 2016; Riedmiller et al., 2018). We do not further investigate these solutions here, as they are mainly problem-dependent and may introduce bias when the reward transformation results in a deceptive gradient or modifies the corresponding optimal policy.

Replacing the policy-based critic update: As explained above, if some transition (s, a, s′) leading to a reward is found in the replay buffer, the critic update corresponding to this transition uses Q(s′, π(s′)), therefore not propagating the next-state value that the behaviour policy may have found. Of course, when using the gradient from the critic, the actor update should tend to update π to reflect the better policy such that π(s′) → a′, but the critic does not always provide an adequate gradient, as shown before.

If performing a maximum over a continuous action space were possible, using max_a Q(s′, a) instead of Q(s′, π(s′)) would solve the issue. Several works start from this insight. Some methods directly sample the action space and look for such an approximate maximum (Kalashnikov et al., 2018; Simmons-Edler et al., 2019). To show that this approach can fix the above issue, we applied it to the 1D-TOY environment. We take a straightforward implementation where the policy gradient update in DDPG is replaced by sampling 100 different actions, finding the argmax of Q(s, a) over these actions, and regressing the actor towards the best action found. We call the resulting algorithm DDPG-argmax, and more details are available in Appendix F.1. Results are shown in Figure 9a, in which we see that the success rate quickly reaches 100%.

Quite obviously, even if sampling can provide a good enough baseline for simple enough benchmarks, these methods do not scale well to large action spaces. Many improvements can be imagined by changing the way the action space is sampled, such as including π(s) in the samples to prevent picking a worse action than the one provided by the actor, sampling preferentially around π(s) or around π(s) + ε, or just using actions taken from the replay buffer.

Interestingly, using a stochastic actor such as in the Soft Actor-Critic (SAC) algorithm (Haarnoja et al., 2018a;b) can be considered as sampling preferentially around π(s) + ε, where ε is driven by the entropy regularization term. In Figure 9b, we show that SAC also immediately solves 1D-TOY.

Another approach relies on representing the critic as the V function rather than the Q function. In the same way that π(s) tends to approximate argmax_a Q(s, a), V tends to approximate max_a Q(s, a), and is updated when finding a transition that raises the value of a state. 
Using V, performing a maximum in the critic update is not necessary anymore. The prototypical actor-critic algorithm using a model of V as a critic is CACLA (Van Hasselt & Wiering, 2007). However, approximating V with neural networks can prove more unstable than approximating Q, as function approximation can be sensitive to the discontinuities resulting from the implicit maximization over Q values.

Replacing the deterministic policy gradient update: Instead of relying on the deterministic policy gradient update, one can rely on a stochastic policy to perform a different actor update. This is the case of SAC, as mentioned just above. Because SAC does not use Q(s′, π(s′)) in its update rule, it does not suffer from the undesirable convergence process described here.

Another solution consists in completely replacing the actor update mechanism, using regression to update π(s) towards any action better than the current one. This could be achieved by updating the actor and the critic simultaneously: when sampling a higher-than-expected critic value y_i > Q(s_i, a_i), one may update π(s_i) towards a_i using:

$$L_\psi = \sum_i \delta_{y_i > Q(s_i, \pi(s_i))} \big( \pi(s_i) - a_i \big)^2. \qquad (5)$$

This is similar to the behaviour of CACLA, as analyzed in Zimmer & Weng (2019).

Larger benchmarks Whether the deadlock situation investigated so far also occurs in more complex environments is an important question. To investigate this, we performed additional experiments based on more complex environments, namely sparse versions of REACHER-V2 and HALFCHEETAH-V2. Results are depicted in Figures 9c and 9d, and more details are presented in Appendix F.2. One can see that DDPG-argmax outperforms DDPG, which seems to indicate that the failure mode we are studying is also at play. However, with higher-dimensional and more complex environments, the analysis becomes more difficult, and other failure modes such as the ones related to the deadly triad, the extrapolation error or the over-estimation bias might come into play, so it becomes harder to quantitatively analyze the impact of the phenomenon we are focusing on. On the one hand, this point showcases the importance of using very elementary benchmarks in order to study the different failure modes in isolation. On the other hand, trying to sort out and quantify the impact of the different failure modes in more complex environments is our main objective for future work." }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "In RL, continuous action and sparse reward environments are challenging. In these environments, the fact that a good policy cannot be learned if exploration is not efficient enough to find the reward is well known and trivial. In this paper, we have established the less trivial fact that, if exploration does find the reward consistently but not early enough, an actor-critic algorithm can get stuck in a configuration from which rewarded samples are simply ignored. We have formally characterized the reasons for this situation and we have outlined existing and potential solutions. Beyond this, we believe our work sheds new light on the convergence regime of actor-critic algorithms.

Our study was mainly built on a simplistic benchmark which made it possible to study the revealed deadlock situation in isolation from other potential failure modes such as exploration issues, the over-estimation bias, extrapolation error or the deadly triad. The impact of this deadlock situation in more complex environments is a pressing question. 
For this, we need to sort out and quantify the impact of these different failure modes. Using new tools such as the ones provided in Ahmed et al. (2019), recent analyses of the deadly triad such as Achiam et al. (2019), as well as simple, easily visualized benchmarks and our own tools, we aim in future work to conduct a deeper and more exhaustive analysis of all the instability factors of DDPG-like algorithms, with the hope of contributing to fixing them." }, { "heading": "7 ACKNOWLEDGEMENTS", "text": "Anonymized for submission." }, { "heading": "A TWO REGIMES OF DDPG", "text": "In this section, we characterize the behavior of DDPG as an intermediate between two extremes, which we respectively call the critic-centric view, where the actor is updated faster, resulting in an algorithm close to Q-LEARNING, and the actor-centric view, where the critic is updated faster, resulting in a behaviour more similar to Policy Gradient.

A.1 ACTOR UPDATE: THE CRITIC-CENTRIC VIEW

The Q-LEARNING algorithm (Watkins, 1989) and its continuous state counterpart DQN (Mnih et al., 2013) rely on the computation of a policy which is greedy with respect to the current critic at every time step, as they simply take the maximum of the Q-values over a set of discrete actions. In continuous action settings, this amounts to an intractable optimization problem if the action space is large and nontrivial.

We get a simplified vision of DDPG by considering an extreme regime where the actor updates are both fast enough and good enough so that ∀s, π(s) ≈ argmax_a Q(s, a). We call this the critic-centric vision of DDPG, since the actor updates are assumed to be ideal and the only remaining training is performed on the critic.

In this regime, by replacing π(s) with argmax_a Q(s, a) in Equation (2), we get y_i = r_i + γ(1 − t_i) max_a Q(s_{i+1}, a), which corresponds to the update of the critic in Q-LEARNING and DQN. A key property of this regime is that, since the update is based on a maximum over actions, the resulting algorithm is truly off-policy. We can thus infer that most of the off-policy character of DDPG comes from keeping it close to this regime.

A.2 CRITIC UPDATE: THE ACTOR-CENTRIC VIEW

Symmetrically to the previous case, if the critic is updated well and faster than the actor, it tends to represent the critic of the current policy, Q^π. Furthermore, if the actor changes slowly enough, critic updates can be both fast and good enough for the critic to reach the fixed point of the Bellman equation, that is, ∀(s, a, r, t, s′), Q(s, a) = r + γ(1 − t) Q(s′, π(s′)). In this case, the optimization performed in (1) mostly consists in updating the actor so that it exploits the corresponding critic, by applying the deterministic policy gradient to the actor. This gives rise to an actor-centric vision of DDPG." }, { "heading": "B DEADLOCK IN 1D-TOY", "text": "In this section, we prove that there exists a state of DDPG that is a deadlock in the 1D-TOY environment. This proof directly references Figure 6. Let us define two functions Q and π_ψ such that:

$$\forall (s, a),\; Q(s, a) = \mathbb{1}_{s + a < 0} \qquad (6)$$

$$\forall s \in S,\; \pi_\psi(s) = 0.1 \qquad (7)$$

From now on, we will use the notation π := π_ψ.

Theorem 1. (Q, π_ψ) is a fixed point for the critic update.

Proof. The critic update is governed by Equation (2). Let (s_i, a_i, r_i, t_i, s′_i) be a sample from the replay buffer. 
The environment dictates that r_i = t_i = 1_{s_i + a_i < 0}. Then:

$$\begin{aligned} y_i &= r_i + \gamma (1 - t_i)\, Q\big(s'_i, \pi_\psi(s'_i)\big) \\ &= r_i + \gamma (1 - t_i)\, Q(s'_i, 0.1) && \text{by (7)} \\ &= r_i && \text{by (6)} \\ &= \mathbb{1}_{s_i + a_i < 0} \\ &= Q(s_i, a_i) && \text{by (6)}. \end{aligned}$$

Therefore, for each sample, y_i = Q(s_i, a_i), so L_θ is null and minimal. Therefore θ will not be updated during the critic update.

Theorem 2. (Q, π_ψ) is a fixed point for the actor update.

Proof. The actor update is governed by Equation (3). Let {(s_i, a_i, r_i, t_i, s′_i)} be a set of samples from the replay buffer. The environment dictates that ∀i, r_i = t_i = 1_{s_i + a_i < 0}. The update is

$$\psi \leftarrow \psi + \alpha \sum_i \frac{\partial \pi_\psi(s_i)}{\partial \psi}^{\!T} \nabla_a Q(s_i, a)\big|_{a = \pi_\psi(s_i)}.$$

Since Q(s_i, a) = 1_{s_i + a < 0}, we have ∇_a Q(s_i, a)|_{a = π_ψ(s_i)} = 0, so ψ will not be updated during the actor update.

In this section, we assumed that Q can be any function; however, in implementations, Q is often a parametric neural network Q_θ, which cannot be discontinuous. The effects of this approximation are discussed in Section 4.5.

C CONVERGENCE OF THE CRITIC TO Q^π

Notation. For a state-action pair (s, a), we define s_1 as the result of applying action a in state s in the deterministic environment. For a given policy π, we define a_1 as π(s_1). Recursively, for any i ≥ 1, we define s_{i+1} as the result of applying action a_i in state s_i, and a_i as π(s_i).

Definition 1. Let (s, a) ∈ S × A. If (s, a) is terminal, then we set N = 0. Otherwise, we set N to the number of subsequent transitions with policy π needed to reach a terminal state. Therefore, the transition (s_N, a_N) is always terminal. We generalize by setting N = ∞ when no terminal transition is ever reached.

We define the state-action value function of policy π as:

$$Q^\pi(s, a) := r(s, a) + \sum_{i=1}^{N} \gamma^i r(s_i, a_i).$$

Note that when N = ∞, the sum converges under the hypothesis that rewards are bounded and γ < 1.

If π is fixed, Q is updated regularly via approximate dynamic programming with the Bellman operator for the policy π. Under strong assumptions, or assuming exact dynamic programming, it is possible to prove (Geist & Pietquin, 2011) that the iterated application of this operator converges towards a unique function Q^π, which corresponds to the state-action value function of π as defined above. This is usually done by proving that the Bellman operator is a contraction mapping, and it also applies in deterministic cases.

However, when using approximators such as neural nets, no theoretical results of convergence exist, to the best of our knowledge. In this paper we assume that this convergence holds, and in the experimental results we did not observe any failure to converge towards Q^π. On the contrary, we observe that this convergence occurs, and that it can be what starts the deadlock cycle studied in Section 4.3.
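To make Definition 1 concrete, Q^π of a deterministic environment can be computed by simply rolling the dynamics forward. The sketch below assumes a hypothetical stateless interface `env.step(s, a) -> (s_next, r, terminal)`; this interface and the horizon cap are our assumptions, not part of the paper.

```python
# Sketch of Q^pi(s, a) = r(s, a) + sum_{i=1..N} gamma^i * r(s_i, a_i)
# for a deterministic environment and deterministic policy pi (Definition 1).
def q_pi(env, pi, s, a, gamma=0.99, horizon=10_000):
    total, discount = 0.0, 1.0
    for _ in range(horizon):                  # horizon truncates N = infinity
        s_next, r, terminal = env.step(s, a)  # assumed deterministic interface
        total += discount * r
        if terminal:                          # N reached: (s_N, a_N) is terminal
            return total
        discount *= gamma
        s, a = s_next, pi(s_next)             # a_i = pi(s_i)
    return total
```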
D PROOF THAT Q^π IS PIECEWISE-CONSTANT

In this section, we show that in deterministic environments with terminal sparse rewards, Q^π is piecewise-constant.

Definition 2. In this article, for I ⊂ ℝ^n, we say a function f : I → ℝ is piecewise-constant if ∀x_0 ∈ I, either ∇_x f(x)|_{x=x_0} = 0, or f has no gradient at x_0.

Theorem 3. In a deterministic environment with terminal sparse rewards, for any π, Q^π is piecewise-constant.

Proof. Note that this proof can be trivialized by assuming that around any point where the gradient is defined, there exists a neighbourhood in which the function is continuous. In this case, the intermediate value theorem yields an uncountable set of values of the function in this neighbourhood, which contradicts the countable number of possible discounted rewards.

The crux of the following proof is that even when no such neighbourhood exists, the gradient is either null or non-existent. This behavior is shown in Figure 10.

Using the notations of Definition 1 and the theorem hypothesis that rewarded transitions are also terminal, we can write Q^π(s, a) as:

$$Q^\pi(s, a) = \begin{cases} r(s, a) & \text{if } N = 0 \\ \gamma^N r(s_N, a_N) & \text{if } N \text{ is finite} \\ 0 & \text{otherwise.} \end{cases}$$

We promote N to a function S × A → ℕ ∪ {+∞}, and we define a function u : S × A → ℝ as given in Equation (8).

$$u(s, a) = \begin{cases} r(s, a) & \text{if } N = 0 \\ r\big(s_{N(s,a)}, a_{N(s,a)}\big) & \text{if } N > 0 \text{ finite} \\ 0 & \text{otherwise.} \end{cases} \qquad (8)$$

Now we have ∀(s, a) ∈ S × A, Q^π(s, a) = γ^{N(s,a)} u(s, a). Let R be the finite set of possible reward values.

Therefore the values of Q^π lie in the set M = {γ^n r | n ∈ ℕ, r ∈ R}. Let M⁺ = M ∩ ℝ⁺* be the set of positive values of M. Since R ⊂ ℝ is finite, we order all non-zero positive possible rewards in increasing order r_1, r_2, ..., r_k. Let M⁺_k = {γ^n r | n ∈ ℕ, r ∈ R_k}, where R_k = {r_1, ..., r_k}. We prove the following by recurrence over the number of possible non-zero rewards:

$$H(k) : \exists \nu_k > 0,\; \forall \delta > 0,\; \exists \text{ consecutive } b, a \in M^+_k,\; \delta \nu_k < a - b \text{ and } b < a < \delta.$$

Initialization. When k = 1, M⁺ = {r_1 γ^n | n ∈ ℕ}. Let ν = γ²/(1 − γ). Let δ > 0. Let n = ⌊log_γ(δ/r_1)⌋ + 1. We have:

$$\begin{aligned} \log_\gamma \tfrac{\delta}{r_1} - 1 &< n - 1 \leq \log_\gamma \tfrac{\delta}{r_1} \\ \log_\gamma \tfrac{\delta}{r_1} &< n < \log_\gamma \tfrac{\delta}{r_1} + \log_\gamma(1-\gamma) + 2 - \log_\gamma(1-\gamma) \\ \log_\gamma \tfrac{\delta}{r_1} &< n < \log_\gamma \tfrac{\delta(1-\gamma)}{r_1} + \log_\gamma \nu \\ \log_\gamma \tfrac{\delta}{r_1} &< n < \log_\gamma \tfrac{\delta\nu(1-\gamma)}{r_1} \\ \tfrac{\delta\nu(1-\gamma)}{r_1} &< \gamma^n < \tfrac{\delta}{r_1} \\ \delta\nu(1-\gamma)^2 &< r_1 \gamma^n (1-\gamma) < \delta(1-\gamma) \\ \delta\nu &< r_1 \gamma^n - r_1 \gamma^{n+1} \quad \text{and} \quad r_1 \gamma^n < \delta. \end{aligned}$$

Let a = r_1 γ^n ∈ M⁺ and b = r_1 γ^{n+1} ∈ M⁺. Then δν < a − b and b < a < δ, therefore H(1) is verified.

Recurrence. Let k ≥ 1, and assume H(k) is true. Let ν_k be the ν chosen for H(k). Let ν_{k+1} = ν_k / 2. Let δ > 0. Let b_k, a_k be a consecutive pair chosen in M⁺_k such that δν_k < a_k − b_k and b_k < a_k < δ. Since R_{k+1} contains only one more element than R_k, which is larger than all elements in R_k, we know that there is either one or zero elements c ∈ M⁺_{k+1} strictly between a_k and b_k. If a_k − c < c − b_k then let a_{k+1} = c and b_{k+1} = b_k, otherwise a_{k+1} = a_k and b_{k+1} = c. If a_k and b_k are still consecutive in M⁺_{k+1}, then a_{k+1} = a_k and b_{k+1} = b_k.

This guarantees that [b_{k+1}, a_{k+1}] is at least half as big as [b_k, a_k]. Therefore, (a_k − b_k)/2 < a_{k+1} − b_{k+1}, which means that δν_{k+1} < a_{k+1} − b_{k+1} and b_{k+1} < a_{k+1} < δ. Therefore H(k + 1) is verified. This also gives the general expression of ν, valid for all k: ν = (γ²/(1 − γ))^{|R|}.

Main proof. Using the result above, we prove that Q^π(s, a) cannot have any non-null derivatives.

Trivially, Q^π cannot have a non-null derivative at a point (s, a) where Q^π(s, a) = q_0 ≠ 0. Indeed, there exists a neighbourhood of q_0 ∈ M in which there is a single value. Let x_0 = (s, a) be such that Q(s, a) = 0. Let v be a vector of the space S × A. Let f : ℝ → ℝ be defined as f(h) = Q^π(x_0 + hv). In the following, we show that f(h)/|h| cannot converge to a non-null value when h → 0. We use the (ε, δ) definition of the limit. If f had a non-null derivative l at 0, we would have ∀ε > 0, ∃δ > 0, ∀h, |h| < δ ⟹ |f(h)/|h| − l| < ε. Instead, we will show the opposite: ∃ε > 0, ∀δ > 0, ∃h, |h| < δ and |f(h)/|h| − l| ≥ ε.

Using the candidate derivative l and the ν value computed above, which only depends on γ and |R|, we set ε = lν/2.

Let δ > 0. There exist consecutive b, a in M such that δlν ≤ a − b and b < a < δl. We set h = (a + b)/(2l). 
Note that (a + b)/2 < δl, therefore h < δ.

f(h) is in M, but hl is the center of the segment [b, a] of consecutive points of M. Therefore, the distance between f(h) and hl is at least (a − b)/2:

$$|f(h) - hl| \geq \frac{a - b}{2} \geq \frac{\delta l \nu}{2}.$$

Since h < δ, 1/h > 1/δ, hence:

$$\left| \frac{f(h)}{h} - l \right| \geq \frac{l\nu}{2} = \varepsilon.$$

E IMPLEMENTATION DETAILS

Here is the complete rollout and training algorithm, taken from the Spinup implementation of DDPG. (The termination flag of a transition is written done here, to avoid a clash with the loop counter t.)

Result: policy π_ψ, number of steps before success
  π_ψ, Q_θ ← Xavier uniform initializer
  env_steps ← 0
  for t ← 1 to 10000 do
      a ← π_ψ(s)
      if rand() < 0.1 then
          a ← rand(−0.1, 0.1)
      end
      Step the environment using action a, get a transition (s, a, r, done, s′)
      Store (s, a, r, done, s′) in the replay buffer
      env_steps ← env_steps + 1
      if done = 1 or env_steps > N then
          Reset the environment
          for k ← 1 to env_steps do
              Sample a mini-batch of size 100 from the replay buffer
              Train π_ψ and Q_θ on this mini-batch with losses (1) and (2)
          end
          env_steps ← 0
      end
      if t mod 1000 = 0 then
          if the last 20 episodes were successes then
              Terminate the algorithm and return success after t steps
          end
      end
  end" }, { "heading": "F PROPOSED SOLUTION TO THE DEADLOCK PROBLEM", "text": "F.1 DESCRIPTION OF DDPG-ARGMAX

In this paper, we identified a deadlock problem and tracked its origin to the actor update described in Equation (1). In Section 5, we proposed a new actor update mechanism in an algorithm called DDPG-argmax, which we describe in more detail here.

Instead of relying on the differentiation of Q_θ(s_i, π_ψ(s_i)) to update ψ in order to maximize Q(s, π(s)), we begin by selecting a set of N potential actions (b_j)_{0≤j<N}. Then, we compute Q_θ(s_i, b_j) for each sample s_i and each potential action b_j, and for each sample s_i we find the best potential action c_i = b_{argmax_j Q_θ(s_i, b_j)}. Finally, we regress π_ψ(s_i) towards the goal c_i. This process is summarized in Equation (9), where Unif(A) stands for uniform sampling in A; a code sketch of this update is given below.

$$\begin{cases} (b_j)_{0 \leq j < N} \sim \mathrm{Unif}(A) \\ c_i = b_{\arg\max_j Q_\theta(s_i, b_j)} \\ \text{minimize } \sum_i \big( \pi_\psi(s_i) - c_i \big)^2 \text{ w.r.t. } \psi \end{cases} \qquad (9)$$

F.2 EXPERIMENTS ON LARGER BENCHMARKS

In order to test the relevance of using DDPG-argmax on larger benchmarks, we constructed sparse reward versions of REACHER-V2 and HALFCHEETAH-V2.

REACHER-V2 was modified by generating a step reward of 1 when the distance between the arm and the target is less than 0.06, and 0 otherwise. The distance to the target was removed from the observations, and the target was fixed at position [0.19, 0], instead of changing at each episode. We also removed the control penalty.

HALFCHEETAH-V2 was modified by generating a step reward of 2 when the x component of the speed of the cheetah is more than 2. We also removed the control penalty. Since the maximum episode duration is 1000, the maximum possible reward in this modified environment is 2000.

In both cases, the actor noise uses the default of the Spinup implementation of DDPG, which is added uniform noise with an amplitude of 0.1.

Running DDPG and DDPG-argmax on these environments yields the results shown in Figures 9c and 9d. Experiments on HALFCHEETAH-V2 were conducted using six different seeds. In Figure 9d, the main curves are smoothed using a moving average covering 10 episodes (10k steps), and the shaded area represents the average plus or minus one standard deviation.

On HALFCHEETAH-V2, both DDPG and DDPG-argmax are able to find rewards despite their sparsity. However, DDPG-argmax outperforms DDPG in this environment. Since the only difference between these algorithms is the actor update, we conclude that even in complex environments, the actor update is the main weakness of DDPG. We have shown that replacing it with a brute-force update improves performance dramatically, and further research aiming to improve the performance of deterministic actor-critic algorithms in environments with sparse rewards should concentrate on improving the actor update rule.
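As a concrete illustration of Equation (9), here is a minimal sketch of the DDPG-argmax actor update, assuming PyTorch; the critic signature `critic(s, a)`, the symmetric action bound and the uniform candidate sampling are assumptions of this sketch:

```python
# DDPG-argmax actor update of Equation (9): sample n candidate actions,
# evaluate the critic on every (state, candidate) pair, take the per-state
# argmax, and regress the actor towards that best action.
import torch

def ddpg_argmax_actor_update(actor, critic, actor_opt, states, n=100, a_max=1.0):
    b = states.shape[0]
    d = actor(states).shape[1]
    cand = (torch.rand(n, d) * 2 - 1) * a_max          # (b_j) ~ Unif(A)
    q = critic(states.repeat_interleave(n, dim=0),     # Q(s_i, b_j) for all i, j
               cand.repeat(b, 1)).view(b, n)
    best = cand[q.argmax(dim=1)]                       # c_i per state
    loss = ((actor(states) - best) ** 2).sum()         # regress pi(s_i) -> c_i
    actor_opt.zero_grad(); loss.backward(); actor_opt.step()
```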
Figure 9d shows that DDPG is able to find the reward without the help of any exploration except the uniform noise built into the algorithm itself. However, to prove that state-space exploration is not the issue here, we constructed a variant in which the current actor is backed up and replaced with a pre-trained good actor every 20 episodes. This variant achieves episode returns above 1950 (as a reminder, the maximum episode return is 2000). In the next episode, the backed-up policy is restored. This guarantees that the replay buffer always contains all the transitions necessary to learn a good policy. We call this technique priming.

Results of this variant are presented in Figure 11. Notice that DDPG performs much better than without priming, but the performance of DDPG-argmax is unchanged. However, DDPG still fails to completely solve the environment, proving that even when state-space exploration is made trivial, DDPG underperforms on sparse-reward environments due to its poor actor update." } ]
2019
THE PROBLEM WITH DDPG: UNDERSTANDING FAILURES IN DETERMINISTIC ENVIRONMENTS WITH SPARSE REWARDS
SP:902b1484ef76a82c7a43a9eac6e65c5e08f8345a
[ "The authors introduce ReSWAT, a method for transformation-resilient watermarking of images via adversarial training. The high-level idea is to learn a watermark/detector pair (W, D). W can be any transformation (in this paper, an l-infinity-bounded perturbation) that imparts an imperceptible distortion to a given input, while D is a detector that distinguishes watermarked from non-watermarked images. There is an additional requirement that the detector should be robust to simple transformations such as rotations, cropping, flipping, and contrast enhancement.", "This paper is about a novel method, closely related to GAN methods, for adding watermarks to images and audio that are highly robust to several transformations. The idea is that the watermark signal is learned concurrently with the detector network, which shares similarities with a generator/discriminator pair. Five standard attack transformations are considered, and a specific optimization to reduce the transferability of the watermark is considered as well. The method is compared against Broken Arrows on CIFAR-10 and ImageNet. It shows similar or better performance for the Gaussian noise attack, given the same amount of perturbation allowed in a signal, and much better performance for the other attacks. Going beyond the five attacks, the paper also includes an estimate of the confidence with which watermarked images can be detected within a fixed l2-norm radius. Finally, the method is also tested on audio, on a proprietary dataset with a Deep Speaker architecture, where it still shows very good performance; this is confirmed by a human evaluation in which participants did not find the watermarked audio significantly worse or degraded." ]
Advancements in deep generative models have made it possible to synthesize images, videos and audio signals that are hard to distinguish from natural signals, creating opportunities for potential abuse of these capabilities. This motivates the problem of tracking the provenance of signals, i.e., being able to determine the original source of a signal. Watermarking the signal at the time of signal creation is a potential solution, but current techniques are brittle and watermark detection mechanisms can easily be bypassed by post-processing (cropping images, shifting pitch in audio, etc.). In this paper, we introduce ReSWAT (Resilient Signal Watermarking via Adversarial Training), a framework for learning transformation-resilient watermark detectors that are able to detect a watermark even after a signal has been through several post-processing transformations. Our detection method can be applied to domains with continuous data representations such as images, videos or sound signals. Experiments on watermarking image and audio signals show that our method can reliably detect the provenance of a synthetic signal, even if the signal has been through several post-processing transformations, and improves upon related work in this setting. Furthermore, we show that for specific kinds of transformations (perturbations bounded in the ℓ2 norm), we can even obtain formal guarantees on the ability of our model to detect the watermark. We provide qualitative examples of watermarked image and audio samples in the anonymous code submission link.
[]
[ { "authors": [ "Anish Athalye", "Logan Engstrom", "Andrew Ilyas", "Kevin Kwok" ], "title": "Synthesizing robust adversarial examples", "venue": "arXiv preprint arXiv:1707.07397,", "year": 2017 }, { "authors": [ "Jihane Bennour", "J-L Dugelay", "Federico Matta" ], "title": "Watermarking attack: Bows contest", "venue": "In Security, Steganography, and Watermarking of Multimedia Contents IX,", "year": 2007 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "Robert Chesney", "Danielle Keats Citron" ], "title": "Deep fakes: A looming challenge for privacy, democracy, and national security", "venue": null, "year": 2018 }, { "authors": [ "Jeremy M Cohen", "Elan Rosenfeld", "J Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "arXiv preprint arXiv:1902.02918,", "year": 2019 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale adversarial representation learning", "venue": "arXiv preprint arXiv:1907.02544,", "year": 2019 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Nal Kalchbrenner", "Erich Elsen", "Karen Simonyan", "Seb Noury", "Norman Casagrande", "Edward Lockhart", "Florian Stimberg", "Aaron van den Oord", "Sander Dieleman", "Koray Kavukcuoglu" ], "title": "Efficient neural audio synthesis", "venue": "arXiv preprint arXiv:1802.08435,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Chao Li", "Xiaokong Ma", "Bing Jiang", "Xiangang Li", "Xuewei Zhang", "Xiao Liu", "Ying Cao", "Ajay Kannan", "Zhenyao Zhu" ], "title": "Deep speaker: an end-to-end neural speaker embedding system", "venue": "arXiv preprint arXiv:1705.02304,", "year": 2017 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Francesco Marra", "Diego Gragnaniello", "Luisa Verdoliva", "Giovanni Poggi" ], "title": "Do gans leave artificial fingerprints", "venue": "arXiv preprint arXiv:1812.11842,", "year": 2018 }, { "authors": [ "Soroush Mehri", "Kundan Kumar", "Ishaan Gulrajani", "Rithesh Kumar", "Shubham Jain", "Jose Sotelo", "Aaron Courville", "Yoshua Bengio" ], "title": "Samplernn: An unconditional end-to-end neural audio generation model", "venue": "arXiv preprint arXiv:1612.07837,", "year": 2016 }, { "authors": [ "Lakshmanan Nataraj", "Tajuddin Manhar Mohammed", "BS Manjunath", "Shivkumar Chandrasekaran", "Arjuna Flenner", "Jawadul H Bappy", "Amit K Roy-Chowdhury" ], 
"title": "Detecting gan generated fake images using co-occurrence matrices", "venue": null, "year": 1903 }, { "authors": [ "Aaron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": "arXiv preprint arXiv:1609.03499,", "year": 2016 }, { "authors": [ "Ryan Prenger", "Rafael Valle", "Bryan Catanzaro" ], "title": "Waveglow: A flow-based generative network for speech synthesis", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Robert C Streijl", "Stefan Winkler", "David S Hands" ], "title": "Mean opinion score (mos) revisited: methods and applications, limitations and alternatives", "venue": "Multimedia Systems,", "year": 2016 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Instance normalization: The missing ingredient for fast stylization", "venue": "arXiv preprint arXiv:1607.08022,", "year": 2016 }, { "authors": [ "Zhou Wang", "Alan C Bovik", "Hamid R Sheikh", "Eero P Simoncelli" ], "title": "Image quality assessment: from error visibility to structural similarity", "venue": "IEEE transactions on image processing,", "year": 2004 }, { "authors": [ "Ning Yu", "Larry Davis", "Mario Fritz" ], "title": "Attributing fake images to gans: Analyzing fingerprints in generated images", "venue": "arXiv preprint arXiv:1811.08180,", "year": 2018 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Generative models have contributed to impressive advancements in content generation and representation learning in both digital image and audio domains (Brock et al. (2018); Kalchbrenner et al. (2018); Mehri et al. (2016); Zhu et al. (2017); Prenger et al. (2019); Donahue & Simonyan (2019); Oord et al. (2016); Goodfellow et al. (2014); Kingma & Welling (2013)). However, as generative models learn to better match a target distribution, the distinction between natural signals and synthetic signals generated by a model has blurred, leading to a raft of concerns over the potential misuse of these models (Chesney & Citron (2018)). For example, synthetic videos that are indistinguishable from natural videos, sometimes referred to as deep fakes, have the potential to cause widespread distrust in traditional media.\nWhile the problem of detecting synthetic signals is interesting in its own right, it is challenging to do in a manner that is independent of the model used to generate the signal. Instead, we consider the problem of provenance detection via watermarking, a technique that involves injection of a carefully chosen (but imperceptible) perturbation (watermark) into the signal at the time of creation. The presence of the perturbation in the signal can be later used to detect the provenance (or ultimate source) of this signal. While the simplicity of the technique makes watermarking an appealing technique for detecting synthetic (as well as naturally generated) signals, it is susceptible to adversarial actors that can systematically try to remove the watermark by transforming the signal (so as to obfuscate the provenance). In the case of images, this may take the form of cropping, addition of Gaussian noise or blurring, rotating images etc. Many watermarking schemes break down under these types of transformations.\nIn this paper, we propose a novel transformation-resilient watermarking scheme- Resilient Signal Watermarking via Adversarial Training (ReSWAT ) that is able to detect the presence of a watermark\neven after the signal has been through systematic attempts to remove the watermark. We use ideas from the literature on adversarial training (Madry et al., 2017) to learn to synthesize watermarks (encoding the provenance of a signal) that can be detected even after the watermarked signal has been through transformations that seek to remove the watermark. Our detection method can operate in any domain with continuous data representations. Experimental results demonstrate that our method is effective and substantially improves over existing methods - it can reliably detect the provenance of a synthetic signal even if the signal has been deformed or manipulated." }, { "heading": "1.1 CONTRIBUTIONS", "text": "1 We formulate the transformation resilient watermarking problem and show how it can be reduced to an empirical risk minimization problem with a minimax loss function.\n2 We develop a transformation resilient watermarking scheme, named ReSWAT (Resilient Signal Watermarking via Adversarial Training) and show that the learning procedure is closely related to adversarial training techniques (Madry et al., 2017; Athalye et al., 2017).\n3 Empirically, we show across a set of image and audio datasets that our scheme can produce imperceptible watermarks that can be detected even after the signal has been through a series of adversarial transformations that preserve fidelity to the original signal. 
We do this using both standard metrics for signal watermarking and via human evaluation on audio signals. Furthermore, we provide formal guarantees of detection when an adversarial transformation has a bounded ℓ2 norm." }, { "heading": "1.2 RELATED WORK", "text": "The problem of detecting synthetic signals has recently received a significant amount of attention. Marra et al. (2018) investigate how a generative adversarial network (GAN) (Goodfellow et al., 2014) leaves an identifiable fingerprint in the images it generates, while Nataraj et al. (2019) detect whether an image was generated by a GAN by training a detection model on co-occurrence matrices of the RGB channels of images. Similarly, Yu et al. (2018) also train a model that learns to detect if an image was generated by a GAN, and attribute a generated image to its source model. Unfortunately, these detection methods suffer from some limitations: (1) they are specifically designed to operate only in the image domain, and (2) they are not designed to be resilient to transformations that an attacker could apply.
Digital watermarking consists of a two-stage process: first, an embedding stage, in which the original signal is combined with a hidden message (also referred to as the watermark), producing a watermarked version of the signal; second, a decoding stage, in which the hidden message is retrieved from the watermarked signal. Watermarking schemes are typically evaluated on the fidelity of the watermarked signal to the original signal, the resilience of the watermark to various post-processing transformations, and the false positive rate (i.e., whether the decoder detects watermarks in non-watermarked signals). Robustness of watermarking techniques is measured against a number of practical attacks that aim to remove or degrade the watermark signal. For example, adding Gaussian noise, image cropping, and image compression represent watermark degrading, watermark removal, and watermark geometric (repositioning) attacks, respectively. In this paper, we are primarily interested in zero-bit watermarking, where one is simply interested in detecting the presence of a watermark (rather than decoding a hidden message from the watermark). The state-of-the-art zero-bit watermarking scheme in the image domain is referred to as Broken Arrows (BA) (Furon & Bas, 2008) and won the "Break Our Watermarking System" (BOWS) competition (Bennour et al., 2007). BA has a provable minimum false positive rate under Gaussian noise; however, the scheme does not provide robustness against geometric attacks (like rotations and cropping)." }, { "heading": "2 FORMULATION OF THE ROBUST WATERMARKING PROBLEM", "text": "We study signals that live in a space X and are generated by a probabilistic source Ps; our framework applies regardless of whether Ps is a natural source (images of natural scenes) or an artificial source (samples from a VAE or a GAN). We are interested in developing a scheme to watermark signals generated by this source, with the requirement that:
1. The detector should detect a watermark in any watermarked signal from source Ps and not detect a watermark in any other signal.
2. Even if post-processing transformations (for example, compression/cropping/frequency shift, etc.) are applied to the signal, the detector detects the watermark, or the absence thereof.
Formally, we define the robust watermarking problem as follows:
Definition 2.1 (Transformation resilient watermarking scheme).
Consider a watermarking scheme (W, D) where W : X → X is a watermarking routine and D : X → {0, 1} is a watermark detector. Let T be a space of transformations with T : X → X for each T ∈ T. We consider the watermarking scheme resilient with respect to the set of transformations T for a source Ps if D(T(W(s))) = 1 and D(T(s)) = 0 for all T ∈ T, with high probability for s ∼ Ps.
This formulation suggests an empirical risk minimization approach for training D. Given a fixed W, we can train D to minimize the empirical risk
\mathbb{E}_{s \sim P_s}\Big[ \max_{T \in \mathcal{T}} \ell\big(D(T(W(s))),\, 1\big) + \ell\big(D(T(s)),\, 0\big) \Big]
where ℓ is a loss function measuring the discrepancy between the prediction of D and the desired label (0 for non-watermarked signals and 1 for watermarked signals).
The inner maximization over transformations resembles the objective used in adversarial training (Madry et al., 2017), motivating our approach to learning a transformation resilient watermarking scheme. We develop this idea in the following section." }, { "heading": "3 RESWAT: RESILIENT WATERMARKING VIA ADVERSARIAL TRAINING", "text": "We parameterize the detector D as a neural network fθ, parameterized by θ. If we fix the detector, the watermark embedding process W should embed a watermark that is "strongly detected" by the detector, i.e., the watermark embedding mechanism pushes the watermark deep inside the decision boundary of D. However, we also want the watermark to be imperceptible, so we require that the watermark does not significantly change the original signal according to some distance measure. While it is challenging to define the space of imperceptible distortions of the input, a convenient proxy is to limit the change in terms of the ℓ∞ norm between the watermarked and the original signal. Thus, we construct the watermark, δ, by solving the following optimization problem:
\delta = \arg\min_{\|\delta\|_\infty \le \epsilon} \ell\big(f_\theta(s + \delta),\, 1\big), \qquad W(s) = s + \delta
This optimization problem can be solved efficiently using a projected gradient descent (PGD) method, and is mathematically very similar to computing an adversarial example for a neural network (however, in this case, the adversary is friendly and actually pushes the example to minimize the loss of the detector). The parameter ε controls the perceptibility of the watermark. If ε is too large, the watermark will be clearly perceptible, while if it is too small, the watermark may easily be washed out by post-processing steps. Thus, the right choice of ε achieves the optimal trade-off between perceptibility and transformation resilient detection. Note that we use the ℓ∞ norm of the perturbation instead of another norm since this distributes the watermark over the entire signal, and so will be resilient to geometric attacks (that black out patches of the input, or obfuscate inputs).
With this watermark embedding method, we can now train the detector to perform transformation resilient watermark detection as follows:
\min_{\theta,\ \|\delta\|_\infty \le \epsilon}\ \mathbb{E}_{s \sim P_s}\Big[ \max_{T \in \mathcal{T}} \ell\big(f_\theta(T(s + \delta)),\, 1\big) + \ell\big(f_\theta(T(s)),\, 0\big) \Big] \quad (1)
This is similar to the expectation over transformation attack (Athalye et al. (2017)) that aims to construct adversarial examples that persist through a set of transformations.
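For concreteness, a minimal PyTorch sketch of the PGD-based embedding step follows; the binary-logit detector interface, the step size heuristic, and the use of binary cross-entropy are assumptions made for illustration, not the paper's released code.

import torch
import torch.nn.functional as F

def embed_watermark(detector, s, eps=10/255, steps=5):
    # detector: network mapping a batch of signals to one "watermarked" logit per sample
    # s: batch of signals in [0, 1]; eps: the l_inf budget (epsilon above)
    step_size = 2.5 * eps / steps  # heuristic PGD step size (an assumption)
    delta = torch.zeros_like(s, requires_grad=True)
    target = torch.ones(s.size(0), device=s.device)
    for _ in range(steps):
        loss = F.binary_cross_entropy_with_logits(detector(s + delta).squeeze(1), target)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= step_size * grad.sign()          # descend: push toward "watermarked"
            delta.clamp_(-eps, eps)                   # project onto the l_inf ball
            delta.copy_((s + delta).clamp(0, 1) - s)  # keep the watermarked signal in [0, 1]
    return (s + delta).detach()

Note the sign of the update: unlike an adversarial attack, the perturbation descends the detector's loss toward the watermarked label.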
During training, we only sample from differentiable transformations, so that we do not have to approximate the gradient, which would be prohibitively slow in training.
In practice, at each iteration of training we approximate the maximization over T with a maximization over a finite sample {T1, T2, ..., Tn}, where Ti ∈ T, i ∈ {1, ..., n}. Theoretically, if we sample more transformations we achieve a better approximation of the true gradient over the full transformation distribution; empirically, however, we found the resulting increase in training time too expensive to justify sampling many transformations in parallel. We explore how resilience to transformations is affected by the number of sampled transformations at each step of training in appendix C. All following experimental results are taken from a model trained with a sample size of one at each iteration of training." }, { "heading": "3.1 OPTIMIZING FOR NON-TRANSFERABLE WATERMARKS", "text": "Traditional watermarking schemes are based on a secret key that ensures that only parties with access to the key can create watermarked content; an attacker that can watermark content without access to the key represents an integrity attack on the scheme. We refer to this kind of attack as a specificity attack that attempts to introduce false positives. In our case, the classifier detecting/constructing the watermark can be thought of as an approximation of a secret key.
We now propose a method to construct watermark perturbations for a given watermark classifier, fθ, which do not transfer to other watermark classifiers trained on the same data.
Recall that, given a signal s, to create a watermark perturbation we use PGD, which takes steps in the direction of
-\nabla_s \max_{T \in \mathcal{T}} \big[ \ell\big(f_\theta(T(s)),\, 1\big) \big] \quad (2)
To encourage that the generated perturbation does not transfer to other watermark classifiers trained on the same data, we generate a watermark that is robust against an ensemble of models. Let {f^i_θ : X → {0, 1}, i ∈ {1, ..., n}} be an ensemble of watermark classifiers trained on the same problem as fθ. To limit the transferability of watermarks constructed with fθ, we create watermark perturbations by stepping in the direction of
-\Big( \nabla_s \max_{T \in \mathcal{T}} \big[ \ell\big(f_\theta(T(s)),\, 1\big) \big] + \nabla_s \max_{i,\, T \in \mathcal{T}} \big[ \ell\big(f^i_\theta(T(s)),\, 0\big) \big] \Big) \quad (3)
Taking steps in the direction of this loss will increase the probability that watermark perturbations are classified as watermarked only by fθ and do not transfer to other models." }, { "heading": "4 EVALUATION", "text": "Here, we give experimental results showing that our scheme is robust to a variety of attacks. We first evaluate the scheme on image data in section 4.1 and section 4.2, and then on audio data in section 4.3." }, { "heading": "4.1 EVALUATION PROTOCOL ON IMAGES", "text": "During training we sample from a set of transformations to which the scheme should be robust. In this evaluation we use the transformations detailed in table 1. We evaluate watermark detection robustness in the image space on both Cifar10 (Krizhevsky et al.) and ImageNet (Deng et al., 2009). We defer the evaluation of Cifar10 to appendix A and detail only the ImageNet experiments here.
ImageNet. We train a standard ResNet152 classifier (He et al. (2016)), modified for binary prediction, for the watermark detector. We replace batch normalization with instance normalization (Ulyanov et al. (2016)), as we noticed that batch statistics were heavily skewed by the transformations applied during training. We train with a mini-batch size of 32 for 60,000 steps, with an initial learning rate of 0.1, decaying this by a factor of 10 every 20,000 steps.
All images are normalized into the range [0, 1], and a batch of watermarked images is constructed with five PGD steps at each iteration. With respect to the set of transformations, during training, we set σ to 0.25, r to π/2, ch and cw to 10, and b to 0.1.
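As an illustration of what such a set of differentiable transformations might look like in PyTorch, a hypothetical sampler is sketched below (Gaussian noise with scale σ, brightness shifts bounded by b, and rotation up to r via a sampling grid); cropping with window c_h × c_w is omitted for brevity, and the exact parameterization is an assumption, not the paper's implementation.

import math, random
import torch
import torch.nn.functional as F

def gaussian_noise(x, sigma=0.25):
    return (x + sigma * torch.randn_like(x)).clamp(0, 1)

def brightness(x, b=0.1):
    return (x + (2 * torch.rand(1).item() - 1) * b).clamp(0, 1)

def rotation(x, r=math.pi / 2):
    angle = (2 * torch.rand(1).item() - 1) * r
    theta = torch.tensor([[math.cos(angle), -math.sin(angle), 0.0],
                          [math.sin(angle),  math.cos(angle), 0.0]], device=x.device)
    grid = F.affine_grid(theta.repeat(x.size(0), 1, 1), x.size(), align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)  # differentiable in x

def sample_transformation():
    return random.choice([gaussian_noise, brightness, rotation])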
We trained four classifiers under these hyperparameters: two where at each step we randomly sample a single transformation and watermark with ε = 5/255 and ε = 10/255, referred to as f^5 and f^10, respectively, and two where we apply a composition of all transformations and watermark with ε = 5/255 and ε = 10/255, referred to as f^5_comp and f^10_comp, respectively.
All watermark detection classifiers achieved 100% test set accuracy, where the test set consists of 10,000 non-watermarked and 10,000 watermarked ImageNet test set images. A detailed comparison of the differences between these models is given in appendix A; however, we found that models trained under a composition of transformations are more robust to all attacks, and so the remaining experiments use only f^5_comp and f^10_comp. For evaluation, we measure the success of an attack with respect to the distortion introduced by the attack, measured by the structural similarity (SSIM) score (Wang et al. (2004))." }, { "heading": "4.2 EVALUATING PERFORMANCE AGAINST ATTACKS", "text": "We now describe experiments measuring the quality of our watermarking scheme under various transformations seeking to induce a misclassification in the watermark detector: we study both attacks that seek to remove a watermark from given watermarked signals (thus inducing a false negative for the detector) and attacks that seek to make the detector detect a watermark in a non-watermarked image (thus inducing a false positive)." }, { "heading": "4.2.1 SIGNAL TRANSFORMATION ATTACKS (FALSE NEGATIVES)", "text": "We investigate the trade-off in distortion of the watermarked image when we require perfect detection under various transformations. Given a watermarked signal s, we sample from a space of transformations to create n copies of this input under the sampled transformations, referred to as si, i ∈ {1, ..., n}. We increase the watermark value ε until the accuracy of the detection model on si, i ∈ {1, ..., n}, is >99%. We measure the perceptibility of the watermark and the perceptibility of the transformation used in the attack, both in terms of SSIM, and compare with Broken Arrows (BA), where for each transformation we set the number of random samples, n, to 1 million.
Figure 1 shows that for various distributions of transformations the amount of distortion introduced by the watermarking scheme is dominated by the amount of distortion introduced by the transformation. For Gaussian noise, the distortion introduced by the watermark (for perfect detection under a transformation) using ReSWAT is approximately equivalent to BA and substantially smaller than the distortion introduced by the transformation. For cropping and rotation attacks, our watermarks incur negligible levels of distortion, while BA fails to watermark content without incurring large distortions. We show qualitative examples in fig. 2 for a Gaussian noise attack, and analogous plots of fig. 1 using Peak Signal-to-Noise Ratio as the distortion metric in appendix B." }, { "heading": "4.2.2 SPECIFICITY ATTACKS (FALSE POSITIVES)", "text": "Using the ImageNet dataset, we trained a model with the same hyperparameters as f^5_comp; however, this model was optimized by constructing watermarks using eq. (3) instead of eq. (2), using five other pre-trained watermark classifiers; we denote this model by f̂^5_comp.
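A small PyTorch sketch of the eq. (3) update direction used to train such a model follows; the single-logit interface and the use of one sampled transformation in place of the inner maximization over T are illustrative assumptions.

import torch
import torch.nn.functional as F

def nontransfer_direction(detector, ensemble, s, T):
    # detector: f_theta; ensemble: list of classifiers f^i_theta; T: a sampled transformation
    s = s.clone().requires_grad_(True)
    ones = torch.ones(s.size(0), device=s.device)
    zeros = torch.zeros_like(ones)
    own = F.binary_cross_entropy_with_logits(detector(T(s)).squeeze(1), ones)
    others = torch.stack([F.binary_cross_entropy_with_logits(f(T(s)).squeeze(1), zeros)
                          for f in ensemble]).max()  # approximate the max over i
    grad, = torch.autograd.grad(own + others, s)
    return -grad  # step direction of eq. (3)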
Given 20 pre-trained watermark classifiers differing from f^5_comp only by the random initialization of weights, we compare how well 1,000 watermarks constructed using each of these models transfer to f^5_comp and f̂^5_comp, representing an attack that attempts to introduce false positives. Figure 3 shows the difference in average false positive rate between f^5_comp and f̂^5_comp. At a watermark SSIM score of 0.60, f̂^5_comp has a false positive rate below 20%, while the false positive rate on these inputs is nearly 50% for f^5_comp. Clearly, optimizing eq. (3) improves resilience to specificity attacks." }, { "heading": "4.2.3 CERTIFIED ROBUSTNESS", "text": "The previous sets of experiments present results on the robustness of the watermarking scheme against "best effort" attacks, i.e., we attempt to compute attacks that break the watermarking system and declare success if we fail to do so. However, it is possible that our attack algorithm failed to find the worst-case transformation that would break the watermark detection. Thus, it is desirable to have a stronger guarantee against all attacks within a certain transformation space.
While provable guarantees are hard to obtain in general, we can leverage recent work on randomized smoothing techniques (Cohen et al., 2019) that are able to obtain certified guarantees against transformations constrained in the ℓ2 norm, i.e., the adversary can transform the signal by any amount within a given ℓ2 distance. On 200 examples from the ImageNet test set that are watermarked with ε = 20/255, we estimate a lower bound, p, on the probability of the most-likely class under Gaussian noise parameterized by N(0, σ²I), using 10,000 random samples. Given p, the detector is robust to adversarial perturbations γ if ‖γ‖2 < σΦ⁻¹(p) (as proven in Cohen et al. (2019)), up to a confidence level α, which we set to 0.99. Figure 4 shows the certified accuracy of these examples as a function of the certified radius in the ℓ2 norm. For ε = 20/255, nearly all 200 watermarked images are robust to any perturbation with an ℓ2 norm smaller than 1.5. We also include results on a non-robust watermark detection model, i.e., a model trained without sampling from a distribution of transformations. This model has a comparatively small certified region of robustness, implying that training on a distribution of transformations does improve robustness.
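A hedged sketch of this certification check, following the randomized smoothing recipe of Cohen et al. (2019), is given below; the use of scipy for the normal quantile, statsmodels for the Clopper-Pearson lower bound, and the chunk size are assumptions for illustration.

import torch
from scipy.stats import norm
from statsmodels.stats.proportion import proportion_confint

def certified_radius(detector, x, sigma, n=10000, alpha=0.01, chunk=500):
    # x: a single watermarked image (C, H, W); returns an l_2 radius, or 0.0 on abstain
    votes = 0
    with torch.no_grad():
        for i in range(0, n, chunk):  # chunk the Monte Carlo samples to bound memory
            m = min(chunk, n - i)
            noisy = x.unsqueeze(0).repeat(m, 1, 1, 1) + sigma * torch.randn(m, *x.shape)
            votes += (detector(noisy).squeeze(1) > 0).sum().item()  # "watermarked" votes
    p = proportion_confint(votes, n, alpha=2 * alpha, method="beta")[0]  # lower bound on p
    return sigma * norm.ppf(p) if p > 0.5 else 0.0  # certified: ||gamma||_2 < sigma * Phi^{-1}(p)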
}, { "heading": "4.3 EVALUATION ON TEXT-TO-SPEECH DATASET", "text": "To evaluate ReSWAT on audio data, we train a watermark detection model using a proprietary dataset composed of hours of high quality speech data, where each audio sample is a short speech utterance lasting between 1 and 10 seconds. We use a DeepSpeaker architecture (Li et al., 2017), modified for binary prediction. The pre-processing stage takes as input, a variable length waveform, normalizes values between -0.5 and 0.5, and outputs the fixed length mel-spectrogram which is then used as input to the model.\nWe train the detector model for 100,000 steps, with an exponentially decaying learning rate initialized at 0.001 and decayed by a factor of 0.9 every 1000 steps. The watermarking value is initialized at 0.04 and decreased by 0.00001 whenever the detection rate is 100% based on a moving average of the previous 100 steps. We use five PGD steps at each iteration to create watermarked inputs. During training we sample from a set of transformations, described in table 2, to which the scheme should be robust.\nAt test time we achieve perfect detection rate on 200 watermarked audio samples using = 4 × 10−4. Similarly to section 4.2.1, we measure the robustness of watermarked audio samples with respect to the amount of distortion introduced by transformations, under the requirement that detection accuracy is >99%. For a given transformation and input, we randomly sample the input 100 times under the transformation, this creates a test set of 20,000 data points on which we measure the accuracy of the detector. Table 2 shows the results of attacking watermarked inputs for different values using various transformations. Watermarks at = 4 × 10−4 are almost imperceptible, while a faint background noise can be heard for = 4.8 × 10−3. The detection model is robust to a large number of transformations, for example, 78% of audio can be obscured without decreasing detection accuracy.\nHuman evaluation.\nHere, we conduct human evaluation of watermark perceptibility using both mean opinion score (MOS) and A/B tests. MOS (Streijl et al., 2016) is a commonly used measure for audio quality; it is expressed as a single rational number, typically in the range 1–5,\nwhere 1 is lowest perceived quality, and 5 is the highest perceived quality. The MOS is calculated as the arithmetic mean over single ratings performed by human subjects. Due to the large number of audio samples used in the study, we use a single rating per audio sample. Our dataset consisted of 2000 watermarked and non-watermarked audio samples, that were all correctly classified by the water-\nmark detector model under transformations listed in table 2. The average rating of non-watermarked content was 4.595±0.576, and the average rating of watermarked content (at = 4 × 10−4) was 4.530±0.530. Clearly, human participants did not perceive the watermarked audio as significantly worse or degraded.\nFor A/B tests we took a subset of 660 watermarked and non-watermarked audio samples. We played both the non-watermarked and watermarked audio sample to participants and asked if the watermarked audio sample was worse or better than the original. Results are shown in fig. 5; over 80% of human participants rated the quality of watermarked audio samples as the same as non-watermarked audio samples." }, { "heading": "5 CONCLUSION", "text": "We presented a general solution, that leverages imperceptible watermark, to the problem of detecting the provenance of a signal. 
" }, { "heading": "5 CONCLUSION", "text": "We presented a general solution, leveraging imperceptible watermarks, to the problem of detecting the provenance of a signal. In a departure from related work, our watermarking scheme attempts to learn constructions of watermarks that are resilient to adversarial transformations. Our solution can be applied in numerous domains, such as images, audio, and video. Results presented on both images and audio suggest that it is possible to construct watermarks that simultaneously maintain a high level of signal fidelity, are resilient to adversarial transformations, and incur a minimal false positive rate." }, { "heading": "A EVALUATION ON CIFAR10 & FURTHER EVALUATION ON IMAGENET AND TEXT-TO-SPEECH DATA", "text": "Cifar10 (Krizhevsky et al.). We use a wide ResNet classifier (He et al. (2016)) for the watermark detector. We replace batch normalization with instance normalization (Ulyanov et al. (2016)). We trained with a mini-batch size of 32 for 60,000 steps, with an initial learning rate of 0.01, decaying this by a factor of 10 every 20,000 steps. All images are normalized into the range [0, 1]. We set the maximum size of the watermark perturbation, ε, to 20/255, and the watermark perturbation is constructed with five PGD steps at each iteration during training. With respect to the set of transformations, we set σ to 0.5, r to π/2, ch and cw to 2, and b to 0.25. We trained two classifiers under these hyperparameters, one where at each step we randomly sample a single transformation, and another classifier where we apply a composition of all transformations, denoted by f^1_CIFAR and f^2_CIFAR, respectively. Both classifiers achieved 100% test set accuracy, where the test set consists of 10,000 non-watermarked images and 10,000 watermarked images.
Given a transformation function tθ : X → X, parameterized by θ, which encodes some randomness, and an input s ∈ X with class y ∈ {0, 1}, we create n copies of this input under tθ, si, i ∈ {1, ..., n}. We then measure both the SSIM of the watermarked images and the SSIM of the transformed watermarked and transformed non-watermarked images as a function of the attack success over these n inputs. Figure 6 shows the results of a transformation attack: for each of 2,000 Cifar10 test set images (1,000 non-watermarked and 1,000 watermarked) we create 1,000 new images under a transformation; each marker in the figure is the average attack success over 1 million test images. We evaluate the attack success for a number of different watermarking values ε. A large ε increases the perceptibility of the watermark, and so correspondingly decreases the SSIM score. Note that attacking non-watermarked content is unaffected by the ε value chosen for watermarking.
For ε = 1/255, the SSIM score of watermarked content is ≈ 0.99, and thus the watermark is highly imperceptible. However, a number of attacks on watermarked content succeed with a high SSIM score, such as transformations that modify the brightness of an image. Slightly increasing the perceptibility of the watermark to ε = 12/255 reduces the attack success of nearly all transformations to zero. Overall, the classifier trained on a composition of transformations is more robust to attacks on non-watermarked content.
Similar effects can be observed on ImageNet, shown in fig. 7, and the proprietary audio dataset in fig. 8. Interestingly, watermark classifiers on ImageNet seem to be more robust at smaller ε values than on Cifar10; we conjecture this is because there is a larger space in which to distribute the watermark.
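A sketch of this evaluation protocol follows; the thresholded single-logit detector interface and the use of scikit-image's SSIM (whose channel_axis argument assumes a recent scikit-image) are assumptions made for illustration.

import torch
from skimage.metrics import structural_similarity

def attack_success(detector, images, labels, T, n=1000):
    # images in [0, 1], shape (B, C, H, W); labels: 1 = watermarked, 0 = non-watermarked
    fooled, ssim_sum, total = 0, 0.0, 0
    with torch.no_grad():
        for _ in range(n):
            attacked = T(images)
            pred = (detector(attacked).squeeze(1) > 0).long()
            fooled += (pred != labels).sum().item()
            total += labels.numel()
            for a, b in zip(images, attacked):  # per-image SSIM of the transformation
                ssim_sum += structural_similarity(a.permute(1, 2, 0).cpu().numpy(),
                                                  b.permute(1, 2, 0).cpu().numpy(),
                                                  channel_axis=-1, data_range=1.0)
    return fooled / total, ssim_sum / total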
" }, { "heading": "B FURTHER COMPARISON WITH BROKEN ARROWS", "text": "Figure 9 gives analogous plots to fig. 1 when the measure of distortion introduced by both the watermark and a transformation is the Peak Signal-to-Noise Ratio (PSNR). As one may expect, the results exhibited here mirror those of fig. 1: ReSWAT and BA are comparable under Gaussian noise transformations, while ReSWAT is substantially better than BA under rotation and cropping transformations." }, { "heading": "C MEASURING HOW THE NUMBER OF SAMPLES FROM THE TRANSFORMATION DISTRIBUTION USED DURING TRAINING AFFECTS RESILIENCE TO ATTACKS", "text": "Here, we evaluate the increase in resilience to transformations when we optimize ReSWAT using eq. (1) with 25 samples from the transformation distribution at each step of training, as opposed to a single sample. Figure 10 shows the resilience improvements of ReSWAT against a Gaussian noise transformation on the Cifar10 dataset. We evaluate test accuracy on 50,000 watermarked and non-watermarked images with various levels of Gaussian noise applied. However, the improvement in resilience came at the expense of approximately a 5× increase in training time." } ]
2019
null
SP:1b32bd6b0a6c8672c109415f6fcbbad4c13c40f4
[ "In a paper a new way to compute anomality score (for a test point) is suggested. A paper is purely experimental, based on existing techniques to dimension reduction (beta-VAE and t-SNE). Given trained beta-VAE, latent vectors, obtained for training set, are feed into t-SNE algorithm. The overall anomality score for a test point is combined from 1-NN distances on t-SNE plot and reconstruction error of beta-VNE. ", "This paper presents a novel deep anomaly detection model. It combines two existing models: B-VAE and t-SNE. The B-VAE is trained unsupervised and learns an encoder and decoder which provide both an embedding and a reconstruction. Using t-SNE to reduce its dimensionality, the embedding is projected into a 2 dimensional space. An anomaly score function is defined that combines the reconstruction error and the distance in t-SNE space to the K nearest neighbor(s). Experiments are conducted with several image datasets (MNIST,FMNIST,CIFAR10,SmallNORB) and one timeseries dataset (Arrhythmia). For the image sets, the B-VAE model is implemented with a CNN, while for timeseries, a TCN is used. Comparisons are conducted showing the approach to be beat other SOT unsupervised methods, AnoGAN and ADGAN, by 63% and 22% respectively for MNIST and 8% and 2% for FMNIST (in terms of error reduction). For CIFAR-10 and FMNIST it is even demonstrated to beat a supervised SOT method CapsNET. Another experiment shows that t_SNE dramatically improves the performance over B-VAE alone. For the timeseries, the approach is not compared to other SOT approaches as the authors only provide an experiment showing that TCN beats CNN and LSTM for the implementation of the B-VAE. In addition the authors study the effect of the various parameters of the system, in particular the effect of the B in B-VAE and of alpha, the mixing factor between reconstruction error and kNN distance in t_SNE. 3D plots give a good idea on how to select optimal values for the various datasets. The impact of B is also shown on the t-SNE map for MNIST. Finally an ablation studies compares on MNIST the performance of the approach with t-SNE alone, reconstruction alone, and latent distance. On average over 4 digits taken as anomaly, the proposed approach dramatically outperforms the others." ]
Identifying anomalous samples from highly complex and unstructured data is a crucial but challenging task in a variety of intelligent systems. In this paper, we present a novel deep anomaly detection framework named AnoDM (standing for Anomaly detection based on unsupervised Disentangled representation learning and Manifold learning). The disentanglement learning is currently implemented by β-VAE for automatically discovering interpretable factorized latent representations in a completely unsupervised manner. The manifold learning is realized by t-SNE for projecting the latent representations to a 2D map. We define a new anomaly score function by combining β-VAE’s reconstruction error in the raw feature space and local density estimation in the t-SNE space. AnoDM was evaluated on both image and time-series data and achieved better results than models that use just one of the two measures and other deep learning methods.
[ { "affiliations": [], "name": "MANIFOLD LEARNING" } ]
[ { "authors": [ "Jinwon An", "Sungzoon Cho" ], "title": "Variational autoencoder based anomaly detection using reconstruction probability", "venue": "Technical report,", "year": 2015 }, { "authors": [ "Jerone T.A. Andrews", "Edward J. Morton", "Lewis D. Griffin" ], "title": "Detecting anomalous data using auto-encoders", "venue": "International Journal of Machine Learning and Computing,", "year": 2016 }, { "authors": [ "Shaojie Bai", "J. Zico Kolter", "Vladlen Koltun" ], "title": "An empirical evaluation of generic convolutional and recurrent networks for sequence modeling", "venue": "ArXiv, pp", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2013 }, { "authors": [ "Christopher P. Burgess", "Irina Higgins", "Arka Pal", "Loic Matthey", "Nick Watters", "Guillaume Desjardins", "Alexander Lerchner" ], "title": "Understanding disentangling in β-VAE", "venue": "ArXiv, pp. arXiv:1804.03599v1,", "year": 2018 }, { "authors": [ "Kyunghyun Cho", "Bart van Merrienboer", "Dzmitry Bahdanau", "Yoshua Bengio" ], "title": "On the properties of neural machine translation: Encoder-decoder approaches", "venue": "ArXiv, pp. arXiv:1409.1259,", "year": 2014 }, { "authors": [ "Yann N. Dauphin", "Angela Fan", "Michael Auli", "David Grangier" ], "title": "Language modeling with gated convolutional networks", "venue": "ArXiv, pp", "year": 2016 }, { "authors": [ "Lucas Deecke", "Robert Vandermeulen", "Lukas Ruff", "Stephan Mandt", "Marius Kloft" ], "title": "Image anomaly detection with generative adversarial networks", "venue": "In Machine Learning and Knowledge Discovery in Databases,", "year": 2019 }, { "authors": [ "Sarah M. Erfani", "Sutharshan Rajasegarar", "Shanika Karunasekera", "Christopher Leckie" ], "title": "Highdimensional and large-scale anomaly detection using a linear one-class SVM with deep learning", "venue": "Pattern Recognition,", "year": 2016 }, { "authors": [ "Shayan Fazeli" ], "title": "Ecg heartbeat categorization dataset, 2018", "venue": "URL https://www.kaggle.com/ shayanfazeli/heartbeat", "year": 2019 }, { "authors": [ "Jonas Gehring", "Michael Auli", "David Grangier", "Yann N. Dauphin" ], "title": "A convolutional encoder model for neural machine translation", "venue": "ArXiv, pp", "year": 2016 }, { "authors": [ "Jonas Gehring", "Michael Auli", "David Grangier", "Denis Yarats", "Yann N. Dauphin" ], "title": "Convolutional sequence to sequence learning", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Nico Gornitz", "Marius Kloft", "Konrad Rieck", "Ulf Brefeld" ], "title": "Toward supervised anomaly detection", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Klaus Greff", "Rupesh Kumar Srivastava", "Jan Koutnı́k", "Bas R. Steunebrink", "Jürgen Schmidhuber" ], "title": "LSTM: A search space odyssey", "venue": "ArXiv, pp. arXiv:1503.04069,", "year": 2015 }, { "authors": [ "Ryuhei Hamaguchi", "Ken Sakurada", "Ryosuke Nakamura" ], "title": "Rare event detection using disentangled representation learning", "venue": "ArXiv, pp", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "ArXiv, pp. 
arXiv:1512.03385,", "year": 2015 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-VAE: Learning basic visual concepts with a constrained variational framework", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Geoffrey E. Hinton", "Sam Roweis" ], "title": "Stochastic neighbor embedding", "venue": "In Advances in Neural Information Processing Systems", "year": 2002 }, { "authors": [ "Geoffrey E. Hinton", "Simon Osindero", "Yee-Whye Teh" ], "title": "A fast learning algorithm for deep belief nets", "venue": "Neural Computation,", "year": 2006 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Matthew D. Hoffman", "David M. Blei", "Chong Wang", "John Paisley" ], "title": "Stochastic variational inference", "venue": "Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "Matthew D. Hoffman", "Carlos Riquelme", "Matthew J. Johnson" ], "title": "The β-VAE's implicit priors", "venue": "In Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Rafal Jozefowicz", "Wojciech Zaremba", "Ilya Sutskever" ], "title": "An empirical exploration of recurrent network architectures", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Mohammad Kachuee", "Shayan Fazeli", "Majid Sarrafzadeh" ], "title": "ECG heartbeat classification: A deep transferable representation", "venue": "ArXiv, pp", "year": 2018 }, { "authors": [ "W. Karush" ], "title": "Minima of Functions of Several Variables with Inequalities as Side Constraints", "venue": "Master's thesis, Univ. of Chicago,", "year": 1939 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Department of Computer Science,", "year": 2009 }, { "authors": [ "H.W. Kuhn", "A.W. Tucker" ], "title": "Nonlinear programming", "venue": "In Berkeley Symposium,", "year": 1951 }, { "authors": [ "Yann LeCun", "Bernhard E. Boser", "John S. Denker", "Donnie Henderson", "Richard E. Howard", "Wayne E. Hubbard", "Lawrence D. Jackel" ], "title": "Backpropagation applied to handwritten zip code recognition", "venue": "Neural Computation,", "year": 1989 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Yann LeCun", "Fu Jie Huang", "Léon Bottou" ], "title": "Learning methods for generic object recognition with invariance to pose and lighting", "venue": "In IEEE Computer Society Conference on Computer Vision and Pattern Recognition,", "year": 2004 }, { "authors": [ "Dan Li", "Dacheng Chen", "Jonathan Goh", "See-Kiong Ng" ], "title": "Anomaly detection with generative adversarial networks for multivariate time series", "venue": "ArXiv, pp. arXiv:1809.04758,", "year": 2018 }, { "authors": [ "Xiaoyan Li", "Iluju Kiringa", "Tet Yeap", "Xiaodan Zhu", "Yifeng Li" ], "title": "Exploring deep anomaly detection methods based on capsule net", "venue": "
International Conference on Machine Learning", "year": 2019 }, { "authors": [ "Yifeng Li", "Xiaodan Zhu" ], "title": "Exploring Helmholtz machine and deep belief net in the exponential family perspective", "venue": "In International Conference on Machine Learning 2018 Workshop on Theoretical Foundations and Applications of Deep Generative Models, July 2018a", "year": 2018 }, { "authors": [ "Yifeng Li", "Xiaodan Zhu" ], "title": "Exponential family restricted Boltzmann machines and annealed importance sampling", "venue": "In International Joint Conference on Neural Networks,", "year": 2018 }, { "authors": [ "Yifeng Li", "Fang-Xiang Wu", "Alioune Ngom" ], "title": "A review on machine learning principles for multiview biological data integration", "venue": "Briefings in Bioinformatics,", "year": 2018 }, { "authors": [ "Emile Mathieu", "Tom Rainforth", "N. Siddharth", "Yee Whye Teh" ], "title": "Disentangling disentanglement in variational autoencoders", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Leland McInnes", "John Healy", "James Melville" ], "title": "UMAP: Uniform manifold approximation and projection for dimension reduction", "venue": "ArXiv, pp", "year": 2018 }, { "authors": [ "Gábor Melis", "Chris Dyer", "Phil Blunsom" ], "title": "On the state of the art of evaluation in neural language models", "venue": "ArXiv, pp", "year": 2017 }, { "authors": [ "Eric Nalisnick", "Akihiro Matsukawa", "Yee Whye Teh", "Dilan Gorur", "Balaji Lakshminarayanan" ], "title": "Do deep generative models know what they don't know", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Daehyung Park", "Yuuna Hoshi", "Charles C. Kemp" ], "title": "A multimodal anomaly detector for robot-assisted feeding using an LSTM-based variational autoencoder", "venue": "IEEE Robotics and Automation Letters,", "year": 2017 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in PyTorch", "venue": "In Neural Information Processing Systems Autodiff Workshop,", "year": 2017 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In International Conference on Machine Learning, pp. II–1278–II–1286,", "year": 2014 }, { "authors": [ "Sara Sabour", "Nicholas Frosst", "Geoffrey E. Hinton" ], "title": "Dynamic routing between capsules", "venue": "In Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Thomas Schlegl", "Philipp Seeböck", "Sebastian M. Waldstein", "Ursula Schmidt-Erfurth", "Georg Langs" ], "title": "Unsupervised anomaly detection with generative adversarial networks to guide marker discovery", "venue": "In International Conference on Information Processing in Medical Imaging,", "year": 2017 }, { "authors": [ "David M.J. Tax", "Robert P.W. Duin" ], "title": "Support vector domain description", "venue": "Pattern Recognition Letters,", "year": 1999 }, { "authors": [ "Aäron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew W. Senior", "Koray Kavukcuoglu" ], "title": "WaveNet: a generative model for raw audio", "venue": "In Speech Synthesis Workshop,", "year": 2016 }, { "authors": [ "Laurens van der Maaten", "Geoffrey E. 
Hinton" ], "title": "Visualizing data using t-SNE", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Alexander Waibel", "Toshiyuki Hanazawa", "Geoffrey E. Hinton", "Kiyohiro Shikano", "Kevin J. Lang" ], "title": "Readings in Speech Recognition", "venue": null, "year": 1990 }, { "authors": [ "Martin Wattenberg", "Fernanda Vigas", "Ian Johnson" ], "title": "How to use t-SNE effectively. Distill, 2016", "venue": "doi: 10.23915/distill.00002. URL http://distill.pub/2016/misread-tsne", "year": 2016 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms", "venue": "ArXiv, pp", "year": 2017 }, { "authors": [ "Cheng Zhang", "Judith Butepage", "Hedvig Kjellstrom", "Stephan Mandt" ], "title": "Advances in variational inference", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2008 }, { "authors": [ "LEARNING Higgins" ], "title": "2017) proposed a novel deep generative model, named β-VAE, a modification of VAE by introducing an adjustable hyperparameter β to learn an interpretable disentangled representation of the data generative latent factors. Specifically, β functions as a controller to trade off between the extent of learning constraints and reconstruction accuracy. The constraints impose a limit on", "venue": null, "year": 2017 }, { "authors": [ "Higgins" ], "title": "2017) assumed that an image x is generated by the true world simulator using ground truth data generative factors: p(x|v,w) = Sim(v,w), where v is set of conditionally independent factors and w is set of conditionally dependent factors. Therefore, the joint distribution of the data x and a set of generative latent factors z is: p(x|z) ≈ p(x|v,w) = Sim(v,w)", "venue": null, "year": 2017 }, { "authors": [ "Mathieu" ], "title": "2019) explained that most recent work for learning disentangled representations", "venue": null, "year": 2019 }, { "authors": [ "Burgess" ], "title": "Eqφ(z|x)[log pθ(x|z)]− βKL(qφ(z|x)||p(z))− αD(qφ(z), p(z))", "venue": null, "year": 2018 }, { "authors": [ "Hamaguchi" ], "title": "Lact encourages activation of common features to avoid a trivial solution. After common features of paired images are separated, the means of common features from two images are fed into event detector (a classifier) for training", "venue": null, "year": 2018 }, { "authors": [ "Bai" ], "title": "2018) employed a basic architecture which is essentially same as the time delay neural network proposed by Waibel et al. (1990) to ensure outputs of same length as inputs and no leakage from the future into the past", "venue": null, "year": 1990 }, { "authors": [ "Bai" ], "title": "2018) also utilized a generic residual module (He et al., 2015) in place of a convolutional", "venue": null, "year": 2015 }, { "authors": [ "Bai" ], "title": "2018) further discussed the advantages of TCN (including parallelism, flexible receptive", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Detecting anomalies in data flow of modern intelligent systems is an important but challenging problem. Formally speaking, anomaly detection problems can be statistically viewed as identifying outliers having low probabilities from the modelling of data distribution p(x). Practically, since statistical modelling of the data is often difficult, it degenerates to domain description (Tax & Duin, 1999) or supervised prediction (Gornitz et al., 2013) problems in some cases. The exact explanation of an anomalous data point depends on the specific domain of focus. In data centers, it probably indicates an attempt of cyber intrusion. In recognition systems, it could be an adversarial attack. In biomedical information systems, it means possible onset of certain diseases. In Internet of Things (IoT) systems, it may represent a hardware failure or alarming event captured by sensors. An anomalous sample is not always associated with negativity. Sometimes, it leads to novel discoveries in scientific explorations.\nHowever, from the data analytics perspective, anomaly detection is a difficult task due to the following reasons. (1) Many forms of data, e.g., images, text, and other types of sequences, are often highly unstructured and complex. How can these data be well represented and high-level information be extracted by an algorithm? (2) The sample sizes of modern data sets are often extremely large and most of them are unlabelled. Unfortunately, traditional methods do not scale and perform well on these data. (3) When data of multiple modalities are naturally available for same events in a system, a robust and precise algorithm needs to be designed to integrate these information for system diagnosis or decision making. (4) Many intelligent systems, such as IoTs, require real-time detection and reaction of abnormal events to avoid costly and irrevocable damages. Thus, anomaly monitoring algorithms to be designed in these platforms must be highly efficient. In summary, anomaly detection raises challenges in representability, scalability, multimodality, and time complexity.\nDeep learning (LeCun et al., 2015) offers great potentials to overcome these challenges. (1) Representation learning mechanisms (such as convolution for images, embedding for discrete symbols, and recurrence for time-series) have been developed in supervised and unsupervised deep models to consider the nature of specific types of input samples and encode them into vectors of continuous values as corresponding latent representations. (2) Most deep learning models are trained using stochastic gradient descent that splits a giant training set into mini-batches. Thus, learning be-\ncomes unrestricted and blessed by a large sample size. Particularly, stochastic variational inference (Hoffman et al., 2013; Zhang et al., 2018) has successfully enabled scalable learning and inference for deep generative models (DGMs) on a vast amount of unlablled data. (3) The development of deep learning programming packages, such as PyTorch (Paszke et al., 2017) and TensorFlow (Abadi et al., 2016), greatly eases the assembly of multiple network components (corresponding to different modalities) together for multimodal representation learning (Li et al., 2018b). 
(4) Once a deep model is learned, the inference or encoding step is very efficient, thanks to the highly parallel computing architectures and techniques.\nIn some applications, if the domain of anomalous and normal samples is well defined, anomaly detection can be reduced to binary classification problems. However, in many situations, either the domain of anomalous samples cannot be fully understood or modelled, or the domain of the normal samples is too complicated to be modelled in one class. DGMs are more suitable than supervised methods in such cases. DGMs are concerned with the joint distribution of visible and latent variables with a hierarchy of stochastic (and deterministic) layers. With proper emphasis on disentanglement of latent representations, DGMs have the potential of dissecting hidden factors that are key to sample generation. Unsupervised disentangled representation learning (Bengio et al., 2013) renders several benefits. (1) It helps better understand our data, providing a path towards explainable AI. (2) It gives a better control on the generation process of novel samples. (3) The disentanglement of latent factors may provide an opportunity to distinguish anomalies based on the landscape of latent space, which is our interest in this paper. It has been shown that the likelihood of a data point p(x) estimated in DGM is not a reliable measure for detecting abnormal samples (Nalisnick et al., 2019). Instead, reconstruction error is widely used as an anomaly score function (An & Cho, 2015).\nAs a variant of variational autoencoder (VAE) (Kingma & Welling, 2014), β-VAE (Higgins et al., 2017) is designed for unsupervised discovery of interpretable factorized latent representations from raw image data. An adjustable hyperparameter β is introduced to balance the extent of learning constraints (a limit on the capacity of the latent information channel and an emphasis on learning statistically independent latent factors) and reconstruction accuracy. It was demonstrated that βVAE with appropriately tuned value of β (when β > 1) qualitatively outperforms VAE (when β = 1, β-VAE is exactly VAE). Burgess et al. (2018) proposed a modification to the training regime of βVAE by progressively increasing the information capacity of the latent code during training. This modification facilitates the robust learning of disentangled representations in β-VAE, without the previous trade-off in the reconstruction accuracy. Hoffman et al. (2017) introduced a reformulation of β-VAE for 0 < β < 1. They argued that, within in this range, training β-VAE is equivalent to optimizing an approximate log-marginal likelihood bound of VAE under an implicit prior.\nManifold learning is a family of nonlinear dimensionality reduction techniques. The t-distributed stochastic neighbor embedding (t-SNE) (van der Maaten & Hinton, 2008) is an unsupervised manifold learning method primarily used for data exploration and visualization by approximating highdimensional data distribution using a two or three-dimensional map that could preserve local and certain global structures of the data. The use of t-SNE for anomaly detection has been sceptical (van der Maaten & Hinton, 2008). However, no comprehensive investigation has been made in this topic. 
Taking advantage of both disentangled representation learning (using β-VAE as an implementation) and low-dimensional manifold learning (using t-SNE as an implementation), we propose a novel anomaly detection approach named AnoDM, standing for Anomaly detection based on unsupervised Disentangled representation learning and Manifold learning. We introduce a new anomaly score function by combining: (1) β-VAE's reconstruction error, and (2) distances between the latent representations of test points and training points in the t-SNE map. AnoDM is a general framework; thus, any disentangled representation learning and manifold learning techniques can be applied. The choice of a lower-level encoding scheme in β-VAE depends on the data type of interest. For image data, a deterministic convolutional network (CNN) is used in the encoder. In the case of time-series (sequence) data, we design an improved version of β-VAE by replacing the CNN with a temporal convolutional network (TCN) (Bai et al., 2018), a generic architecture for convolutional sequence prediction, in the encoder. We incorporate TCN as part of the encoder because Bai et al. (2018) have shown that TCN outperforms canonical recurrent networks such as LSTMs (Hochreiter & Schmidhuber, 1997) across a range of supervised learning tasks and recommended that CNN should be regarded as the first method to try for sequence modeling tasks. Regarding the decoding architecture, we simply choose a CNN, because by choosing a simpler CNN architecture as a part of the decoder, the model can achieve comparable or even better performance while taking much less running time.
The contributions of this paper are summarized as follows. (1) We comprehensively explore the capacity of unsupervised disentangled representation learning, using β-VAE as an implementation, for anomaly detection. (2) We thoroughly investigate the potential of manifold learning for outlier identification by taking the disentangled latent representations from β-VAE as input to t-SNE. To the best of our knowledge, this is the first attempt to explore t-SNE for anomaly detection. (3) For sequence anomaly detection, instead of using the prevailing recurrent networks (such as LSTM), as a practical contribution, we adopt an improved convolutional architecture (TCN) to capture the temporal dependency in the encoder in an unsupervised way." }, { "heading": "2 RELATED WORK", "text": "In the big data era, the development of deep learning models, especially DGMs, flourishes due to the need to model and analyze massive amounts of unstructured data (such as images, time series, graphs, text, etc.) generated in many application domains. Designing DGM-based solutions for anomaly detection has become an important topic. Since DGMs, such as VAE (Kingma & Welling, 2014) and deep belief net (DBN) (Hinton et al., 2006; Li & Zhu, 2018a), aim at modelling the joint distribution of visible and latent variables (that is, p(x,h)), their likelihood p(x), obtained by marginalizing out h, may serve as an abnormality indicator. However, unlike exponential family restricted Boltzmann machines (exp-RBMs) (Li & Zhu, 2018b), the exact likelihood is unavailable for most DGMs. Alternatively, reconstruction error serves as an abnormality measure based on the intuition that out-of-distribution samples will be reconstructed badly (An & Cho, 2015).
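For illustration, a minimal sketch of such a reconstruction-error score follows; the encode/decode interface is hypothetical, not a specific library's API.

import torch

def reconstruction_error(vae, x):
    # score a batch of inputs by their per-sample mean squared reconstruction error
    with torch.no_grad():
        mu, logvar = vae.encode(x)  # hypothetical encoder returning the posterior parameters
        x_hat = vae.decode(mu)      # decode from the posterior mean for a deterministic score
    return ((x - x_hat) ** 2).flatten(1).mean(dim=1)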
Some deep hybrid methods (e.g. VAE+OCSVM (Andrews et al., 2016) and DBN+OCSVM (Erfani et al., 2016)) successfully combine the classical one-class support vector machine (OCSVM; or kernel-based support vector domain description (SVDD)) with DGMs, using DGMs to learn latent representations of samples and OCSVM to detect abnormal data points. However, these methods face the challenge of scalability, because the size of the kernel matrix in the dual form of SVDD is quadratic in the sample size.
The generative adversarial net (GAN) has also been applied to anomaly detection (Schlegl et al., 2017). Since there is no encoder in a GAN, Deecke et al. (2019) presented the ADGAN algorithm, which relies on the availability of a good representation of a sample in the latent space of the generator, assuming that the generator is able to effectively capture the distribution of the training data. Li et al. (2018a) proposed the GAN-AD method for cyber-physical systems (CPSs). It distinguishes fake data from actual data by taking into consideration both the discrimination loss calculated by the trained discriminator and the residual loss between reconstructed and actual test data.
Furthermore, DGM-based algorithms have also been devised to detect anomalies in sequence data (e.g. LSTM-VAE (Park et al., 2017) and GAN-AD (Li et al., 2018a)). Conventionally, canonical recurrent networks (such as LSTM and GRUs (Cho et al., 2014)) are considered the dedicated methods for sequence modeling. Some recent studies have also claimed that there was no architecture that could consistently beat LSTM in some typical sequence modelling tasks (Jozefowicz et al., 2015; Greff et al., 2015; Melis et al., 2017). On the other hand, some other researchers insist that CNN (LeCun et al., 1989) should be considered a more appropriate choice for sequences. Inspired by more recent CNN-based sequence modelling (such as machine translation (Gehring et al., 2016; 2017) and language modeling (Dauphin et al., 2016)), Bai et al. (2018) conducted a systematic evaluation of generic convolutional and recurrent architectures for sequence modelling across a broad range of tasks that are commonly used to benchmark recurrent networks, and concluded that convolutional networks, rather than recurrent networks, should be regarded as a "natural starting point for sequence modelling tasks"." }, { "heading": "3 METHOD", "text": "In this paper, we propose a novel generic anomaly detection framework named AnoDM, which, for the first time, combines unsupervised disentangled representation learning (implemented using β-VAE as an example) and low-dimensional manifold learning (currently using t-SNE as the implementation) to detect outliers by effectively taking advantage of both the reconstruction in the raw feature space and the disentangled latent distribution in the t-SNE map. Figure 1 shows the architecture of AnoDM, which includes two main phases: (1) unsupervised disentangled representation learning and (2) an anomaly detector. After β-VAE is learned using unlabelled normal training samples, it can then be employed
Once latent embeddings of both training samples (or a representative subset from the training data) and a test sample are obtained, t-SNE is used to map the latent representations of these samples further to the 2-dimensional space (called t-SNE space or map), such that the average distance between the 2D representation of the test sample and its k nearest neighbors from the 2D representations of training samples is calculated. Finally, this distance is combined with the reconstruction error of the test sample to define its anomaly score. The essential parts of this framework are discussed below in details. Full AnoDM approach is given in Algorithm 1 in Appendix E." }, { "heading": "3.1 UNSUPERVISED DISENTANGLED REPRESENTATION LEARNING", "text": "The unsupervised disentangled representation learning component in our architecture is implemented but not limited by β-VAE (Higgins et al., 2017; Burgess et al., 2018; Hoffman et al., 2017; Mathieu et al., 2019). A self-inclusive description of β-VAE is provided in Appendix B. The objective function to be maximized for β-VAE is defined as (Burgess et al., 2018):\nL = Eqφ(z|x)[log pθ(x|z)]− β|DKL(qφ(z|x)||p(z))− C|. (1)\nThe first term of this objective function corresponds to reconstruction error in the raw feature space. The KL divergence characterizes the discrepancy between approximate posterior and isotropic prior of latent representations. A small discrepancy between them indicates high disentanglement of latent representations in independent variables. C is a hyperparameter which is used to improve the quality of reconstructed images. The loss function of the original β-VAE proposed in (Higgins et al., 2017) does not have this hyperparameter. The value of β trades off reconstruction error and disentanglement. Unlike (Higgins et al., 2017) and (Burgess et al., 2018), we consider β > 0 rather than just β > 1, because it is unnecessary to bound the value of β by 1, β > 0 allows us search for a more appropriate disentanglement. The special case β = 0 could make the model learning very unstable, because the variance of inference distribution loses control. It worth highlighting that other unsupervised disentanglement models can be used as well in AnoDM. For example, Mathieu et al. (2019) interpreted disentanglement as decomposition instead of independence by adding an additional regularization term to reduce the discrepancy between the aggregate posterior and a desired structured\nprior. However, designing a properly structured prior could be practically challenging. The adoption of β-VAE in our framework is sufficient to prove the concept that unsupervised disentanglement helps anomaly detection." }, { "heading": "3.2 EFFECTIVITY OF T-SNE ALGORITHM", "text": "In addition to β-VAE’s reconstruction error, we use the average distance between a test sample and its k-nearest neighbors from the collection of training samples in the t-SNE map to score the outlierness of a test sample. As discussed in Appendix C, t-SNE is significantly influenced by perplexity. The nature of complexity in data distributions makes it impossible to utilize a uniform criteria to define optimal perplexity for all data. Moreover, Wattenberg et al. (2016) mentioned several weaknesses of t-SNE, for examples, (1) it naturally expands dense clusters and contracts sparse ones, evening out cluster sizes, and (2) distances between clusters might not reflect global geometry. 
However, it is likely that k-nearest neighbors still work for local clumps, because, with a proper value of the perplexity, the local topological information of the latent distributions can be preserved in the t-SNE plot. Thus, in t-SNE space, the k-nearest-neighbor measure is more suitable than full density estimation, which is very sensitive to the sizes of clusters. Furthermore, as shown in Section 4, computing distances in the 2D t-SNE map can be more robust than computing them in β-VAE's latent space, where many uninformative factors may influence the calculation of distances. Finally, it is worth clarifying that we do not directly learn a 2D latent representation from β-VAE, because it would bottleneck the information flow for reconstruction too severely. Instead, a lower-dimensional representation is learned through t-SNE for density estimation only." }, { "heading": "3.3 DETERMINISTIC OR STOCHASTIC LATENT REPRESENTATIONS FOR T-SNE", "text": "Either the mean µ or a sample z from the approximate inference distribution q(z|x) can be passed to t-SNE to calculate the k-NN distance of a test sample. There is only a trivial difference between the performances achieved by these two methods in our framework. Generally, the µ-based method achieved slightly better performance. The comparison of these two methods can be found in Table 2 in the appendices. Furthermore, from the latent representations' t-SNE maps (see Figure 9 in the appendices), one can interestingly see that when β is small (not overly large), the t-SNE maps for both methods are quite similar. As β becomes overly large, some normal classes can still form their own clusters in the µ-based method (even though some similar classes, such as classes 3 and 8 in MNIST, tend to mingle together), but in the z-based method all classes become entangled with each other. The same phenomena can be observed on the other datasets (see Figures 10 and 11 in the appendices). Therefore, the µ-based method is used in the current design of AnoDM." }, { "heading": "3.4 TCN ENCODER FOR UNSUPERVISED SEQUENCE MODELLING", "text": "Bai et al. (2018) distilled the best practices of convolutional network design into a simple architecture, referred to as a temporal convolutional network (TCN), with two distinctive characteristics: (1) the convolutions in the architecture are causal, and (2) the architecture can take a sequence of any length and map it to an output vector of fixed length, just as with an RNN. Bai et al. (2018) also explained that TCNs capture significantly longer histories than recurrent networks. Inspired by Bai et al. (2018), we replace the CNN with a TCN in the encoder of β-VAE when evaluating the proposed AnoDM framework on time-series data, while we still use a CNN in the decoder, because our preliminary experiments demonstrated that keeping the decoder as a simpler CNN achieves comparable or even better results while taking much less computing time. The architecture of the TCN used in this paper is depicted in Figure 7 in the appendices. In this TCN, we set the kernel size to 4 and the dilation factors to [1, 2, 4, 8, 16, 32]. In Section 4, the comparison among TCN, CNN, and LSTM encoders in our framework also shows that the TCN outperforms the CNN, and particularly the LSTM, to a great extent for ECG signal anomaly detection.
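To make the TCN encoder concrete, the following is a minimal sketch of a dilated causal convolution block in the spirit of Bai et al. (2018), assuming PyTorch; the actual encoder described above additionally uses weight normalization and dropout, and all module names here are illustrative rather than the paper's implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalDilatedConv1d(nn.Module):
    # One dilated convolution, left-padded so that the output at time t
    # depends only on inputs up to time t (causality).
    def __init__(self, channels, kernel_size=4, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):  # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.pad, 0)))  # pad on the left only

class TCNBlock(nn.Module):
    # Residual block: two dilated causal convolutions plus a skip connection.
    def __init__(self, channels, kernel_size=4, dilation=1):
        super().__init__()
        self.net = nn.Sequential(
            CausalDilatedConv1d(channels, kernel_size, dilation), nn.ReLU(),
            CausalDilatedConv1d(channels, kernel_size, dilation), nn.ReLU())

    def forward(self, x):
        return torch.relu(x + self.net(x))

# Stacking blocks with dilation factors [1, 2, 4, 8, 16, 32], as in Section 3.4,
# yields an exponentially large receptive field.
encoder_trunk = nn.Sequential(*[TCNBlock(32, 4, d) for d in (1, 2, 4, 8, 16, 32)])

The trunk's output can then be reduced (for example, by a final strided convolution or pooling) to produce the mean and log-variance of the β-VAE latent encoding.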
}, { "heading": "3.5 ANOMALY SCORE FUNCTION IN ANODM", "text": "In the anomaly detector, the reconstruction error of a test sample in the original feature space and the average distance from its k-nearest-neighbors in training samples within the 2D t-SNE map are\ncombined as a final anomaly score function:\nSβVAE+tSNE(xte) = αDRE(xte) + (1− α)DktSNE(xte), (2) where the first term is defined using normalized squared error (NSE):\nDRE(xte) , NSE(xte,x′te) = ‖xte − x′te‖22 ‖xte‖2 , (3)\nwhere xte is a test sample, and x′te is its reconstructed version by sending the stochastic latent encoding through the decoder of β-VAE. The second term in Equation (2) is defined using β-VAE’s deterministic latent encoding (mean from the encoder of learned β-VAE) as input to t-SNE:\nDktSNE(x (i) te ) ,\n1\nk ∑ j∈N(i,k) ‖l(i)te − l (j) tr ‖2, (4)\nwhere l(i)te is the 2D representation of the i-th test sample in t-SNE map, N(i, k) is the set of indices of l(i)te ’s k nearest neighbors from training samples’ 2D representations ltr in t-SNE map. In Equation (2), α ∈ [0, 1] is the combination hyperparameter such that the two terms can effectively complement each other. To allow the anomaly score function to achieve its full potential, α value should be sensitively searched, because the values of DRE and DktSNE can stay at different magnitudes, a very small change of α value may dramatically alter the contributions of these two terms. Alternatively, the distance score in Equation 4 can be normalized by average distance of training samples in t-SNE map, which may alleviate the magnitude difference, thus ease search of optimal value of α. The use of this normalized distance is investigated in Appendix F." }, { "heading": "4 EXPERIMENTS", "text": "We evaluated our framework on four public image datasets, including MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017), Small-Norb (LeCun et al., 2004) and CIFAR-10 (Krizhevsky, 2009), as well as one collection of ECG heartbeat categorization time-series data named Arrhythmia (Fazeli, 2018). Brief descriptions of these data sets can be found in Appendix G. The detail of β-VAE architecture is given in Table 3 in appendices. The number of epochs was set to 20 for experiments on MNIST and Fashion-MNIST datasets, and 50 for CIFAR-10, Small-Norb, and Arrhythmia datasets; batch size was set to 100 for experiments on all these datasets. When using t-SNE, the dimension of t-SNE map was set as 2 for all datasets, perplexity 30, the learning rate 200, and maximum number of iterations 1000. The value of α in the anomaly score function is searched from set {0.0, 0.005, 0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99, 1.0}. We used k = 1 for calculating k-nearest-neighbors distance in t-SNE map (we also tried to set k to 3 or 5, the results were similar)." }, { "heading": "4.1 COMPARISON WITH CAPSNET, GANS, AND VAE", "text": "We compared AnoDM with state-of-the-art algorithms including a supervised method – CapsNet (Li et al., 2019), and two types of generative models – GANs (including AnoGAN and ADGAN) (Deecke et al., 2019) and β-VAE (implemented by setting α = 1 thus using only reconstruction error as anomaly score). As shown in Table 1, on average, AnoDM achieved either comparable (on MNIST) or better (on Fashion-MNIST and CIFAR-10) performance in terms of receiver operating characteristic curve (auROC). On Fashion-MNIST, CapsNet (prediction-probability-based), as the best benchmark method, obtained an average auROC of 0.765, while AnoDM achieved 0.883. 
On MNIST, both AnoDM and CapsNet obtained the highest performance. However, CapsNet is a supervised method that takes advantage of class information, while ours is completely unsupervised, which is more suitable in many practical settings where class information is incomplete or unavailable. Furthermore, by comparing AnoDM with the β-VAE baseline that only uses the reconstruction error as the anomaly score, AnoDM dramatically improves performance in all cases. In other words, t-SNE makes a prominent contribution to improving β-VAE for anomaly detection problems. However, none of the generative models worked well on Small-Norb, mainly because these models use convolutions to extract features from images, and convolutions can only capture translations but not other affine transformations. Although CapsNet learns these transformations as a supervised method, it is worth exploring unsupervised learning of affine transformations as a future topic." }, { "heading": "4.2 IMPACT OF BETA TO PERFORMANCE", "text": "As discussed in Higgins et al. (2017), β (> 1) functions as a controller that encourages the most efficient latent representation learning by limiting the capacity of the latent information channel. Mathieu et al. (2019), however, interpreted the role of β as tuning a proper level of overlap of the encodings, by working with another term that regularizes the divergence between the aggregate posterior qφ(z) and the desired prior p(z). The main intuition is that purely increasing β induces too much overlap, which actually discourages disentanglement (information necessary for expressing the desired structure is lost). Higgins et al. (2017) demonstrated that β-VAE with β > 1 leads to interesting results when learning interpretable factorized latent representations on a variety of datasets. Surprisingly, our investigation demonstrates that setting 0 < β < 1 actually achieves state-of-the-art results for anomaly detection problems on a range of datasets, such as MNIST, Fashion-MNIST and Arrhythmia. Figure 2 illustrates the impact of the values of β and α in our anomaly score function. Interestingly, the best performances were achieved when β < 1, and performance generally degrades as β increases, resonating with Mathieu et al. (2019) in that overly large values of β cause a mismatch between qφ(z) and p(z) (resulting in an inappropriate level of overlap in the latent space). This phenomenon can be further seen in the t-SNE maps of the latent embeddings in Figure 3. When β becomes much larger than the appropriate value, the anomalous class becomes entangled with its neighboring normal classes and the boundaries between normal classes become unclear. Theoretically, adding the divergence between the aggregate posterior qφ(z) and a desired structured prior p(z) is an effective way to limit the level of overlap when β is too large. However, it is practically challenging to design an appropriate structured prior. Therefore, in our investigation, we focused on exploring the full range of β values in β-VAE to study the impact of disentanglement on anomaly detection. Since our framework is quite general, it can be easily extended to other unsupervised disentangled representation learning models for anomaly detection." }, { "heading": "4.3 IMPACT OF BETA TO T-SNE REPRESENTATIONS", "text": "The t-SNE plots in Figure 3 reflect the impact of the value of β on the latent representations in the case of identifying the anomalous digit 7 in MNIST. In this example, the best performance (auROC = 0.975) was achieved when β = 0.01.
Clearly, as β increases, all latent clusters become less dense, and more anomalous latent data points move toward neighboring clusters. Furthermore, Figure 3 also corroborates that, even though in t-SNE maps distances between clusters might not reflect global geometry and cluster sizes might not mirror the true sizes (Wattenberg et al., 2016), using the average distance from a test sample to its k nearest normal data points in t-SNE space to quantify outlierness is still a very effective way to distinguish anomalous samples when β is tuned properly." }, { "heading": "4.4 EVALUATION OF ANOMALY SCORE FUNCTION", "text": "In order to better evaluate our anomaly score function, as formulated in Equation (2), we conducted a comprehensive comparison with methods based only on either the distance in the t-SNE map (DktSNE) or the reconstruction error in the raw feature space (DRE). To see the contribution of t-SNE, it is also compared with the method that calculates the nearest-neighbor distance directly in the latent space of β-VAE. Figure 4 displays the ROC curves of these four approaches when the anomalous classes are, respectively, 1 (“Trouser/pants”), 3 (“Dress”), 5 (“Sandal”), and 7 (“Sneaker”) on Fashion-MNIST. It is obvious that AnoDM achieves the best results among them by taking advantage of both the β-VAE reconstruction and the t-SNE embedding. The β-VAE reconstruction reflects whether useful information is captured by the model through recovering the input x; the t-SNE embedding indicates the disentanglement of the latent representations z. The two measures effectively complement each other. Besides, comparing the auROCs of the t-SNE-based and latent-distance-based score functions, one can clearly see that the former dramatically outperforms the latter. The same conclusion can be drawn for MNIST, CIFAR-10, and Arrhythmia, as displayed in Figures 12, 14, 16, 17, and 18 in the appendices. To further show that the optimal values of α being close to 1 (see Figure 2) are due to the magnitude difference rather than t-SNE being less useful, we replaced the distance score (Equation (4)) with the normalized distance score in the weighted final anomaly score function (Equation (2)). We found that the optimal values of α shift to the lower end of the spectrum (see Figure 8). This implies that t-SNE does play a critical role in our framework." }, { "heading": "4.5 ANODM FOR TIME-SERIES", "text": "As mentioned in Section 3, our method uses a TCN encoder in β-VAE for time-series anomaly detection. Figure 5 displays the comparison among TCN, CNN and LSTM encoders in the AnoDM framework on Arrhythmia. LSTM-VAE, a special case of an LSTM-based β-VAE, was presented in (Park et al., 2017) for state-of-the-art sequence modelling. For the five classes in Arrhythmia, each class was iteratively treated as the anomalous class, while the other classes were used as normal classes. The TCN-encoder-based method outperforms the other two methods significantly in all five cases. Even though the CNN encoder achieved impressive results when detecting the anomalous classes “S”, “V”, “F” and “Q”, it did not work well when class “N” was treated as the anomaly. One possible reason might be that, compared with the TCN and LSTM, the performance of the CNN is more sensitive to the training sample size. Taking the above case as an example, as class 0 (“N”) accounts for over 80% of the training data, when considering it as the anomaly, the normal training data become insufficient for learning the β-VAE.
Nevertheless, in the TCN-based β-VAE, each hidden unit of the last deterministic hidden layer before the latent encoding at the bottleneck is calculated based on much longer sequence dependencies, so it is less sensitive to a small sample size. In conclusion, as mentioned in (Bai et al., 2018), the TCN should be regarded as a natural starting point for sequence modeling tasks.

As a case study, Figure 6a shows the original ECG signals and the signals reconstructed by the TCN-based β-VAE when considering class “Q” as the anomaly. Normal samples can be reconstructed very well, whereas abnormal samples suffer from larger reconstruction errors. Meanwhile, the corresponding t-SNE plot in Figure 6b displays two distinctive clusters of abnormal samples. The combination of these two measures thus leads to the best performance, as seen in Figure 6c." }, { "heading": "5 CONCLUSIONS", "text": "We propose a new methodology which successfully integrates t-SNE with disentangled representation learning for anomaly detection. This approach achieved state-of-the-art performances on both image data (MNIST, Fashion-MNIST and CIFAR-10) and the Arrhythmia time-series data. Specifically, the best performance is achieved when 0 < β < 1 in almost all cases involving β-VAE. We also defined an anomaly score function that effectively takes advantage of both the low-dimensional t-SNE embedding and the β-VAE reconstruction. Our algorithm demonstrated that t-SNE plays an essential role in measuring abnormality. This work initiates research on anomaly detection using unsupervised disentangled representation learning and lower-dimensional manifold learning. Besides, our model uses a TCN as the encoding architecture for detecting anomalous time-series data, and the experimental results show that the TCN consistently outperforms the CNN and LSTM. As a proof of concept, our current framework automatically inherits the advantages of deep learning to address anomaly detection's issues of representability and scalability, as discussed at the beginning of this paper. The extension of our framework to multimodal data is straightforward. It is also possible that a neural t-SNE component could be designed and integrated into the learning of β-VAE to achieve real-time efficiency. Other well-performing manifold learning methods, such as UMAP (McInnes et al., 2018), which is faster and preserves global topology, could be employed as a replacement for t-SNE." }, { "heading": "B BETA-VAE AND BEYOND FOR DISENTANGLED REPRESENTATION LEARNING", "text": "Higgins et al. (2017) proposed a novel deep generative model, named β-VAE, a modification of the VAE that introduces an adjustable hyperparameter β to learn an interpretable disentangled representation of the data generative latent factors. Specifically, β functions as a controller to trade off the extent of the learning constraints against reconstruction accuracy. The constraints impose a limit on the capacity of the latent information channel and an emphasis on learning statistically independent latent factors. Higgins et al. (2017) demonstrated that β-VAE with appropriately tuned β (β > 1) qualitatively outperforms the VAE (β = 1) as well as state-of-the-art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs).

Higgins et al.
(2017) assumed that an image x is generated by the true world simulator using ground-truth data generative factors: p(x|v,w) = Sim(v,w), where v is a set of conditionally independent factors and w is a set of conditionally dependent factors. Therefore, the distribution of the data x given a set of generative latent factors z is: p(x|z) ≈ p(x|v,w) = Sim(v,w). The aim of this generative model is then to ensure that the latent factors inferred from qφ(z|x) capture the generative factors v in a disentangled manner. The conditionally dependent data generative factors w can remain entangled in a separate subset of z that is not entangled with v. Considering that the prior p(z) is set to be an isotropic unit Gaussian p(z) = N(0, I), a constraint δ is introduced to encourage the matching between qφ(z|x) and p(z), such that the disentangling property of the inferred qφ(z|x) can be realized.

This follows the same incentive as in the VAE: maximizing the probability of generating real data, while minimizing the distance between the generative and approximate posterior distributions, as formulated below:

max φ,θ Ex∼X [ Eqφ(z|x)[log pθ(x|z)] ] (20)
s.t. DKL(qφ(z|x)||p(z)) < δ, (21)

where X = {x1, x2, . . . , xn} is the training dataset. Rewriting this problem as a Lagrangian under the KKT conditions (Kuhn & Tucker, 1951; Karush, 1939), the following function F(θ, φ, β) is obtained:

F(θ, φ, β) = Eqφ(z|x)[log pθ(x|z)] − β(DKL(qφ(z|x)||p(z)) − δ)
= Eqφ(z|x)[log pθ(x|z)] − βDKL(qφ(z|x)||p(z)) + βδ
≥ Eqφ(z|x)[log pθ(x|z)] − βDKL(qφ(z|x)||p(z)). (22)

The objective function to be maximized in β-VAE is thus defined as:

Lβ(θ, φ) = Eqφ(z|x)[log pθ(x|z)] − βDKL(qφ(z|x)||p(z)), (23)

where the Lagrange multiplier β is the regularisation coefficient that constrains the capacity of the latent information channel z and puts implicit independence pressure on the learnt posterior due to the isotropic nature of the Gaussian prior p(z). When β = 1, β-VAE corresponds to the original VAE formulation of Kingma & Welling (2014). When β > 1, it applies a stronger constraint, which limits the capacity of z and encourages the model to learn the most efficient representation of the data. Theoretically, a higher β encourages more efficient latent encoding and further encourages disentanglement. However, a higher β may lead to poorer reconstructions due to the loss of high-frequency details when passing through a constrained latent bottleneck.

Burgess et al. (2018) proposed an improvement to β-VAE that progressively increases the information capacity of the latent code during training. It facilitates the robust learning of disentangled representations in β-VAE without the previous trade-off in reconstruction accuracy. The objective function to be maximized in β-VAE is redefined as:

Lβ(θ, φ) = Eqφ(z|x)[log pθ(x|z)] − β|DKL(qφ(z|x)||p(z)) − C|, (24)

where the hyperparameter β controls how heavily to penalise the deviation of DKL(qφ(z|x)||p(z)) from a controllable value C. By gradually increasing C from zero to a large value, good-quality reconstructions can be obtained.

Hoffman et al. (2017) also studied β-VAE more deeply, considering β < 1. They argued that optimizing this partially regularized ELBO is equivalent to performing variational expectation maximization (EM) with an implicit prior r(z) (r(z) ∝ qφ(z)^(1−β) p(z)^β) that depends on the marginal (aggregate) posterior qφ(z) := (1/N) ∑_{i=1}^{N} q(z|x^(i)), and further derived some approximations to examine this prior.
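For reference, a minimal sketch of the objective in Equation (24), assuming PyTorch, a diagonal-Gaussian encoder, and (as an illustrative assumption) a squared-error reconstruction term; the function name and interface are hypothetical:

import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta, C):
    # Reconstruction term (negative log-likelihood up to constants),
    # averaged over the batch.
    recon = F.mse_loss(x_recon, x, reduction='sum') / x.size(0)
    # Closed-form KL(N(mu, diag(sigma^2)) || N(0, I)), summed over latent
    # dimensions and averaged over the batch; mu, logvar: (batch, latent_dim).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
    # Equation (24), as a loss to minimize: beta penalizes deviation
    # of the KL from the target capacity C.
    return recon + beta * torch.abs(kl - C)

Following Burgess et al. (2018), C would be gradually increased from zero over the course of training.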
Mathieu et al. (2019) explained that most recent work on learning disentangled representations of data with deep generative models has focused on capturing purely independent factors of variation by employing regularizers that explicitly encourage independence in the representations. They argued that such an approach is not generalisable and is quite restrictive for complex models where the true generative factors are not independent, are very large in number, or where a set of true generative factors cannot be well defined. An overly large β is not universally beneficial for disentanglement, since it in turn causes a mismatch between the marginal posterior qφ(z) and the prior p(z). Thus they proposed a generalization of disentanglement in the VAE by explicitly separating such a decomposition into two tasks: a) the latent encoding of the data should achieve an appropriate level of non-negligible overlap in the aggregate encoding qφ(z), and b) the aggregate encoding qφ(z) should match the prior p(z), which encodes the desired dependency structure between latent variables. By improving the match between qφ(z) and p(z), the overlap is kept at an appropriate level. Mathieu et al. (2019) developed a new objective that incorporates both a) and b) by introducing an additional divergence term D(qφ(z), p(z)). The objective of this improved β-VAE is defined as follows:

Lα,β(x) = Eqφ(z|x)[log pθ(x|z)] − βKL(qφ(z|x)||p(z)) − αD(qφ(z), p(z)). (25)

By appropriately setting β and α, this allows direct control over the level of overlap and over the regularization between the marginal posterior and the prior. However, a practical challenge of this method is how to define a proper structured prior when the structure of the real hidden factors is poorly known. For this reason, our computational experiments in this paper are based on Burgess et al. (2018)'s β-VAE.

Instead of employing the regularization controller β to disentangle the independent data factors by putting implicit independence pressure on the representation, Hamaguchi et al. (2018) presented a new method to learn disentangled representations by introducing two additional loss functions, Lsim (a similarity loss) and Lact (an activation loss). This technique aims to detect trivial events in an image resulting from environmental changes (such as illumination changes, background motions and shadows) by disentangling each image into two kinds of features, specific and common. Lsim constrains the common features to represent invariant factors between two paired images; Lact encourages activation of the common features to avoid a trivial solution. After the common features of paired images are separated, the means of the common features from the two images are fed into an event detector (a classifier) for training. As mentioned in Hamaguchi et al. (2018), this method cannot achieve good disentanglement when dealing with complicated scenes, since the activations of the units in the common features degenerate to a certain value." }, { "heading": "C T-SNE FOR MANIFOLD LEARNING", "text": "The t-distributed stochastic neighbor embedding (t-SNE), introduced by van der Maaten & Hinton (2008), is a popular technique for nonlinear dimensionality reduction that is particularly well suited for visualizing the similarity of high-dimensional data that lie on several different, but related, low-dimensional manifolds.
It is an improvement of stochastic neighbor embedding (SNE), presented by Hinton & Roweis (2002), with easier optimization and better visualization, achieved by replacing the SNE cost function with a symmetrized version and by using a Student-t distribution instead of a Gaussian to compute the similarity between two points in the low-dimensional space.

C.1 STOCHASTIC NEIGHBOR EMBEDDING (SNE)

SNE measures the similarities between points (in both the original high-dimensional space and the mapped low-dimensional space) by converting Euclidean distances between data points into conditional probabilities under Gaussian distributions. In the original high-dimensional space, for example, the similarity of a pair of data points xi and xj is represented by pj|i under a Gaussian centered at xi. A high pj|i means that xi and xj are close to each other, and vice versa. pj|i is defined as:

pj|i = exp(−||xi − xj||² / 2σi²) / ∑_{k≠i} exp(−||xi − xk||² / 2σi²), (26)

where σi is the variance of the Gaussian centered on data point xi. It is influenced by the perplexity Perp(Pi), which is defined as

Perp(Pi) = 2^{H(Pi)}, (27)

where Pi corresponds to a particular σi and H(Pi) is the Shannon entropy of Pi measured in bits.

For the low-dimensional counterparts yi and yj of the high-dimensional data points xi and xj, the similarity of these two points is denoted qj|i and similarly defined as:

qj|i = exp(−||yi − yj||²) / ∑_{k≠i} exp(−||yi − yk||²), (28)

where the variance of the Gaussian is set to 1/√2. Since the focus is on modeling pairwise similarities, both pi|i and qi|i are set to zero.

C.2 COST FUNCTION OF T-SNE

The Kullback-Leibler divergence is used to measure the difference between the low-dimensional representation distribution qj|i and the high-dimensional data distribution pj|i. The cost function of SNE, the sum of Kullback-Leibler divergences over all data points, to be optimized by a gradient descent method, is given by

C = ∑_i DKL(Pi||Qi) = ∑_i ∑_j pj|i log(pj|i / qj|i). (29)

Aiming to alleviate the difficult optimization of the cost function above and the “crowding problem” (compared with nearby data points, moderately distant data points cannot occupy a reasonably large area in the low-dimensional map), van der Maaten & Hinton (2008) introduced symmetric SNE and employed a Student t-distribution with one degree of freedom (which is the same as a Cauchy distribution) as the heavy-tailed distribution in the low-dimensional map. The cost function is redefined as:

C = DKL(P||Q) = ∑_i ∑_j pij log(pij / qij), (30)

where, again, pii and qii are set to zero, and

pij = exp(−||xi − xj||² / 2σ²) / ∑_{k≠l} exp(−||xk − xl||² / 2σ²), (31)
qij = (1 + ||yi − yj||²)^{−1} / ∑_{k≠l} (1 + ||yk − yl||²)^{−1}. (32)

The gradient of the Kullback-Leibler divergence between P and the Student-t based joint probability distribution Q is given by

∂C/∂yi = 4 ∑_j (pij − qij)(yi − yj)(1 + ||yi − yj||²)^{−1}. (33)" }, { "heading": "D TEMPORAL CONVOLUTIONAL NETWORKS", "text": "Bai et al. (2018) employed a basic architecture which is essentially the same as the time delay neural network proposed by Waibel et al. (1990), ensuring outputs of the same length as the inputs and no leakage from the future into the past:

TCN = 1D FCN + causal convolutions.

However, since simple causal convolutions are not able to achieve a long effective history size, Bai et al. (2018) employed dilated causal convolutions (the same architecture as WaveNet (van den Oord et al., 2016); see Figure 7a), which enable an exponentially large receptive field.
The dilated convolution operation F on element s of the sequence is defined as:

F(s) = (x ∗d f)(s) = ∑_{i=0}^{k−1} f(i) · x_{s−d·i}, (34)

where x ∈ Rⁿ is a 1-D sequence input, f : {0, . . . , k − 1} → R is a filter, d is the dilation factor, k is the filter size, and s − d·i accounts for the direction of the past. When d = 1, a dilated convolution reduces to a regular convolution. By choosing larger filter sizes k and increasing the dilation factor d, the receptive field of a TCN can be enlarged.

Bai et al. (2018) also utilized a generic residual module (He et al., 2015) in place of a convolutional layer. A residual block is defined as:

o = Activation(x + F(x)).

The outputs of a series of transformations F are added to the input x of the block. To tackle the discrepancy between input and output widths in a standard residual block, an additional 1 × 1 convolution is used to ensure that the element-wise addition ⊕ receives tensors of the same shape (see Figure 7b). Bai et al. (2018) further discussed the advantages of TCNs (including parallelism, flexible receptive field size, stable gradients, low memory requirements for training and variable-length inputs) and their disadvantages (including possibly high memory requirements for evaluation and potential parameter changes for domain transfer)." }, { "heading": "E THE FULL ANODM ALGORITHM", "text": "Algorithm 1: AnoDM Algorithm Result: Anomaly scores of test samples Inputs: Xtr: training samples, Xte: test samples, β > 0: hyperparameter for β-VAE

1 while epoch no more than training iterations do
2 Encoder net maps Xtr into µtr and σtr;
3 Ztr = µtr + σtr ⊙ ε, ε ∼ N(0, I);
4 Decoder net reconstructs Xtr to X′tr using Ztr;
5 Update β-VAE's parameters θ and φ;
6 end
7 For Xtr and Xte, obtain µtr and µte respectively using the trained β-VAE;
8 Use t-SNE to map µtr and µte to 2D representations ltr and lte;
9 for x(i)te within Xte do
10 DRE(x(i)te) := NSE(x(i)te, x′(i)te) = ‖x(i)te − x′(i)te‖₂² / ‖x(i)te‖² ; // reconstruction error
11 DktSNE(x(i)te) := (1/k) ∑ j∈N(i,k) ‖l(i)te − l(j)tr‖₂ ; // N(i, k) is the set of indices of l(i)te's k nearest neighbors from ltr
12 SβVAE+tSNE(x(i)te) = αDRE + (1 − α)DktSNE ; // α ∈ [0, 1]
13 end" }, { "heading": "F ANOMALY SCORES WITH NORMALIZED K-NN DISTANCE IN T-SNE MAPS", "text": "Since the normalised reconstruction error in the input space and the k-NN distance in the t-SNE map may have very different magnitudes (as mentioned in Section 3.5), the selected values of α in Equation (2) are close to (but not exactly equal to) 1. The reader may therefore have the intuition that t-SNE is not useful in AnoDM. A simple way to find out whether the large α values are due to the magnitude difference or to t-SNE being useless is to replace the k-NN distance of a test sample in the t-SNE map (Equation (4)) with a k-NN distance normalized by the average distance among the training samples in the t-SNE map, as defined below:

NktSNE(x(i)te) := DktSNE(x(i)te) / (c · DtSNE(ltr)), (35)

DtSNE(ltr) := (1/n) ∑ i,j∈{1,2,...,n} ‖l(i)tr − l(j)tr‖₂, (36)

where c > 0 is a normalization hyperparameter (we set it to 0.5 in our experiments), l(i)tr is the 2D representation of the i-th training sample in the t-SNE map, and n is the total number of training samples in the t-SNE map. The heatmaps in Figure 8 depict the performance of AnoDM on Fashion-MNIST (the “Dress” class is considered the anomaly) and Arrhythmia (the “F” class is treated as the anomaly) using the normalized k-NN distance in t-SNE maps in combination with the reconstruction error. By comparing Figure 8 with Figure 2, one can see that the optimal values of α shift from the upper right corner area to the upper left corner area, implying that the optimal value of α is affected by the magnitude difference and that t-SNE indeed plays an essential role in AnoDM.
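To tie Algorithm 1 and Equations (2)-(4) together, the following is a minimal sketch of the anomaly scorer, assuming scikit-learn and NumPy; mu_train and mu_test denote the deterministic encoder means, and all function and variable names are illustrative rather than the paper's implementation:

import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

def anodm_scores(mu_train, mu_test, x_test, x_recon_test, alpha=0.9, k=1):
    # Reconstruction term, Equation (3): NSE(x, x') = ||x - x'||_2^2 / ||x||^2.
    d_re = ((x_test - x_recon_test) ** 2).sum(axis=1) / (x_test ** 2).sum(axis=1)
    # Embed training and test latent means together, since t-SNE has no
    # out-of-sample transform.
    emb = TSNE(n_components=2, perplexity=30, learning_rate=200).fit_transform(
        np.vstack([mu_train, mu_test]))
    l_tr, l_te = emb[:len(mu_train)], emb[len(mu_train):]
    # Average k-NN distance in the t-SNE map, Equation (4).
    dist, _ = NearestNeighbors(n_neighbors=k).fit(l_tr).kneighbors(l_te)
    d_tsne = dist.mean(axis=1)
    # Final score, Equation (2).
    return alpha * d_re + (1 - alpha) * d_tsne

The normalized variant of Equation (35) can be obtained by dividing d_tsne by c times the average pairwise distance among the rows of l_tr before combining the two terms.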
}, { "heading": "G DESCRIPTIONS OF DATA SETS", "text": "• MNIST: It contains a training set of 60000 grayscale digit images of 28 × 28 and a test set of 10000 grayscale examples of the same resolution, from approximately 500 different writers (LeCun et al., 1998).

• Fashion-MNIST: It is a dataset of Zalando's article images, comprising 70000 MNIST-like labeled fashion images of 28 × 28, with 7000 images per category (Xiao et al., 2017). The training set has 60000 images and the test set has 10000 images. The samples come from 10 classes: T-shirt/top, trouser, pullover, dress, coat, sandal, shirt, sneaker, bag, and ankle boot.

• Small-Norb: It contains 24300 96 × 96 grayscale image pairs of 50 toys belonging to 5 generic categories: four-legged animals, human figures, airplanes, trucks, and cars (LeCun et al., 2004). The objects were imaged by two cameras under 6 lighting conditions, 9 elevations, and 18 azimuths. As in (Sabour et al., 2017), the images were resized to 48×48; random 32×32 crops of them were taken during the training process. Central 32×32 patches of the test images were used during testing.

• CIFAR-10: It consists of 60000 32 × 32 colour images in 10 classes, with 6000 images per class. There are 10000 test images, which include exactly 1000 randomly selected images from each class, and the 50000 training images (with 5000 images from each class) are randomly grouped into 5 batches (Krizhevsky, 2009).

• Arrhythmia: It is derived from Physionet's MIT-BIH Arrhythmia Dataset, which consists of ECG recordings from 47 different subjects recorded at a sampling rate of 360 Hz. This dataset is organized into five different heartbeat categories: “N”, “S”, “V”, “F” and “Q”, in accordance with the Association for the Advancement of Medical Instrumentation (AAMI) EC57 standard (Kachuee et al., 2018). The meanings of these categories are explained below.

1. “N”: normal, left or right bundle branch block, atrial escape and nodal escape.

2. “S”: atrial premature, aberrant atrial premature, nodal premature and supra-ventricular premature.

3. “V”: premature ventricular contraction and ventricular escape.

4. “F”: fusion of ventricular and normal.

5. “Q”: paced, fusion of paced and normal and unclassifiable." } ]
2019
null
SP:0ae436583d8ace9acd5810d146933893f229ba9b
[ "The authors proposed a new gradient-based architecture search method that tries to find more efficient alternatives starting from the pre-trained model. The approach is similar to DARTS (Liu et al., 2019) with a budget constraint, such as size and throughput. One major difference is to modify the update of architectural parameters, i.e., mixing weights of all candidate operations, to induce the sparsity rather than to keep the weighted sum of all possible operations. Another difference is that it starts from a well-defined architecture with pre-trained weights. It is simple to apply, but hard to tell. The recognized strengths and concerns are as follows.", "This paper proposes a new architecture search method called \"DARC\" that utilizes a differentiable objective function. Since a naive formulation of architecture search is reduced to a combinatorial optimization which is not differentiable, the optimization requires much computational cost. To overcome this difficulty, this paper proposes a L1-norm relaxation and apply such relation in a layer-wise manner. The method shares a similar spirit with NAS, but the proposed model is more like \"model selection\" from a fixed candidates, and thus there is a Rademacher complexity guarantee. The effectiveness of DARC is justified by thorough numerical experiments." ]
In many learning situations, resources at inference time are much more constrained than resources at training time. This paper studies a general paradigm, called Differentiable ARchitecture Compression (DARC), that combines model compression and architecture search to learn models that are resource-efficient at inference time. Given a resource-intensive base architecture, DARC utilizes the training data to learn which sub-components can be replaced by cheaper alternatives. The high-level technique can be applied to any neural architecture, and we report experiments on state-of-the-art convolutional neural networks for image classification. For a WideResNet with 97.2% accuracy on CIFAR-10, we improve single-sample inference speed by 2.28× and memory footprint by 5.64×, with no accuracy loss. For a ResNet with 79.15% Top1 accuracy on ImageNet, we improve batch inference speed by 1.29× and memory footprint by 3.57× with 1% accuracy loss. We also give theoretical Rademacher complexity bounds in simplified cases, showing how DARC avoids overfitting despite over-parameterization.
[]
[ { "authors": [ "Hessam Bagherinezhad", "Mohammad Rastegari", "Ali Farhadi" ], "title": "Lcnn: Lookup-based convolutional neural network", "venue": "In Proc. IEEE CVPR,", "year": 2017 }, { "authors": [ "Peter L. Bartlett", "Shahar Mendelson" ], "title": "Rademacher and gaussian complexities: Risk bounds and structural results", "venue": "J. Mach. Learn. Res.,", "year": 2003 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "Proxylessnas: Direct neural architecture search on target task and hardware", "venue": "arXiv preprint arXiv:1812.00332,", "year": 2018 }, { "authors": [ "Yu Cheng", "Duo Wang", "Pan Zhou", "Tao Zhang" ], "title": "Model compression and acceleration for deep neural networks: The principles, progress, and challenges", "venue": "IEEE Signal Processing Magazine,", "year": 2018 }, { "authors": [ "Corinna Cortes", "Mehryar Mohri", "Umar Syed" ], "title": "Deep boosting", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Yunchao Gong", "Liu Liu", "Ming Yang", "Lubomir Bourdev" ], "title": "Compressing deep convolutional networks using vector quantization", "venue": "arXiv preprint arXiv:1412.6115,", "year": 2014 }, { "authors": [ "Ariel Gordon", "Elad Eban", "Ofir Nachum", "Bo Chen", "Hao Wu", "Tien-Ju Yang", "Edward Choi" ], "title": "Morphnet: Fast & simple resource-constrained structure learning of deep networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "arXiv preprint arXiv:1510.00149,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Yihui He", "Xiangyu Zhang", "Jian Sun" ], "title": "Channel pruning for accelerating very deep neural networks", "venue": "In International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Max Jaderberg", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Speeding up convolutional neural networks with low rank expansions", "venue": "In Proceedings of the British Machine Vision Conference. BMVA Press,", "year": 2014 }, { "authors": [ "Kirthevasan Kandasamy", "Willie Neiswanger", "Jeff Schneider", "Barnabas Poczos", "Eric Xing" ], "title": "Neural architecture search with bayesian optimisation and optimal transport", "venue": "arXiv preprint arXiv:1802.07191,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Geoff Hinton" ], "title": "Convolutional deep belief networks on cifar-10", "venue": "Unpublished manuscript,", "year": 2010 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Anastasios Kyrillidis", "Stephen Becker", "Volkan Cevher", "Christoph Koch" ], "title": "Sparse projections onto the simplex", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Yann LeCun", "John S Denker", "Sara A Solla" ], "title": "Optimal brain damage", "venue": "In Advances in neural information processing systems,", "year": 1990 }, { "authors": [ "Hao Li", "Asim Kadav", "Igor Durdanovic", "Hanan Samet", "Hans Peter Graf" ], "title": "Pruning filters for efficient convnets", "venue": "arXiv preprint arXiv:1608.08710,", "year": 2016 }, { "authors": [ "Ping Li", "Syama Sundar Rangapuram", "Martin Slawski" ], "title": "Methods for sparse and low-rank recovery under simplex constraints", "venue": "arXiv preprint arXiv:1605.00507,", "year": 2016 }, { "authors": [ "Darryl Lin", "Sachin Talathi", "Sreekanth Annapureddy" ], "title": "Fixed point quantization of deep convolutional networks", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Chenxi Liu", "Barret Zoph", "Maxim Neumann", "Jonathon Shlens", "Wei Hua", "Li-Jia Li", "Li Fei-Fei", "Alan Yuille", "Jonathan Huang", "Kevin Murphy" ], "title": "Progressive neural architecture search", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Jian-Hao Luo", "Jianxin Wu" ], "title": "An entropy-based pruning method for cnn compression", "venue": "arXiv preprint arXiv:1706.05791,", "year": 2017 }, { "authors": [ "Jian-Hao Luo", "Jianxin Wu", "Weiyao Lin" ], "title": "Thinet: A filter level pruning method for deep neural network compression", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Hieu Pham", "Melody Y Guan", "Barret Zoph", "Quoc V Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "arXiv preprint arXiv:1802.03268,", "year": 2018 }, { "authors": [ "Mert Pilanci", "Laurent E Ghaoui", "Venkat Chandrasekaran" ], "title": "Recovery of sparse probability measures via convex programming", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Adam Polyak", "Lior Wolf" ], "title": "Channel-level acceleration of deep face representations", "venue": "IEEE Access,", "year": 2015 }, { "authors": [ "Adriana Romero", "Nicolas Ballas", "Samira Ebrahimi Kahou", "Antoine Chassang", "Carlo Gatta", "Yoshua Bengio" ], "title": "Fitnets: Hints for thin deep nets", "venue": "arXiv preprint arXiv:1412.6550,", "year": 2014 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Mingxing Tan", "Bo Chen", "Ruoming Pang", "Vijay Vasudevan", "Mark Sandler", "Andrew Howard", 
"Quoc V Le" ], "title": "MnasNet: Platform-aware neural architecture search for mobile", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Bichen Wu", "Alvin Wan", "Xiangyu Yue", "Peter Jin", "Sicheng Zhao", "Noah Golmant", "Amir Gholaminejad", "Joseph Gonzalez", "Kurt Keutzer" ], "title": "Shift: A zero flop, zero parameter alternative to spatial convolutions", "venue": "arXiv preprint arXiv:1711.08141,", "year": 2017 }, { "authors": [ "Bichen Wu" ], "title": "FBnet: Hardware-aware efficient convnet design via differentiable neural architecture search", "venue": null, "year": 2019 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "arXiv preprint arXiv:1710.09412,", "year": 2017 }, { "authors": [ "Xiangyu Zhang", "Jianhua Zou", "Kaiming He", "Jian Sun" ], "title": "Accelerating very deep convolutional networks for classification and detection", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2016 }, { "authors": [ "Aojun Zhou", "Anbang Yao", "Yiwen Guo", "Lin Xu", "Yurong Chen" ], "title": "Incremental network quantization: Towards lossless cnns with low-precision weights", "venue": "arXiv preprint arXiv:1702.03044,", "year": 2017 }, { "authors": [ "Zhuangwei Zhuang", "Mingkui Tan", "Bohan Zhuang", "Jing Liu", "Yong Guo", "Qingyao Wu", "Junzhou Huang", "Jinhui Zhu" ], "title": "Discrimination-aware channel pruning for deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "arXiv preprint arXiv:1611.01578,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "In machine learning, resources at inference time are often much more constrained than at training time. For example, while neural networks for computer vision and natural language processing (NLP) are routinely trained using GPUs, trained networks are often deployed on embedded systems or mobile devices with limited memory and computational power. As another example, it is common to train a model that will be applied continuously in production; while training occurs for a limited time, the machine performing inference may run indefinitely, and so learning a more efficient model can directly reduce costs associated with hardware or energy usage. As a result, many recent papers have studied deep model compression and acceleration. Most of these papers provide resource efficient model components (Jaderberg et al., 2014; Zhang et al., 2016; Wu et al., 2017) or methods to prune or quantize parameters (LeCun et al., 1990; Polyak & Wolf, 2015; Li et al., 2016a; He et al., 2017; Luo & Wu, 2017; Luo et al., 2017; Zhuang et al., 2018).\nWe propose a general paradigm called Differentiable ARchitecture Compression (DARC) for learning in the context of constrained resources at inference-time. Rather then suggesting a specific cheap component and using it blindly throughout a neural network, or trying to tune layer hyperparameters as in network pruning or quantization, our approach is inspired by Neural Architecture Search (NAS) (Zoph & Le, 2016; Liu et al., 2018a; Pham et al., 2018; Kandasamy et al., 2018; Liu et al., 2018b). DARC starts with a resource-intensive network design and uses training data to learn which components can be replaced with efficient alternatives, while maintaining model output quality. The resource requirement of the final model is controlled via a regularization term, which can be flexibly defined depending on the objective, such as minimizing inference time or memory footprint.\nDARC has a clear intuitive advantage when compared to methods that quantize or prune parameters; these approaches are inherently restricted in their search space. They cannot replace a layer with a structurally different layer, or replace two or more layers with a shallow alternative. As examples, it might be the case that a convolutional layer cannot be pruned without hurting performance but can be replaced with a depthwise-separable convolution, or an LSTM layer might not be amenable to weight pruning, but could be replaced with a more efficient self-attention layer. DARC, applied to deep networks, offers a way to search a rich space in the context of model compression.\nThe high-level idea is to partition the network into components, and explore alternatives to these components simultaneously. Replacing all components simultaneously is crucial, as it provides a data-driven way to decide which components can be replaced with cheap alternatives. Indeed, replacing layers blindly may sacrifice too much prediction performance, as can be seen in Mo-\nbileNets (Howard et al., 2017; Sandler et al., 2018), which exclusively use depthwise-separable convolutions; although computationally efficient, even the most accurate MobileNet model is far less accurate at ImageNet classification than, say, ResNet50.\nWe frame the problem of learning good replacement components as a sparse ensemble learning problem. 
This view allows us to draw guidelines from simpler analyzable cases, leading to a simple, gradient-based learning scheme that avoids over-fitting and is fast enough to apply DARC directly to large datasets such as ImageNet. This contrasts with most NAS methods, which first learn an architecture on a small dataset and then fit this architecture on the larger dataset.
We present experiments on networks commonly used in computer vision. By applying our techniques to the ResNet architecture (He et al., 2016), we outperform state-of-the-art models on both the ImageNet and CIFAR-10 datasets, in terms of accuracy vs. throughput and accuracy vs. model size. A few results from our compression framework: for a WideResNet model achieving 97.2% accuracy on CIFAR-10, we improve single-sample inference speed by 2.28× and memory footprint by 5.64×, with no loss in accuracy. For a modified ResNet50 model with 79.15% Top1 accuracy on ImageNet, we improve inference speed by 1.29× and memory footprint by 3.57× with 1% loss in accuracy. Both base models are publicly available from the GluonCV Model Zoo (Mod, 2018). Our experiments empirically demonstrate an intuitive observation that ‘you get what you optimize for’, in that models minimizing model size tend to be quite different from those maximizing throughput.
We note that, while our experiments are limited to image classification, DARC is applicable to any deep learning architecture, including models with recurrent cells or transformers, or indeed any sufficiently modular learning algorithm, as described in Section 2, and any task with a well-defined objective function in which we would like to reduce inference costs. We see this work as a proof-of-concept for the capabilities of the DARC framework, and the results in this paper give a strong indication that DARC can be applied to NLP architectures, or optimized for metrics other than those we try here (e.g. for latency on devices other than GPU, or energy consumption)." }, { "heading": "2 GENERAL SETTING AND DARC ESTIMATOR", "text": "The intuition and motivation for our method start with the task of model selection. Given a task and J function families corresponding to candidate models, we are interested in finding the best model type for the task. We relax this combinatorial optimization problem to a more tractable differentiable optimization problem (whence the name “differentiable architecture compression”), by allowing convex combinations of these candidates. This can be thought of as a constrained form of ensemble learning in which weights are restricted to represent a convex combination of individual learners.
Then, we posit a budget constraint: each model type has an associated cost (e.g., memory consumption or latency), and the overall cost of the ensemble is the sum of costs of the used models. Our task is then to learn a convex ensemble over a subset of candidates, with total cost within budget. We now formalize this approach, with modifications to address technical challenges as they arise.
In the sequel, for any positive integer J, [J] = {1, 2, ..., J} denotes the set of positive integers at most J, and ∆J := { x ∈ [0, 1]^J : ∑ j∈[J] xj = 1 } denotes the probability simplex over J elements.
Consider the conventional supervised learning setting, in which we have an i.i.d. training dataset (X1, Y1), ..., (Xn, Yn) IID∼ P(X,Y) from some joint distribution P(X,Y) on X × Y. Fix a loss function L : R × Y → [0,∞], and a hypothesis class H of R-valued functions.
We would like to learn a function h : X → R, h ∈ H, that minimizes the risk R(h) := E(X,Y)∼P(X,Y)[L(h(X), Y)]. The usual empirical risk minimization (ERM) estimator is ĥERM := arg min_{h∈H} R̂(h) where, for any hypothesis h ∈ H, R̂(h) := (1/n) ∑_{i=1}^{n} L(h(Xi), Yi) denotes the empirical risk. To derive our resource-constrained objective, we impose a few structural assumptions on our hypothesis class H: (A1) H = Conv ⋃ j∈[J] Hj is the convex hull of a union of J classes H1, ..., HJ.
(A2) Each class Hj has a known cost Cj ≥ 0 of using a hypothesis hj ∈ Hj at test time. (A3) Costs are additive: hypothesis h = ∑_{j=1}^{J} αj hj ∈ H has cost Cℓ0(α) = ∑_{j=1}^{J} Cj 1{αj > 0}. (A4) We have a known budget B ≥ 0 for the final model at test time.

As we show in Section 3, these assumptions arise naturally in architecture compression. Given Assumptions (A1)-(A4), the constrained ERM estimate is ĝ = ∑ j∈[J] α̂j ĥj, where

(α̂, ĥ1, ..., ĥJ) := arg min_{α∈∆J, hj∈Hj} R̂(∑ j∈[J] αj hj), subject to Cℓ0(α) ≤ B. (1)

The above estimator is difficult (NP-hard) to compute, due to the non-smooth, non-convex budget constraint Cℓ0(α) ≤ B. Since this constraint bounds the ℓ0 norm of α (weighted by C), the usual remedy would be to relax the constraint to one on the ℓ1 norm of α (weighted by C), namely Cℓ1(α) := ∑ j∈[J] Cj αj ≤ B. Unfortunately, due to the constraint that α lies in the probability simplex ∆J (which implies ∑ j∈[J] αj = 1), the ℓ1 constraint is insufficient to induce sparsity on α. Fortunately, sparse optimization on ∆J is well-studied, with many solutions proposed (Pilanci et al., 2012; Kyrillidis et al., 2013; Li et al., 2016b). Due to ease of implementation, we adopt a simple but effective solution proposed by (Kyrillidis et al., 2013), which involves alternating gradient updates with a projection operation P∆J : R^J \ R^J₋ → ∆J, given by P∆J(α) = α₊ / ‖α₊‖₁, where α₊ = (max{0, α1}, ..., max{0, αJ}) ∈ R^J₊.

P∆J is easy to compute, enforces the simplex constraint α ∈ ∆J exactly, and induces sparsity on α. For an intuition of how this works, one can note that ∇‖α‖₂²/2 = α, so that the update α/‖α‖₁ = α − (1 − 1/‖α‖₁)α can be viewed as a gradient step for minimizing −‖α‖₂²/2 with adaptive step size (1 − 1/‖α‖₁). As a technicality, we note that the projection P∆J(α) is undefined when α ∈ R^J₋ has no positive components. However, for realistic gradient step sizes η, this never occurs, since, after each gradient update, ∑ j∈[J] αj ≥ 1 − O(η).

Finally, a natural initial point for our procedure is one where Cℓ1(α) > B, hence we re-express the constraint Cℓ1(α) ≤ B as a penalty λCℓ1(α). Since the value of λ corresponding to B is not known a priori, we iteratively increase λ until the solution of the optimization problem satisfies the budget constraint. The resulting DARC procedure is shown in Algorithm 1. We note that the “stopping criterion” for the inner loop can be as simple as a fixed number of training epochs (as in our experiments), or a more sophisticated early-stopping criterion.

Algorithm 1: DARC algorithm for general hypotheses Data: Training Data {(Xi, Yi)}_{i=1}^{n}, J candidates h1,w1, ..., hJ,wJ with initial parameters w1, ..., wJ and costs C1, ..., CJ ≥ 0, initial cost penalty parameter λ0 > 0, budget B.
Result: α, w_1, ..., w_J such that h = Σ_{j∈[J]} α_j h_{j,w_j} has small risk R(h) and cost C_{ℓ0}(α) ≤ B\n1: α ← (1/J, ..., 1/J), λ ← λ_0\n2: while C_{ℓ0}(α) > B do\n3:   while stopping criterion is not met do\n4:     $(\alpha, w_1, \dots, w_J) \leftarrow (\alpha, w_1, \dots, w_J) - \eta \nabla_{\alpha, w_1, \dots, w_J}\big(\hat{R}(\sum_{j \in [J]} \alpha_j h_{j,w_j}) + \lambda C_{\ell_1}(\alpha)\big)$\n5:     α ← P_{∆^J}(α)\n6:   end while\n7:   λ ← 2λ\n8: end while" }, { "heading": "3 APPLYING DARC TO DEEP NETWORKS", "text": "DARC can be applied in a myriad of ways to compress deep neural networks. In all of these ways, the basic premise is to intelligently replace components of the network with cheaper components.\nConsider a Neural Network (NN) with L layers. For layer ℓ, let W_ℓ be the parameters of the layer, and g_ℓ be the function mapping inputs and parameters to the output (in layers having no parameters, W can be an empty token). For example, for a fully connected layer, W is a matrix, the input x is a vector, and g is the matrix-vector multiplication function. We can write the NN as a function:\n$$f(x) = g_L(W_L, g_{L-1}(W_{L-1}, \cdots g_1(W_1, x) \cdots)), \qquad (2)$$\nTo apply DARC, we consider a set of replacement candidates (g_{ℓ,2}, W_{ℓ,2}), ..., (g_{ℓ,J_ℓ}, W_{ℓ,J_ℓ}) for each layer ℓ (with g_{ℓ,1}, W_{ℓ,1} denoting the original function and weight of the layer). For each candidate j in layer ℓ, DARC takes as input an associated cost C_{ℓ,j} ≥ 0. Examples of such costs include parameter count, FLOPs, or latency, which are usually easy to calculate or estimate experimentally. Applying DARC to neural network compression then involves four main steps:\n1. Layerwise Continuous Relaxation: First, we replace each g_ℓ, W_ℓ with a weighted average g̃_ℓ(W̃_ℓ, α_ℓ, x) = Σ_{j=1}^{J_ℓ} α_{ℓ,j} g_{ℓ,j}(W_{ℓ,j}, x), where α_ℓ ∈ ∆^{J_ℓ}. The original network is replaced by f̃(x) = g̃_L(W̃_L, α_L, ··· g̃_1(W̃_1, α_1, x) ···).\n2. DARC Model Initialization: Before training the DARC model, we need to initialize the α weights and the parameters of the compression candidates. We initialized the α parameters as uniform vectors α_ℓ = (1/J_ℓ, 1/J_ℓ, ..., 1/J_ℓ). The other option we considered was to put all weight on the original candidate (α_ℓ = (1, 0, ..., 0)), so that the initial model was equivalent to the original model being compressed. However, this makes the gradient of the loss 0 with respect to all parameters of the compression candidates, preventing these from training. Furthermore, the non-convex regularization discourages the weights of α_ℓ from shifting towards a value that makes use of the compression candidates. As for candidate parameters, we initialized each compression candidate to mimic the original layer, which we know gives good prediction results. In some cases, this can be done analytically (e.g. via PCA for lower-dimensional fully-connected layers); more generally, this can be done via SGD, training the new candidate to minimize squared loss between its outputs and those of the original layer g_{ℓ,1}(W_{ℓ,1}, x). Since this is only for initialization, it suffices to use a small training sample and a crude optimization procedure.\n3. Training the Relaxed Model: We minimize the empirical risk, simultaneously over the mixture weights (αs) and the candidate weights (W̃s) as described in Algorithm 1.\n4. Selecting a Sub-Model: As discussed above, for sufficiently large λ, Algorithm 1 converges to a solution with small (weighted) ℓ0 norm; i.e., α_ℓ will have a small number of non-zero entries (a short sketch of this sparsifying update follows below).
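To make this concrete, the following is a minimal NumPy sketch (our own illustration, not the paper's MXNet implementation; function names and toy numbers are ours) of the projection P_{∆^J} and one penalized gradient step on α from Algorithm 1:

```python
import numpy as np

def project_simplex(alpha):
    """P_{Delta^J}: clip negatives, then renormalize (Kyrillidis et al., 2013).
    Entries driven negative by the penalty become exactly 0, which prunes them."""
    alpha_plus = np.maximum(alpha, 0.0)
    total = alpha_plus.sum()
    assert total > 0.0, "projection undefined when no component is positive"
    return alpha_plus / total

def alpha_step(alpha, grad_risk, costs, lam, eta):
    """One update on alpha for the objective R_hat + lam * C_l1(alpha).
    grad_risk is dR_hat/dalpha; the penalty gradient is simply lam * costs."""
    return project_simplex(alpha - eta * (grad_risk + lam * costs))

# Toy run: with no risk signal, the expensive candidate's weight hits exactly 0.
alpha = np.full(3, 1.0 / 3.0)
costs = np.array([4.0, 1.0, 1.0])  # candidate 0 plays the costly original layer
for _ in range(50):
    alpha = alpha_step(alpha, np.zeros(3), costs, lam=0.5, eta=0.1)
print(alpha)  # -> [0.  0.5 0.5]
```

In the full algorithm, `grad_risk` would come from backpropagation through the relaxed network, and the same penalized step is applied jointly to the candidate weights.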
Thus, we remove candidate g`,j (and its weight α`,j) from the network if α`,j = 0.\nDuring optimization, we jointly optimize α and the model parameter on the same data, contrasting from other gradient based NAS approaches (Liu et al., 2018b) that split data into two training sets, optimize model parameters on one and α weights on the other. In Section 4 we analyze the Rademacher complexity of our procedure in a simple setting and show that under the condition that the original model class defined by g`,1 is richer than the alternatives, optimizing all parameters jointly does not hurt generalization guarantees when compared to the original optimization objective where J` = 1. Unlike for NAS, this condition holds naturally for model compression.\nEfficient Approximate Convolutions Computation in most deep networks used in computer vision problems, such as image classification, image segmentation, and object detection, is dominated by convolutional layers. This has motivated several papers on efficient approximations to convolution, such as depthwise-separable convolution (Jaderberg et al., 2014; Zhang et al., 2016; Howard et al., 2017), bottleneck convolution (Sandler et al., 2018), and shifts (Wu et al., 2017).\nIn a standard convolution layer we have k × k filters for every input and output channel. Denoting the output channels by Yi and the input channels by Xj , the i’th output channel is defined as Yi = ∑ j Xj ∗ Fi,j . Here Fi,j is the appropriate filter and ∗ is the convolution operator. Restricting the discussion to the setting where the number of output and input channels are the same, a fully-grouped convolution is a more constrained alternative in which the filter is k × k but each output is computed based on a single input channel. A depthwise-separable convolution consists of a full-grouped convolution followed by a standard 1 × 1 convolution. In most setting this operation requires less compute and memory resources. A shift layer is an even cheaper alternative to depthwise-separable where the Fi’s are fixed and have only a single non-zero element, resulting in computational complexity equivalent to a single 1 × 1 convolution. We use DARC to compress CNNs by considering alternatives from among the above options, for each convolution layer." }, { "heading": "4 THEORETICAL RESULTS", "text": "Here, we discuss generalization power models learned by DARC. We restrict our attention to the simple case of learning an ensemble of models; as described below, the result has implications for our algorithm for training DARC. This setting actually applies not only for DARC but also for various NAS methods such as DARTS (Liu et al., 2018b) or ENAS (Pham et al., 2018). Indeed, these methods aim to choose one out of several options in each layer. While these methods differ in how this ensemble is learned, our generalization bound is independent of the learning technique.\nRecall that DARC learns a convex combination of functions from J classes H1, . . . ,Hj . Here, we analyze generalizability of this process via Rademacher complexity Bartlett & Mendelson (2003):\nDefinition 1 (Rademacher Complexity). Let H be a class of functions mapping X → R and let n ∈ N. Denote by Xn1 = (X1, ..., Xn) n IID samples from X . Let σ a uniform random vector in {−1, 1}n. The Rademacher complexity ofH is R(H) = Eσ,Xn1 [ suph∈H 1 n ∑ i∈[n] σih(Xn,i) ] .\nIt is well known that the Rademacher complexity of a classH is equal to that of the convex hull ofH. Based on this fact, for h = (h1, . . . 
, h_J) ∈ ∏_{j=1}^J H_j and α · h = Σ_{j=1}^J α_j h_j, we have a generalization bound on the difference between the true risk R and the empirical risk R̂:\nTheorem 1. Suppose we jointly estimate α, h_1, ..., h_J; i.e., $(\hat{\alpha}, \hat{h}_1, \dots, \hat{h}_J) := \arg\min_{h_j \in \mathcal{H}_j,\, \alpha \in \Delta^J : C \cdot \alpha \leq B} \sum_{i=1}^n L(\alpha \cdot h(X_i), Y_i)$. Let L(h(x), y) = 1{h(x) ≠ y} be the 0-1 loss. Then, with probability ≥ 1 − δ (over n training samples), $$R(\hat{\alpha} \cdot \hat{h}) - \hat{R}(\hat{\alpha} \cdot \hat{h}) \leq \mathcal{R}\Big(\bigcup_{j \in [J]} \mathcal{H}_j\Big) + \sqrt{\frac{\log 1/\delta}{n}}.$$\nSince Theorem 1 follows from standard Rademacher generalization bounds (e.g., (Bartlett & Mendelson, 2003, Theorem 5(b))), we omit its proof. According to Theorem 1, generalization error depends on a standard √((log 1/δ)/n) term and the Rademacher complexity of the union of classes H_1, ..., H_J. If H_1, ..., H_J are diverse, this union can be quite rich and so R(∪_{j∈[J]} H_j) might be large, leading to overfitting. However, consider an example where H_1 is the family of full convolutions, H_2 is the family of depthwise-separable convolutions, H_3 is the family of sparse convolutions, etc. Here, we actually have H_1 = ⋃_{j∈[J]} H_j; thus, the Rademacher complexity is simply that of the original model (i.e., with J = 1). Formally:\nCorollary 1. Suppose that every sub-model is contained in H_1; i.e., H_2, ..., H_J ⊆ H_1. Then, the Rademacher complexity of DARC is at most that of the original model: R(H) ≤ R(H_1).\nEven if H_1 ⊊ ⋃_{j∈[J]} H_j, in the setting of model compression the alternative families H_j, j > 1, are cheaper replacements for H_1, suggesting that R(H_1) ≈ R(⋃_{j∈[J]} H_j). This observation motivates our learning framework – it shows us that there is no need to split the training set, train the model parameters on one split and the control parameters on the other, as in many NAS papers (Cai et al., 2018; Liu et al., 2018b).\nWe note that this does not motivate a change in the learning framework of NAS. A key difference between NAS and DARC is that candidate models H_1, ..., H_J in NAS are intentionally diverse; their union is much richer than any individual class. This translates to a large R(⋃_{j∈[J]} H_j), motivating a need to avoid jointly optimizing α and model parameters on a single training set. When keeping a validation set aside for training α, given the limited number of update steps typical in NAS papers, generalization error may be closer to the setting of fixed α, wherein the Rademacher complexity is bounded by Σ_j α_j R(H_j) (Cortes et al., 2014), potentially much smaller than in Theorem 1." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "We applied DARC to a number of deep networks from the GluonCV Model Zoo (Mod, 2018) for image classification on the CIFAR-10 and ImageNet (Russakovsky et al., 2015) datasets; this section presents quantitative and qualitative results. We report three performance metrics (model size, single-sample throughput, and batch throughput), and we specifically considered using DARC to minimize two of these (model size and batch throughput). Below, “DARC(S)” denotes DARC with a model (S)ize penalty, and “DARC(T)” denotes DARC with a batch (T)hroughput penalty.\nChoice of Compression Candidates For each convolutional layer, besides the original (full) convolution, we considered 3 compression candidates: a (fully-grouped) depthwise-separable convolution with 3 × 3 kernels (abbreviated henceforth as “3x3DS”), a full convolution with 1 × 1 kernels (“1x1FC”), and a 2-layer candidate consisting of a 3x3DS layer followed immediately by a 1x1FC layer (“3x3+1x1”), as sketched below.
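As an illustration of these candidates (ours, in PyTorch rather than the MXNet used in the paper), the blocks below show each alternative for a layer with C input and C output channels, together with a parameter-count cost of the kind DARC takes as input:

```python
import torch.nn as nn

def full_conv(c):   # original layer: full 3x3 convolution, C -> C channels
    return nn.Conv2d(c, c, kernel_size=3, padding=1, bias=False)

def ds3x3(c):       # "3x3DS": fully-grouped (depthwise) 3x3 convolution
    return nn.Conv2d(c, c, kernel_size=3, padding=1, groups=c, bias=False)

def fc1x1(c):       # "1x1FC": full convolution with 1x1 kernels
    return nn.Conv2d(c, c, kernel_size=1, bias=False)

def ds_plus_1x1(c): # "3x3+1x1": depthwise 3x3 followed by a 1x1 convolution
    return nn.Sequential(ds3x3(c), fc1x1(c))

def size_cost(m):   # a parameter-count cost C_{l,j}; latency would be timed
    return sum(p.numel() for p in m.parameters())

for name, make in [("full", full_conv), ("3x3DS", ds3x3),
                   ("1x1FC", fc1x1), ("3x3+1x1", ds_plus_1x1)]:
    print(name, size_cost(make(64)))  # 36864, 576, 4096, 4672 for C = 64
```

For C = 64, the full convolution holds about 36.9K parameters versus 4.7K for 3x3+1x1, consistent with the nearly 9× size reduction discussed in Section 5.3.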
In ResNet50, which was already implemented using bottleneck convolutions (He et al., 2016) consisting of a sequence of 1 × 1, 3 × 3, and 1 × 1 convolutions, the entire bottleneck sub-network was treated as a single component (i.e., all 3 convolutions were replaced with a single block from the above-mentioned alternatives). We intentionally limited the choice of alternatives to maintain a simple system that can enjoy high throughput without special implementation. Our precise choice of alternatives is motivated by them already being an established component for deep networks, proven to work in some settings even without architecture search.\nDARC Training Details To maximize fairness when comparing with models in the GluonCV Model Zoo (Mod, 2018), most aspects of training DARC were based on the training scripts provided publicly by the Model Zoo¹. Due to space constraints, these implementation details are described in Appendix A; a few specific differences from these scripts are described below.\nStudent-Teacher Initialization: As described in Section 3, we initialized compression candidates to mimic original layers using student-teacher training. While this training had to be performed separately for each compression candidate in each layer of the original model, since each compression candidate has few parameters, each candidate's training converged quite quickly. Thus, in CIFAR-10 experiments, we simply ran 1 epoch of the entire training dataset; in ImageNet experiments, we ran only 1000 batches. A relatively large step size of 0.1 was used, since this stage is only an initialization and fine-tuning can be performed during model selection.\nMain Training Phase: As noted in Algorithm 1, training occurred in blocks of epochs (20 epochs/block for CIFAR-10, 10 for ImageNet), with the compression penalty λ increased after each block, to obtain a spectrum of compressed models. For each dataset, penalty type, and model size, the initial value of λ was selected to roughly balance the orders of magnitude of the empirical loss and the regularization term at the beginning of training. After each block, we: 1) remove candidates j with α_j = 0, 2) save (for evaluation later) a copy of the DARC model, in which, in any layer with multiple non-zero α entries, all but the most expensive remaining candidate are removed, and 3) decrease the learning rate η and increase the compression penalty λ (each by a factor of 2).\nThis “iterative compression” was repeated until only one compression candidate per layer remained in the DARC model. Finally, each saved model was fine-tuned for 20 epochs using only prediction loss (i.e., with λ = 0). This procedure enabled us to obtain a sequence of compressed models at progressively increasing compression levels. Moreover, this “warm-starting” improved compression speed, since we only perform a total of 30 epochs per λ value, rather than the > 100 epochs needed for convergence at high levels of compression." }, { "heading": "5.1 CIFAR-10 RESULTS", "text": "By all metrics, DARC gave the best results when applied to very wide models such as the WideResNet series (specifically, the WideResNet16_10, WideResNet28_10, and WideResNet40_8 models (Mod, 2018)). Moreover, unlike results on other ResNets, results on WideResNets were relatively similar for both DARC(S) and DARC(T); both versions of DARC selected the 3x3+1x1 candidate for every layer.
The reason for this is that very wide convolutions in WideResNet models can be replaced by depthwise-separable convolutions with essentially no loss in accuracy, an improvement in throughput both for batches and for single samples (1.4-1.6× for each), and a smaller memory footprint (4-6×). In the case of WideResNet16_10, DARC produces a model with latency (single-sample throughput) comparable to one of the fastest CIFAR-10 models (ResNet20_v1; see Figure 1), while having accuracy within 0.8% of the best model (ResNeXt29_16x64d), 3.2 accuracy points above the performance of ResNet20_v1. For complete CIFAR-10 results see Appendix Tables 3-4." }, { "heading": "5.2 IMAGENET RESULTS", "text": "For ImageNet we compressed several ResNet models. To present the size compression results we provide Table 2, comparing accuracy change as a function of parameter reduction. We compared to previously published results compressing ResNet50 on ImageNet. To our knowledge these are the state-of-the-art results among those compressing ResNet50 on ImageNet. For compression as aggressive as 3×, we incur an accuracy drop of 0.86% while the baseline suffers a drop of 3.26%. Table 1 gives throughputs of our compressed ResNet34 and ResNet50 models and throughputs of state-of-the-art competing models, on identical hardware; our compressed models outperform these competing models, due to our initialization based on pre-trained ResNet34 and ResNet50 models.\n¹ train_imagenet.py, train_mixup_cifar10.py\nWhile further details, including other compressed versions, are available in the Appendix (Figures 2 and 3 and Table 5), Table 2 compares the prediction performance of DARC(S) with that of state-of-the-art network pruning techniques applied to ResNet50 on ImageNet, at various compression levels. In the light (1.5×) compression regime, the result of (Zhuang et al., 2018) outperforms ours. The work of (Zhuang et al., 2018) complements ours, in that their novelty is in warm-starting the compressed alternative not only to mimic the original, but also to be informative w.r.t. the label. Since this work does not have an architecture search component, we suggest that future work combine this clever warm-start with an architecture search component such as DARC. Once the compression becomes more aggressive, DARC outperforms the baseline, likely since in that regime (smaller network with more training epochs), a good architecture is more important than a good warm-start." }, { "heading": "5.3 DISCUSSION OF COMPRESSED ARCHITECTURES", "text": "In both ResNets and WideResNets, model size tends to be dominated by a small number of the largest layers in the model, and is relatively insensitive to the depth of the network. Thus, significant compression can be achieved by replacing these large layers with 3x3+1x1 candidates, which offer compression of nearly 9× (for 3 × 3 convolutions). Since the sizes (number of convolutional kernels) of ResNet layers increase from bottom to top (i.e., from input to output), DARC(S) tends to first replace the top-most layers (i.e., layers closest to the output) of the network with 3x3+1x1 candidates, proceeding towards the bottom of the network as the compression parameter λ increases.\nIn contrast, model latency is relatively uniformly distributed throughout the layers of the network – the time taken to compute each convolutional layer scales only weakly with the number, size, and grouping of filters.
Thus, in ResNets (but not in WideResNets) 3x3+1x1 candidates, which replace 1 layer with 2 smaller layers, tend to offer little or no benefit in throughput; of the compression candidates we considered, only the smallest (3x3DS) candidates offer significant acceleration (typically of about 2×) over full convolutions. As a result, the acceleration offered by DARC(T) scales primarily with the number of layers that can be replaced by 3x3DS layers. Moreover, the replaced layers tend to be scattered throughout the network (rather than clustered near the output of the network). As noted earlier, in WideResNets, each layer is so large that the 3x3+1x1 candidate is much faster than full convolution, and DARC(T) selects this candidate for all layers.\nOverall, we see that the optimal compression strategy depends on whether one is optimizing for size or for speed. This implies that some sort of intelligent replacement of components, as in DARC, is needed to obtain reliable performance improvements (as opposed to, say random baselines that have been shown to perform well in pruning). This parallels recent work (Cai et al., 2018) showing (for NAS) that optimizing for speed on different hardware (GPU, CPU, or mobile) leads to different\nmodels. While we focused on GPU throughput, we note that the speedup of depthwise-separable convolution over full convolution is typically larger on CPU and embedded devices than on GPU devices. Thus, DARC should also produce efficient architectures for these alternative hardware." }, { "heading": "6 RELATED WORK", "text": "Work on compressing deep networks has abounded in recent years, with diverse approaches including pruning (LeCun et al., 1990; Polyak & Wolf, 2015; Li et al., 2016a; He et al., 2017; Luo & Wu, 2017; Luo et al., 2017; Zhuang et al., 2018), low-rank factorization (Jaderberg et al., 2014; Zhang et al., 2016; Howard et al., 2017), fast approximate convolutions (Bagherinezhad et al., 2017; Wu et al., 2017), knowledge distillation (Hinton et al., 2015; Romero et al., 2014), and quantization (Gong et al., 2014; Han et al., 2015; Zhou et al., 2017; Lin et al., 2016); (Cheng et al., 2018) survey common approaches. In contrast, DARC searches a richer space of alternative models and has the ability to replace multiple components by a single one (e.g. replacing a bottleneck sub-network of 3 convolutions with a single convolution). Indeed, though beyond the scope of this paper, DARC can incorporate or complement many of these methods. Our initialization of compression candidates by mimicking the original layer is also reminiscent of knowledge distillation.\nAnother closely-related line of papers concerns Neural Architecture Search (NAS) (Pham et al., 2018; Kandasamy et al., 2018; Liu et al., 2018b; Gordon et al., 2018; Cai et al., 2018). The most relevant papers are (Liu et al., 2018b), (Gordon et al., 2018), and (Cai et al., 2018), who all use a sparse linear component weighting scheme similar to DARC. (Liu et al., 2018b) focused on pure architecture search, in which the goal is simply to find an architecture maximizing prediction performance, without consideration of inference-time efficiency. (Gordon et al., 2018) do not aim to replace layers with general alternatives but rather discover parameters in a data-driven way; their experiments are restricted to results in channel pruning. Recently, (Cai et al., 2018) performed NAS with a latency regularization term similar to ours.\nOur methods differ from these NAS papers in two main ways (summarized in Table 3). 
First, our architecture search is guided by an established base model that was already tested and proven useful. This distinction allows a simpler and more efficient learning scheme (motivated in Section 4) that avoids iterating between training sets, to optimize model and α parameters. Furthermore, starting with a pre-trained model allows us not only to reach an effective architecture, but to warm start the weight parameters. As evidence of the advantage of starting with a pretrained model, our compressed model ResNet50(T) on ImageNet has Top1 accuracy ≥ 3% more than models obtained by these NAS papers; thus, it seems that starting architecture search with a highly accurate base model can improve the efficiency/accuracy trade-off of the learned model. Another distinction from gradient-based NAS results is our use of sparsity-inducing regularization. Previous methods make the choice between the candidates via a softmax layer; this restricts the output to be a convex combination of inputs without optimizing sparsity. Since these methods also aim to find a sparse combination, they might benefit from a non-convex regularization term as in DARC." }, { "heading": "7 CONCLUSIONS AND FUTURE WORK", "text": "We have shown that even a simple DARC implementation, with only depthwise-separable approximations as compression candidates, can be used to compress large state-of-the-art deep networks, improving inference speed and memory footprint. Intelligently making only some layers of the net-\nwork depthwise-separable results in compressed models with much better predictive performance than simply making all convolutions depthwise-separable, as in (Howard et al., 2017).\nWhile depthwise-separable convolutions are easily implemented in existing deep learning packages and already offer substantial compression, future work may benefit from more sophisticated approximate convolutions, with efficient implementations. For example, shift operations (Wu et al., 2017) are promising, as they require no stored parameters and replace the slow multiplications in convolution with fast indexing. Another venue worth pursuing is to compress a model into a shallower version. Although there are a few ways this could be attempted, such as an Identity candidate or replacing entire blocks of layers, it is unclear which technique would work best. Finally, we hope to apply DARC to models other than CNNs (e.g., models with recurrent cells or transformers)." }, { "heading": "A DARC IMPLEMENTATION DETAILS", "text": "In this section, we provide further details about the implementation of DARC used in our experiments.\nEnvironment Details We implemented DARC in Apache MXNet 1.3.1 using Python 3.6 and CUDA 9.0. Experiments were run on AWS EC2 p3.8xlarge and p3.16xlarge machines, which respectively features 4 and 8 NVIDIA Tesla-V100 GPUs. CIFAR-10 models were each trained with 1 GPU. Smaller ImageNet models (ResNet18 and ResNet34) were trained using 4 GPUs, while the larger ResNet50 was trained using 8 GPUs.\nTraining Details Following the original script used to train models in the GluonCV Model Zoo, we utilized mixup training (Zhang et al., 2017), and optimized cross-entropy loss with Nesterov accelerated stochastic gradient descent (NAG) with (default) momentum parameter 0.9. As noted in the main paper, for student-teacher initialization, we used a relatively large learning rate η = 0.1. 
Thereafter, for model selection, we began with an initial learning rate of η = 0.01, which was then halved after each training block.\nTo minimize training time, training batch sizes were selected to be as large as possible without exceeding GPU memory during training. This resulted in batch sizes (per training GPU) of 256 for ResNets on CIFAR-10, 128 for WideResNets on CIFAR-10, 64 for ResNet18 and ResNet34 on ImageNet, and 32 for ResNet50 on ImageNet.\nFor each dataset, penalty type, and model size, the initial value of λ was selected to roughly balance the orders of magnitude of the empirical loss and the regularization term at the beginning of training. For CIFAR-10 experiments with size penalization, the initial value of the λ compression penalty was set to λ = 10^{-5} × L, where L is the number of layers to which DARC was applied (i.e., the number of full convolutions in the original model). For CIFAR-10 experiments with latency penalization, we used λ = 10^{4} × L. For ImageNet experiments, we used λ = 10^{-8} × L with size penalization and λ = 10^{4} × L for latency penalization.\nA.1 MEASURES OF MODEL PERFORMANCE\nComputational Performance As an estimate of model size, we report the size (on disk) of the parameter file created by MXNet when saving the model; this correlates well with both the number of parameters in the model and the footprint of the model in RAM or GPU memory. Since throughputs are inherently noisy, we report average inference times over 1000 batches. Though multiple GPUs were used for training DARC, all inference times were computed using a single Tesla V100 GPU. We used batch size 1 to estimate single-sample throughput and batch size 256 to estimate batch throughput. The cost of each compression candidate (i.e., number of parameters for DARC(S) or latency for DARC(T)) was calculated or estimated based on the student model trained during initialization.\nPrediction Performance On CIFAR-10, we used standard (“Top1”) prediction accuracy. On ImageNet, we additionally used “Top5” accuracy, the fraction of test images for which the correct label is among the five labels considered most probable by the model. We note that these are the standard performance measures used for these datasets (Krizhevsky & Hinton, 2010; Krizhevsky et al., 2012)." }, { "heading": "B SUPPLEMENTARY RESULTS", "text": "This section provides detailed numerical results of our experiments:" } ]
2019
null
SP:1a02536d11f939c007732cb7b8619107170e8c64
[ "The paper proposed an adaptive learned bloom filter. Rather than setting a threshold of prediction score, the paper partitions the score into several intervals; for query insider each interval, the paper either uses a group of independent hash functions to hash the query in one unified bloom filter or introduce an independent bloom filter. The paper proposes efficient ways to tune the hyper-parameters, and provides the analysis of the error. Experiments on two applications show the effectiveness of the proposed methods. ", "This paper extends the Bloom filter learning by using the complete spectrum of the scores regions. It uses multiple thresholds and then varies the number of hash functions among different scores regions to obtain better trade-off. Detailed theoretical analysis provides guaranteed superiority over learned Bloom filter under some conditions. The experiments also show the two proposed methods outperform learned Bloom filter in FPR and memory usage." ]
Recent work suggests improving the performance of the Bloom filter by incorporating a machine learning model as a binary classifier. However, such a learned Bloom filter does not take full advantage of the predicted probability scores. We propose new algorithms that generalize the learned Bloom filter by using the complete spectrum of the score regions. We prove that our algorithms have a lower False Positive Rate (FPR) and memory usage compared with the existing approaches to the learned Bloom filter. We also demonstrate the improved performance of our algorithms on real-world datasets.
[]
[ { "authors": [ "Burton H Bloom" ], "title": "Space/time trade-offs in hash coding with allowable errors", "venue": "Communications of the ACM,", "year": 1970 }, { "authors": [ "Andrei Broder", "Michael Mitzenmacher" ], "title": "Network applications of bloom filters: A survey", "venue": "In Internet Mathematics. Citeseer,", "year": 2002 }, { "authors": [ "Jehoshua Bruck", "Jie Gao", "Anxiao Jiang" ], "title": "Weighted bloom filter", "venue": "IEEE International Symposium on Information Theory,", "year": 2006 }, { "authors": [ "Larry Carter", "Robert Floyd", "John Gill", "George Markowsky", "Mark Wegman" ], "title": "Exact and approximate membership testers", "venue": "In Proceedings of the tenth annual ACM symposium on Theory of computing,", "year": 1978 }, { "authors": [ "Peter C. Dillinger", "Panagiotis Manolios" ], "title": "Bloom filters in probabilistic verification", "venue": "Formal Methods in Computer-Aided Design,", "year": 2004 }, { "authors": [ "Laura Feinstein", "Dan Schnackenberg", "Ravindra Balupari", "Darrell Kindred" ], "title": "Statistical approaches to ddos attack detection and response", "venue": "In Proceedings DARPA information survivability conference and exposition,", "year": 2003 }, { "authors": [ "Chen-Yu Hsu", "Piotr Indyk", "Dina Katabi", "Ali Vakilian" ], "title": "Learning-based frequency estimation algorithms", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jon Kleinberg" ], "title": "Bursty and hierarchical structure in streams", "venue": "Data Mining and Knowledge Discovery,", "year": 2003 }, { "authors": [ "Tim Kraska", "Alex Beutel", "Ed H Chi", "Jeffrey Dean", "Neoklis Polyzotis" ], "title": "The case for learned index structures", "venue": "In Proceedings of the 2018 International Conference on Management of Data,", "year": 2018 }, { "authors": [ "Michael Mitzenmacher" ], "title": "Compressed bloom filters", "venue": "IEEE/ACM Transactions on Networking (TON),", "year": 2002 }, { "authors": [ "Michael Mitzenmacher" ], "title": "A model for learned bloom filters and optimizing by sandwiching", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jack W Rae", "Sergey Bartunov", "Timothy P Lillicrap" ], "title": "Meta-learning neural bloom filters", "venue": "arXiv preprint arXiv:1906.04304,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Bloom filter (BF) is a widely used data structure for low-memory and high-speed approximate membership testing (Bloom, 1970). Bloom filters compress a given set S into bit arrays, where we can approximately test whether a given element (or query) x belongs to a set S, i.e., x ∈ S or otherwise. Several applications, in particular caching in memory constrained systems, have benefited tremendously from BF (Broder et al., 2002).\nBloom filter ensures a zero false negative rate (FNR), which is a critical requirement for many applications. However, BF does not have a non-zero false positive rate (FPR) (Dillinger and Manolios, 2004) due to hashing collisions, which measures the performance of BF. There is a known theoretical limit to this reduction. To achieve a FPR of , BF costs at least n log2(1/ ) log2 e bits (n = |S|), which is log2 e ≈ 44% off from the theoretical lower bound (Carter et al., 1978). Mitzenmacher (2002) proposed Compressed Bloom filter to address the suboptimal space usage of BF, where the space usage can reach the theoretical lower bound in the optimal case.\nTo achieve a more significant reduction of FPR, researchers have generalized BF and incorporated information beyond the query itself to break through the theoretical lower bound of space usage. Bruck et al. (2006) has made use of the query frequency and varied the number of hash functions based on the query frequency to reduce the overall FPR. Recent work (Kraska et al., 2018; Mitzenmacher, 2018) has proposed to improve the performance of standard Bloom filter by incorporating a machine learning model. This approach paves a new hope of reducing false positive rates beyond the theoretical limit, by using context-specific information in the form of a machine learning model (Hsu et al., 2019). Rae et al. (2019) further proposed Neural Bloom Filter that learns to write to memory using a distributed write scheme and achieves compression gains over the classical Bloom filter.\nThe key idea behind Kraska et al. (2018) is to use the machine learning model as a pre-filter to give each query x a score s(x). s(x) is usually positively associated with the odds that x ∈ S. The assumption is that in many practical settings, the membership of a query in the set S can be figured out from observable features of x and such information is captured by the classifier assigned score s(x). The proposal of Kraska et al. uses this score and treats query x with score s(x) higher than a pre-determined threshold τ (high confidence predictions) as a direct indicator of the correct membership. Queries with scores less than τ are passed to the back-up Bloom filter.\nCompared to the standard Bloom filter, learned Bloom filter (LBF) uses a machine learning model to answer keys with high score s(x). Thus, the classifier reduces the number of the keys hashed into the Bloom filter. When the machine learning model has a reliable prediction performance, learned Bloom filter significantly reduce the FPR and save memory usage (Kraska et al., 2018). Mitzenmacher (2018) further provided a formal mathematical model for estimating the performance of LBF. 
In the same paper, the author proposed a generalization named sandwiched learned Bloom filter (sandwiched\nLBF), where an initial filter is added before the learned oracle to improve the FPR if the parameters are chosen optimally.\nWastage of Information: For existing learned Bloom filters to have a lower FPR, the classifier score greater than the threshold τ should have a small probability of wrong answer. Also, a significant fraction of the keys should fall in this high threshold regime to ensure that the backup filter is small. However, when the score s(x) is less than τ , the information in the score s(x) is never used. Thus, there is a clear waste of information. For instance, consider two elements x1 and x2 with τ > s(x1) s(x2). In the existing solutions, x1 and x2 will be treated in the exact same way, even though there is enough prior to believing that x1 is more likely positive compared to x2.\nStrong dependency on Generalization: It is natural to assume that prediction with high confidence implies a low FPR when the data distribution does not change. However, this assumption is too strong for many practical settings. First and foremost, the data distribution is likely to change in an online streaming environment where Bloom filters are deployed. Data streams are known to have bursty nature with drift in distribution (Kleinberg, 2003). As a result, the confidence of the classifier, and hence the threshold, is not completely reliable. Secondly, the susceptibility of machine learning oracles to adversarial examples brings new vulnerability in the system. Examples can be easily created where the classifier with any given confidence level τ , is incorrectly classified. Bloom filters are commonly used in networks where such increased adversarial false positive rate can hurt the performance. An increased latency due to collisions can open new possibilities of Denial-of-Service attacks (DoS) (Feinstein et al., 2003).\nMotivation: For a binary classifier, the density of score distribution, f(s(x)) shows a different trend for elements in the set and outside the set S. We observe that for keys, f(s(x)|x ∈ S) shows ascending trend as s(x) increases while f(s(x)|x /∈ S) has an opposite trend. To reduce the overall FPR, we need lower FPRs for groups with a high f(s(x)|x /∈ S). Hence, if we are tuning the number of hash functions differently, more hash functions are required for the corresponding groups. While for groups with a few non-keys, we allow higher FPRs. This variability is the core idea to obtaining a sweeter trade-off.\nOur Contributions: Instead of only relying on the classifier whether score s(x) is above a single specific threshold, we propose two algorithms, Ada-BF and disjoint Ada-BF, that rely on the complete spectrum of scores regions by adaptively tuning Bloom filter parameters in different score regions. 1) Ada-BF tunes the number of hash functions differently in different regions to adjust the FPR adaptively; disjoint Ada-BF allocates variable memory Bloom filters to each region. 2) Our theoretical analysis reveals a new set of trade-offs that brings lower FPR with our proposed scheme compared to existing alternatives. 3) We evaluate the performance of our algorithms on two datasets: malicious URLs and malware MD5 signatures, where our methods reduce the FPR by over 80% and save 50% of the memory usage over existing learned Bloom filters.\nNotations: Our paper includes some notations that need to be defined here. Let [g] denote the index set {1, 2, · · · , g}. 
We define a query x as a key if x ∈ S, or a non-key if x ∉ S. Let n denote the number of keys (n = |S|), and m denote the number of non-keys. We denote by K the number of hash functions used in the Bloom filter." }, { "heading": "2 REVIEW: BLOOM FILTER AND LEARNED BLOOM FILTER", "text": "Bloom Filter: A standard Bloom filter for compressing a set S consists of an R-bit array and K independent random hash functions h₁, h₂, ···, h_K, taking integer values between 0 and R − 1, i.e., h_i : S → {0, 1, ···, R − 1}. The bit array is initialized with all 0s. For every item x ∈ S, the bits at positions h_i(x), for all i ∈ {1, 2, ···, K}, are set to 1.\nTo check the membership of an item x′ in the set S, we return true if all the bits h_i(x′), for all i ∈ {1, 2, ···, K}, have been set to 1. It is clear that the Bloom filter has zero FNR (false negative rate). However, due to lossy hash functions, x′ may be wrongly identified as positive when x′ ∉ S if all the h_i(x′) are set to 1 due to random collisions. It can be shown that if the hash functions are independent, the expected FPR can be written as follows:\n$$\mathbb{E}(\mathrm{FPR}) = \Big(1 - \Big(1 - \frac{1}{R}\Big)^{Kn}\Big)^{K}.$$\nLearned Bloom filter: The learned Bloom filter adds a binary classification model to reduce the effective number of keys going to the Bloom filter. The classifier is pre-trained on some available training data to classify whether any given query x belongs to S or not, based on its observable features. LBF sets a threshold, τ, where x is identified as a key if s(x) ≥ τ. Otherwise, x will be inserted into a Bloom filter to identify its membership in a further step (Figure 1). Like the standard Bloom filter, LBF also has zero FNR. The false positives can be caused either by false positives of the classification model (s(x|x ∉ S) ≥ τ) or by those of the Bloom filter.\nIt is clear that when the region s(x) ≥ τ contains a large number of keys, the number of keys inserted into the Bloom filter decreases, which leads to a favorable FPR. However, since we identify the region s(x) ≥ τ as positives, higher values of τ are better. At the same time, a large τ decreases the number of keys in the region s(x) ≥ τ, increasing the load of the Bloom filter. Thus, there is a clear trade-off." }, { "heading": "3 A STRICT GENERALIZATION: ADAPTIVE LEARNED BLOOM FILTER (ADA-BF)", "text": "With the formulation of LBF in the previous section, LBF actually divides the queries x into two groups. When s(x) ≥ τ, x will be identified as a key directly without testing with the Bloom filter. In other words, it uses zero hash functions to identify its membership. Otherwise, we will test its membership using K hash functions. Viewed this way, LBF switches from K hash functions to no hash function at all, based on whether s(x) ≥ τ or not. Continuing with this mindset, we propose the adaptive learned Bloom filter, where x is divided into g groups based on s(x), and for group j, we use K_j hash functions to test its membership. The structure of Ada-BF is represented in Figure 1(b).\nMore specifically, we divide the spectrum into g regions, where x ∈ Group j if s(x) ∈ [τ_{j−1}, τ_j), j = 1, 2, ···, g. Without loss of generality, here, we assume 0 = τ₀ < τ₁ < ··· < τ_{g−1} < τ_g = 1. Keys from group j are inserted into the Bloom filter using K_j independent hash functions (a small sketch of this structure follows below).
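Before continuing, here is a minimal pure-Python sketch of this structure (our own illustration; the class and helper names are ours, and the salted SHA-256 hashes merely stand in for K independent hash functions):

```python
import hashlib

class AdaBF:
    """Toy Ada-BF: one shared R-bit array; the score of x selects a group j,
    and group j uses Ks[j] hash functions (0 means 'declare key directly')."""
    def __init__(self, R, taus, Ks):
        self.R, self.taus, self.Ks = R, taus, Ks  # taus = [tau_1..tau_{g-1}]
        self.bits = bytearray(R)

    def _k(self, score):
        return self.Ks[sum(score >= t for t in self.taus)]

    def _positions(self, x, k):
        for i in range(k):  # k independent, salted hash functions
            digest = hashlib.sha256(f"{i}|{x}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.R

    def insert(self, x, score):
        for pos in self._positions(x, self._k(score)):
            self.bits[pos] = 1

    def query(self, x, score):
        # all() over zero positions is True: top-score group is answered as key
        return all(self.bits[pos] for pos in self._positions(x, self._k(score)))

bf = AdaBF(R=1 << 14, taus=[0.4, 0.8], Ks=[2, 1, 0])  # K_j decreasing by 1
bf.insert("key-123", score=0.55)
assert bf.query("key-123", score=0.55)  # inserted keys are never missed
```

Note that low-score groups receive more hash functions and the highest-score group uses none, mirroring the tuning strategy described next.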
Thus, we use a different number of universal hash functions for keys from different groups.\nFor a group j, the expected FPR can be expressed as\n$$\mathbb{E}(\mathrm{FPR}_j) = \Big(1 - \Big(1 - \frac{1}{R}\Big)^{\sum_{t=1}^{g} n_t K_t}\Big)^{K_j} = \alpha^{K_j} \qquad (1)$$\nwhere n_t = Σ_{i=1}^{n} I(τ_{t−1} ≤ s(x_i|x_i ∈ S) < τ_t) is the number of keys falling in group t, and K_j is the number of hash functions used in group j. By varying K_j, E(FPR_j) can be controlled differently for each group.\nA variable number of hash functions gives us enough flexibility to tune the FPR of each region. To avoid the bit array being overloaded, we only increase K_j for groups with a large number of keys n_j, and decrease K_j for groups with small n_j. It should be noted that f(s(x)|x ∈ S) shows an opposite trend compared to f(s(x)|x ∉ S) as s(x) increases (Figure 2). Thus, there is a need for variable tuning, and a spectrum of regions gives us the room to exploit this variability efficiently. Clearly, Ada-BF generalizes the LBF. When Ada-BF only divides the queries into two groups, by setting K₁ = K, K₂ = 0 and τ₁ = τ, Ada-BF reduces to the LBF." }, { "heading": "3.1 SIMPLIFYING THE HYPER-PARAMETERS", "text": "To implement Ada-BF, there are some hyper-parameters to be determined, including the number of hash functions for each group, K_j, and the score thresholds to divide groups, τ_j (τ₀ = 0, τ_g = 1). Altogether, we need to tune 2g − 1 hyper-parameters. Using these hyper-parameters, for Ada-BF, the expected overall FPR can be expressed as\n$$\mathbb{E}(\mathrm{FPR}) = \sum_{j=1}^{g} p_j\, \mathbb{E}(\mathrm{FPR}_j) = \sum_{j=1}^{g} p_j \alpha^{K_j} \qquad (2)$$\nwhere p_j = Pr(τ_{j−1} ≤ s(x_i|x_i ∉ S) < τ_j). Empirically, p_j can be estimated by p̂_j = (1/m) Σ_{i=1}^{m} I(τ_{j−1} ≤ s(x_i|x_i ∉ S) < τ_j) = m_j/m (m is the number of non-keys in the training data and m_j is the number of non-keys belonging to group j). It is almost impossible to find the optimal hyper-parameters that minimize E(FPR) in reasonable time. However, since the estimated number of false positive items satisfies Σ_{j=1}^{g} m_j α^{K_j} = O(max_j(m_j α^{K_j})), we prefer m_j α^{K_j} to be similar across groups when E(FPR) is minimized. While α^{K_j} decreases exponentially fast with larger K_j, to keep m_j α^{K_j} stable across different groups, we require m_j to grow exponentially fast with K_j. Moreover, since f(s(x)|x ∉ S) increases as s(x) becomes smaller in most cases, K_j should also be larger for smaller s(x). Hence, to balance the number of false positive items, as j diminishes, we should increase K_j linearly and let m_j grow exponentially fast.\nWith this idea, we provide a strategy to simplify the tuning procedure. We fix p_j/p_{j+1} = c and K_j − K_{j+1} = 1 for j = 1, 2, ···, g − 1. Since the true density of s(x|x ∉ S) is unknown, to implement the strategy we estimate p_j/p_{j+1} by p̂_j/p̂_{j+1} = m_j/m_{j+1} and fix m_j/m_{j+1} = c. This strategy ensures that p̂_j grows exponentially fast with K_j. Now, we only have three hyper-parameters: c, K_min and K_max (K_max = K₁). By default, we may also set K_min = K_g = 0, equivalent to identifying all the items in group g as keys.\nLemma 1: Assume 1) the scores of non-keys, s(x)|x ∉ S, are independently following a distribution f; 2) the scores of non-keys in the training set are independently sampled from the distribution f. Then, the overall estimation error of p̂_j, Σ_j |p̂_j − p_j|, converges to 0 in probability as m becomes larger. Moreover, if $m \geq \frac{2(K-1)^2}{\epsilon^2}\Big[\sqrt{\tfrac{1}{\pi}} + \sqrt{\tfrac{1-2/\pi}{\delta}}\Big]^2$, then with probability at least 1 − δ, we have Σ_j |p̂_j − p_j| ≤ ε.\nIn a real application, we cannot access the exact value of p_j, which may lead to estimation error in the real E(FPR).
However, Lemma 1 shows that as soon as we can collect enough non-keys to estimate the pj , the estimation error is almost negligible. Especially for the large scale membership testing task, collecting enough non-keys is easy to perform." }, { "heading": "3.2 ANALYSIS OF ADAPTIVE LEARNED BLOOM FILTER", "text": "Compared with the LBF, Ada-BF makes full use the of the density distribution s(x) and optimizes the FPR in different regions. Next, we will show Ada-BF can reduce the optimal FPR of the LBF without increasing the memory usage.\nWhen pj/pj+1 = cj ≥ c > 1 and Kj −Kj+1 = 1, the expected FPR follows,\nE (FPR) = g∑ j=1 pjα Kj =\n∑g j=1 c\ng−jαKj∑g j=1 c g−j ≤ (1− c)(1− (cα)g) ( 1α − c)(αg − (cα)g) αKmax , cα 6= 1\n1− c 1− cg\n· g, cα = 1 (3)\nwhere Kmax = K1. To simplify the analysis, we assume cα > 1 in the following theorem. Given the number of groups g is fixed, this assumption is without loss of generality satisfied by raising c since α will increase as c becomes larger. For comparisons, we also need τ of the LBF to be equal to τg−1 of the Ada-BF. In this case, queries with scores higher than τ are identified as keys directly by the machine learning model. So, to compare the overall FPR, we only need to compare the FPR of queries with scores lower than τ .\nTheorem 1: For Ada-BF, given pjpj+1 ≥ c > 1 for all j ∈ [g − 1], if there exists λ > 0 such that cα ≥ 1+λ holds, and nj+1−nj > 0 for all j ∈ [g−1] (nj is the number of keys in group j). When g is large enough and g ≤ b2Kc, then Ada-BF has smaller FPR than the LBF. Here K is the number of hash functions of the LBF.\nTheorem 1 requires the number of keys nj keeps increasing while pj decreases exponentially fast with j. As shown in figure 2, on real dataset, we observe from the histogram that as score increases, f(s(x)|x /∈ S) decreases very fast while f(s(x)|x ∈ S) increases. So, the assumptions of Theorem 1 are more or less satisfied.\nMoreover, when the number of buckets is large enough, the optimal K of the LBF is large as well. Given the assumptions hold, theorem 1 implies that we can choose a larger g to divide the spectrum into more groups and get better FPR. The LBF is sub-optimal as it only has two regions. Our experiments clearly show this trend. For figure 3(a), Ada-BF achieves 25% of the FPR of the LBF when the bitmap size = 200Kb, while when the budget of buckets = 500Kb, Ada-BF achieves 15% of the FPR of the LBF. For figure 3(b), Ada-BF only reduces the FPR of the LBF by 50% when the budget of buckets = 100Kb, while when the budget of buckets = 300Kb, Ada-BF reduces 70% of the FPR of the LBF. Therefore, both the analytical and experimental results indicate superior performance of Ada-BF by dividing the spectrum into more small groups. On the contrary, when g is small, Ada-BF is more similar to the LBF, and their performances are less differentiable." }, { "heading": "4 DISJOINT ADAPTIVE LEARNED BLOOM FILTER (DISJOINT ADA-BF)", "text": "Ada-BF divides keys into g groups based on their scores and hashes the keys into the same Bloom filter using different numbers of hash functions. With the similar idea, we proposed an alternative approach, disjoint Ada-BF, which also divides the keys into g groups, but hashes keys from different groups into independent Bloom filters. The structure of disjoint Ada-BF is represented in Figure 1(c). Assume we have total budget of R bits for the Bloom filters and the keys are divided into g groups using the same idea of that in Ada-BF. 
Consequently, the keys from group j are inserted into the j-th Bloom filter, whose length is R_j (R = Σ_{j=1}^{g} R_j). Then, during the lookup stage, we just need to identify a query's group and check its membership in the corresponding Bloom filter." }, { "heading": "4.1 SIMPLIFYING THE HYPER-PARAMETERS", "text": "Analogous to Ada-BF, disjoint Ada-BF also has many hyper-parameters, including the score thresholds for group division and the lengths of the individual Bloom filters. To determine the thresholds τ_j, we use the tuning strategy discussed in the previous section, tuning the number of groups g and fixing m_j/m_{j+1} = c. To find the R_j that optimize the overall FPR, again, we refer to the idea in the previous section that the expected number of false positives should be similar across groups. For a Bloom filter with R_j buckets, the optimal number of hash functions K_j can be approximated as K_j = (R_j/n_j) log(2), where n_j is the number of keys in group j. The corresponding optimal expected FPR is E(FPR_j) = µ^{R_j/n_j} (µ ≈ 0.618). Therefore, to enforce the expected number of false positive items being similar across groups, R_j needs to satisfy\n$$m_j \cdot \mu^{R_j/n_j} = m_1 \cdot \mu^{R_1/n_1} \iff \frac{R_j}{n_j} - \frac{R_1}{n_1} = (j-1)\frac{\log(c)}{\log(\mu)}$$\nSince n_j is known given the thresholds τ_j, and the total budget of buckets R is known, R_j can be solved accordingly. Moreover, when the machine learning model is accurate, to save memory usage, we may also set R_g = 0, which means the items in group g will be identified as keys directly." }, { "heading": "4.2 ANALYSIS OF DISJOINT ADAPTIVE LEARNED BLOOM FILTER", "text": "The disjoint Ada-BF uses a group of shorter Bloom filters to store the hash outputs of the keys. Though the approach to controlling the FPR of each group is different from that of Ada-BF, where Ada-BF varies K and disjoint Ada-BF changes the bucket allocation, both methods share the same core idea of lowering the overall FPR by reducing the FPR of the groups dominated by non-keys. Disjoint Ada-BF allocates more buckets to these groups to achieve a smaller FPR. In the following theorem, we show that to achieve the same optimal expected FPR as the LBF, the disjoint Ada-BF consumes fewer buckets. Again, for comparison we need τ of the LBF to be equal to τ_{g−1} of the disjoint Ada-BF.\nTheorem 2: If p_j/p_{j+1} = c > 1 and n_{j+1} − n_j > 0 for all j ∈ [g − 1] (n_j is the number of keys in group j), then to achieve the optimal FPR of the LBF, the disjoint Ada-BF consumes fewer buckets compared with the LBF when g is large." }, { "heading": "5 EXPERIMENT", "text": "Baselines: We test the performance of five different Bloom filter variants: 1) standard Bloom filter, 2) learned Bloom filter, 3) sandwiched learned Bloom filter, 4) adaptive learned Bloom filter, and 5) disjoint adaptive learned Bloom filter. We use two datasets which have different associated tasks, namely: 1) Malicious URLs Detection and 2) Virus Scan. Since all the variants of Bloom filter structures ensure zero FNR, the performance is measured by their FPRs and corresponding memory usage." }, { "heading": "5.1 TASK 1: MALICIOUS URLS DETECTION", "text": "We explore using Bloom filters to identify malicious URLs. We used the URLs dataset downloaded from Kaggle, including 485,730 unique URLs. 16.47% of the URLs are malicious, and the others are benign. We randomly sampled 30% of the URLs (145,719 URLs) to train the malicious URL classification model. 17 lexical features are extracted from URLs as the classification features, such as “host name length”, “path length”, “length of top level domain”, etc.
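As an illustration (ours; the paper does not spell out the full 17-feature set here, so the helpers below are hypothetical examples), a few such lexical features can be computed with Python's standard library:

```python
from urllib.parse import urlparse

def lexical_features(url):
    """A handful of illustrative lexical features; the paper's set has 17."""
    parsed = urlparse(url if "://" in url else "http://" + url)
    host, path = parsed.netloc, parsed.path
    tld = host.rsplit(".", 1)[-1] if "." in host else ""
    return {
        "host_len": len(host),   # "host name length"
        "path_len": len(path),   # "path length"
        "tld_len": len(tld),     # "length of top level domain"
        "url_len": len(url),
        "n_digits": sum(ch.isdigit() for ch in url),
        "n_dots": url.count("."),
        "n_hyphens": url.count("-"),
    }

print(lexical_features("secure-login.example.com/update/account.php?id=42"))
```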
We used “sklearn.ensemble.RandomForestClassifier”¹ to train a random forest model. After saving the model with “pickle”, the model file costs 146Kb in total. sklearn's “predict_proba” was used to give scores for queries.\nWe tested the optimal FPR for the four learned Bloom filter methods under total memory budgets of 200Kb to 500Kb (kilobits). Since the standard BF does not need a machine learning model, to make a fair comparison, the bitmap size of BF should also include the machine learning model size (146Kb in this experiment). Thus, the total bitmap size of BF is 346Kb to 646Kb. To implement the LBF, we tuned τ between 0 and 1, and picked the one giving the minimal FPR. The number of hash functions was determined by K = Round((R/n₀) log 2), where n₀ is the number of keys hashed into the Bloom filter conditional on τ. To implement the sandwiched LBF, we searched for the optimal τ and calculated the corresponding initial and backup filter sizes by the formula in Mitzenmacher (2018). When the optimal backup filter size is larger than the total bit budget, the sandwiched LBF does not need an initial filter and reduces to a standard LBF. For the Ada-BF, we used the tuning strategy described in the previous section. K_min was set to 0 by default. Thus, we only need to tune the combination of (K_max, c) that gives the optimal FPR. Similarly, for disjoint Ada-BF, we fixed R_g = 0 and searched for the optimal (g, c).\nResult: Our trained machine learning model has a classification accuracy of 0.93. Considering that the non-informative frequent-class classifier (just classify as benign URL) gives an accuracy of 0.84, our trained learner is not a strong classifier. However, the distribution of scores is desirable (Figure 2), where, as s(x) increases, the empirical density of s(x) decreases for non-keys and increases for keys. In our experiment, when the sandwiched LBF is optimized, the backup filter size always exceeds the total bitmap size. Thus, it reduces to the LBF and has the same FPR (as suggested by Figure 4(a)).\nOur experiment shows that compared to the LBF and sandwiched LBF, both Ada-BF and disjoint Ada-BF achieve much lower FPRs. When the filter size = 500Kb, Ada-BF reduces the FPR by 81% compared to LBF or sandwiched LBF (disjoint Ada-BF reduces the FPR by 84%). Moreover, to achieve an FPR ≈ 0.9%, Ada-BF and disjoint Ada-BF only require 200Kb, while both LBF and the sandwiched LBF need more than 350Kb. And to get an FPR ≈ 0.35%, Ada-BF and disjoint Ada-BF reduce the memory usage from over 500Kb of LBF to 300Kb, which shows that our proposed algorithms save over 40% of the memory usage compared with LBF and sandwiched LBF.\n¹The Random Forest classifier consists of 10 decision trees, and each tree has at most 20 leaf nodes." }, { "heading": "5.2 TASK 2: VIRUS SCAN", "text": "The Bloom filter is widely used to match a file's signature against a virus signature database. Our dataset includes the information of 41323 benign files and 96724 viral files. The virus files are collected from the VirusShare database (Vir). The dataset provides the MD5 signature of the files, legitimate status and 53 other variables characterizing the file, like “Size of Code”, “Major Link Version” and “Major Image Version”. We trained a machine learning model with these variables to differentiate the benign files from the viral documents. We randomly selected 20% of the samples as the training set to build a binary classification model using a Random Forest model².
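A minimal sklearn sketch of this training step might look as follows (our illustration; the synthetic data stands in for the dataset's 53 file variables, while the tree count and leaf limit follow footnote 2):

```python
import pickle
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 53 file-level variables and viral/benign labels.
X, y = make_classification(n_samples=5000, n_features=53, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=15, max_leaf_nodes=5)  # footnote 2
clf.fit(X_tr, y_tr)

with open("rf_model.pkl", "wb") as f:  # saved with pickle, as in Task 1
    pickle.dump(clf, f)

scores = clf.predict_proba(X_te)[:, 1]  # s(x): predicted class-1 probability
print("test accuracy:", clf.score(X_te, y_te))
```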
We used “sklearn.ensemble.RandomForestClassifier” to tune the model, and the Random Forest classifier costs about 136Kb. The classification model achieves 0.98 prediction accuracy on the testing set. The predicted class probability (from the “predict_proba” function in the “sklearn” library) is used as the score s(x). Other implementation details are similar to those in Task 1.\nResult: As the machine learning model achieves high prediction accuracy, Figure 4 suggests that all the learned Bloom filters show a huge advantage over the standard BF, with the FPR reduced by over 98%. Similar to the previous experiment's results, we observe consistently lower FPRs for our algorithms although the score distributions are not smooth or continuous (Figure 3). Again, our methods show very similar performance. Compared with LBF, our methods reduce the FPRs by over 80%. To achieve a 0.2% FPR, the LBF and sandwiched LBF cost about 300Kb, while Ada-BF only needs 150Kb, which is equivalent to a 50% memory usage reduction compared to the previous methods.\n²The Random Forest classifier consists of 15 decision trees, and each tree has at most 5 leaf nodes." }, { "heading": "5.3 SENSITIVITY TO HYPER-PARAMETER TUNING", "text": "Compared with the LBF and sandwiched LBF, where we only need to search the space of τ to optimize the FPR, our algorithms require tuning a series of score thresholds. In the previous sections, we have proposed a simple but useful tuning strategy where the score thresholds can be determined by only two hyper-parameters, (K, c). Though our hyper-parameter tuning technique may lead to a sub-optimal choice, our experimental results have shown we can still attain significantly lower FPR compared with the previous LBF. Moreover, if the number of groups K is misspecified from the optimal choice (of K), we can still achieve a very similar FPR compared with searching both K and c. Figure 5 shows that for both Ada-BF and disjoint Ada-BF, tuning c while fixing K already achieves FPRs similar to the optimal case of tuning both (K, c), which suggests our algorithm does not require very accurate hyper-parameter tuning to achieve a significant reduction of the FPR." }, { "heading": "5.4 DISCUSSION: SANDWICHED LEARNED BLOOM FILTER VERSUS LEARNED BLOOM FILTER", "text": "Sandwiched LBF is a generalization of LBF and performs no worse than LBF. Although Mitzenmacher (2018) has shown how to allocate bits between the initial filter and the backup filter to optimize the expected FPR, that result is based on a fixed FNR and FPR, while for many classifiers, FNR and FPR are functions of the prediction threshold τ. Figure 4(a) shows that the sandwiched LBF always has the same FPR as LBF even though we increase the bitmap size from 200Kb to 500Kb. This is because the sandwiched LBF is optimized when τ corresponds to a small FPR and a large FNR, where the optimal backup filter size even exceeds the total bitmap size. Hence, we should not allocate any bits to the initial filter, and the sandwiched LBF reduces to LBF. On the other hand, our second experiment suggests that as the bitmap size becomes larger, sparing more bits for the initial filter is clever, and the sandwiched LBF shows its advantage over the LBF (Figure 6(b))." }, { "heading": "6 CONCLUSION", "text": "We have presented new approaches to implement learned Bloom filters.
{ "heading": "6 CONCLUSION", "text": "We have presented new approaches to implement learned Bloom filters. We demonstrate analytically and empirically that our approaches significantly reduce the FPR and save memory compared with the previously proposed LBF and sandwiched LBF, even when the learner’s discrimination power is weak. We envision that our work will help and motivate integrating machine learning models into probabilistic algorithms in a more efficient way." }, { "heading": "APPENDIX A SENSITIVITY TO HYPER-PARAMETER TUNING", "text": "" }, { "heading": "APPENDIX B MORE COMPARISONS BETWEEN THE LBF AND SANDWICHED LBF", "text": "" }, { "heading": "APPENDIX C COMPARING THE BLOOM FILTER TO HIERARCHICAL HASHING", "text": "The machine learning model used in the learned Bloom filters is critical because it has discrimination power between the keys and non-keys and is more efficient at identifying keys in some cases. To show its unique role, we replaced the machine learning model with another Bloom filter, turning the structure into a hierarchical Bloom filter (the learner is replaced by an initial filter). To implement the hierarchical Bloom filter, we allocate 50% of the bit budget to the initial filter and use the remaining bits to build the backup filter.\nFigure 7 shows that the hierarchical BF does not outperform the original BF under any budget of buckets, and in some cases it even achieves a worse FPR. Hence, using a random hash function to replace the learner is not a memory-efficient approach." }, { "heading": "APPENDIX D PROOF OF THE STATEMENTS", "text": "Proof of Lemma 1: Let $Z_j(x) = \mathbb{1}(s(x) \in [\tau_{j-1}, \tau_j) \mid x \notin S)$; then $Z_j(x) \sim \mathrm{Bernoulli}(p_j)$, and $m_j = \sum_{i=1}^m Z_j(x_i)$ counts the number of non-keys falling in group $j$, with $\hat{p}_j = \frac{m_j}{m}$. To upper bound the probability of the overall estimation error of $p_j$, we first evaluate its expectation, $\mathbb{E}\big(\sum_{j=1}^K |\hat{p}_j - p_j|\big)$.\nSince $m_j$ is a binomial random variable, its exact cdf is hard to compute. But by the central limit theorem, when $m$ is large, $\frac{m_j - mp_j}{\sqrt{mp_j(1-p_j)}} \longrightarrow N(0, 1)$. Thus, we can approximate $\mathbb{E}(|\hat{p}_j - p_j|) = \mathbb{E}\Big(\big|\frac{m_j - mp_j}{\sqrt{mp_j(1-p_j)}}\big|\Big) \cdot \sqrt{\frac{p_j(1-p_j)}{m}} \approx \sqrt{\frac{2}{\pi}} \cdot \sqrt{\frac{p_j(1-p_j)}{m}}$ (if $Z \sim N(0,1)$, then $\mathbb{E}(|Z|) = \sqrt{\frac{2}{\pi}}$). Then the expectation of the overall error is approximated by $\mathbb{E}\big(\sum_{j=1}^K |\hat{p}_j - p_j|\big) \approx \sqrt{\frac{2}{m\pi}} \cdot \big(\sum_{j=1}^K \sqrt{p_j(1-p_j)}\big)$, which goes to 0 as $m$ becomes larger.\nWe need to further upper bound the tail probability of $\sum_{j=1}^K |\hat{p}_j - p_j|$. 
First, we upper bound the variance of $\sum_{j=1}^K |\hat{p}_j - p_j|$:\n$$\mathrm{Var}\Big(\sum_{j=1}^K |\hat{p}_j - p_j|\Big) \le K \sum_{j=1}^K \mathrm{Var}(|\hat{p}_j - p_j|) = K \sum_{j=1}^K \Big(\mathrm{Var}(\hat{p}_j - p_j) - \mathbb{E}(|\hat{p}_j - p_j|)^2\Big) \approx \frac{K}{m}\Bigg[\sum_{j=1}^K p_j(1-p_j) - \frac{2}{\pi}\Big(\sum_{j=1}^K \sqrt{p_j(1-p_j)}\Big)^2\Bigg] := \frac{K}{m} V(p)$$\nNow, by invoking Chebyshev’s inequality,\n$$P\Big(\sum_{j=1}^K |\hat{p}_j - p_j| \ge \epsilon\Big) = P\Big(\sum_{j=1}^K |\hat{p}_j - p_j| - \mathbb{E}\Big(\sum_{j=1}^K |\hat{p}_j - p_j|\Big) \ge \epsilon - \mathbb{E}\Big(\sum_{j=1}^K |\hat{p}_j - p_j|\Big)\Big) \le \frac{\mathrm{Var}\big(\sum_{j=1}^K |\hat{p}_j - p_j|\big)}{\big(\epsilon - \mathbb{E}\big(\sum_{j=1}^K |\hat{p}_j - p_j|\big)\big)^2} = \frac{K V(p)}{m\big(\epsilon - \mathbb{E}\big(\sum_{j=1}^K |\hat{p}_j - p_j|\big)\big)^2} \longrightarrow 0 \ \text{as } m \longrightarrow \infty$$\nThus, $\sum_{j=1}^K |\hat{p}_j - p_j|$ converges to 0 in probability as $m \longrightarrow \infty$.\nMoreover, since we have\n$$\mathbb{E}\Big(\sum_{j=1}^K |\hat{p}_j - p_j|\Big) \approx \sqrt{\frac{2}{m\pi}}\Big(\sum_{j=1}^K \sqrt{p_j(1-p_j)}\Big) \le \sqrt{\frac{2}{m\pi}}\,(K-1) \quad (4)$$\n$$V(p) = \sum_{j=1}^K p_j(1-p_j) - \frac{2}{\pi}\Big(\sum_{j=1}^K \sqrt{p_j(1-p_j)}\Big)^2 \le \sum_{j=1}^K p_j(1-p_j)\Big(1 - \frac{2}{\pi}\Big) \le \Big(1 - \frac{2}{\pi}\Big)\Big(1 - \frac{1}{K}\Big) \quad (5)$$\nthen, by Eq. 4 and Eq. 5, we can upper bound $P\big[\sum_{j=1}^K |\hat{p}_j - p_j| \ge \epsilon\big]$ by\n$$P\Big(\sum_{j=1}^K |\hat{p}_j - p_j| \ge \epsilon\Big) \le \frac{K V(p)}{m\big(\epsilon - \mathbb{E}\big(\sum_{j=1}^K |\hat{p}_j - p_j|\big)\big)^2} \le \frac{(1 - \frac{2}{\pi})(K-1)}{m\big(\epsilon - \sqrt{\frac{2}{m\pi}}(K-1)\big)^2} \quad (6)$$\nWhen $m \ge \frac{2(K-1)^2}{\epsilon^2}\Big[\sqrt{\frac{1}{\pi}} + \sqrt{\frac{1 - 2/\pi}{\delta}}\Big]^2$, we have $m\big(\epsilon - \sqrt{\frac{2}{m\pi}}(K-1)\big)^2 \ge \frac{(K-1)(1 - \frac{2}{\pi})}{\delta}$; thus, $P\big[\sum_{j=1}^K |\hat{p}_j - p_j| \ge \epsilon\big] \le \delta$.\nProof of Theorem 1: For comparison, we choose $\tau = \tau_{g-1}$; for both the LBF and Ada-BF, queries with scores larger than $\tau$ are identified as keys directly by the same machine learning model. Thus, to compare the overall FPR, we only need to evaluate the FPR of queries with score lower than $\tau$.\nLet $p_0 = P[s(x) < \tau \mid x \notin S]$ be the probability that a non-key has score lower than $\tau$. Let $n_0$ denote the number of keys with score less than $\tau$, $n_0 = \sum_{i: x_i \in S} \mathbb{1}(s(x_i) < \tau)$. For a learned Bloom filter using $K$ hash functions, the expected FPR follows\n$$\mathbb{E}(\mathrm{FPR}) = (1 - p_0) + p_0\Big(1 - \big(1 - \tfrac{1}{R}\big)^{K n_0}\Big)^K = 1 - p_0 + p_0\beta^K, \quad (7)$$\nwhere $R$ is the length of the Bloom filter. For Ada-BF, assume we fix the number of groups $g$; then we only need to determine $K_{\max}$, with $K_{\min} = K_{\max} - g + 1$. Let $p_j = \Pr(\tau_{j-1} \le s(x) < \tau_j \mid x \notin S)$. The expected FPR of the Ada-BF is\n$$\mathbb{E}(\mathrm{FPR}_a) = \sum_{j=1}^{g-1} p_j\Big(1 - \big(1 - \tfrac{1}{R}\big)^{\sum_{j=1}^{g-1} K_j n_j}\Big)^{K_j} = \sum_{j=1}^{g-1} p_j \alpha^{K_j}, \quad (8)$$\nwhere $\sum_{j=1}^{g-1} n_j = n_0$. Next, we give a strategy to select $K_{\max}$ which ensures a lower FPR for Ada-BF than for the LBF.\nSelect $K_{\max} = \lfloor K + \frac{g}{2} - 1 \rfloor$. Then, with $T_j := n_{j+1} - n_j \ge 0$, we have\n$$n_0 K = K \sum_{j=1}^{g-1} n_j = K\Big(n_1(g-1) + \sum_{j=1}^{g-2} T_j(g - j - 1)\Big) = \frac{2K}{g-2}\Bigg[\frac{(g-1)(g-2)}{2} n_1 + \sum_{j=1}^{g-2} \frac{(g-2)(g-1-j)}{2} T_j\Bigg] \le \frac{2K}{g-2}\Bigg[\frac{(g-1)(g-2)}{2} n_1 + \sum_{j=1}^{g-2} \frac{(g+j-2)(g-1-j)}{2} T_j\Bigg] = \frac{2K}{g-2} \sum_{j=1}^{g-1} (j-1) n_j \quad (9)$$\nBy Eq. 9, we further get the relationship between $\alpha$ and $\beta$:\n$$\sum_{j=1}^{g-1} K_j n_j = \sum_{j=1}^{g-1} (K_{\max} - j + 1) n_j \le n_0\Big(K_{\max} - \frac{g}{2} + 1\Big) \le n_0 K \implies \alpha \le \beta.$$\nMoreover, by Eq. 3, we have\n$$\mathbb{E}(\mathrm{FPR}_a) = \frac{(1-c)\big(1 - (c\alpha)^g\big)}{(\frac{1}{\alpha} - c)\big(\alpha^g - (c\alpha)^g\big)}\,\alpha^{K_{\max}} \le \frac{(1-c)\big(1 - (c\alpha)^g\big)}{(\frac{1}{\alpha} - c)\big(\alpha^g - (c\alpha)^g\big)}\,\beta^{K_{\max}} \le \beta^{K_{\max}}\,\frac{\alpha(c-1)}{c\alpha - 1} < \mathbb{E}(\mathrm{FPR})\Big(\frac{1+\lambda}{\lambda}\,\beta^{K_{\max}-K}\Big) \le \mathbb{E}(\mathrm{FPR})\Big(\frac{1+\lambda}{\lambda}\,\beta^{\lfloor g/2 - 1 \rfloor}\Big).$$\nTherefore, as $g$ increases, the upper bound on $\mathbb{E}(\mathrm{FPR}_a)$ decreases exponentially fast. Moreover, since $\frac{1+\lambda}{\lambda}$ is a constant, when $g$ is large enough we have $\frac{1+\lambda}{\lambda}\beta^{\lfloor g/2 - 1 \rfloor} \le 1$. Thus, $\mathbb{E}(\mathrm{FPR}_a)$ becomes strictly lower than $\mathbb{E}(\mathrm{FPR})$.\nProof of Theorem 2: Let $\eta = \frac{\log(c)}{\log(\mu)} \approx \frac{\log(c)}{\log(0.618)} < 0$. By the tuning strategy described in the previous section, we require the expected number of false positive items to be similar across the groups. Thus, we have\n$$p_1 \cdot \mu^{R_1/n_1} = p_j \cdot \mu^{R_j/n_j} \implies R_j = n_j\Big(\frac{R_1}{n_1} + (j-1)\eta\Big), \quad \text{for } j \in [g-1]$$\nwhere $R_j$ is the budget of buckets for group $j$. For group $g$, since all the queries are identified as keys by the machine learning model directly, $R_g = 0$. 
Given the length of the Bloom filter for group 1, $R_1$, the total budget of buckets can be expressed as\n$$\sum_{j=1}^{g-1} R_j = \sum_{j=1}^{g-1} \Big(\frac{n_j}{n_1} R_1 + (j-1) n_j \eta\Big)$$\nLet $p_0 = \Pr(s(x) < \tau \mid x \notin S)$ and $p_j = \Pr(\tau_{j-1} \le s(x) < \tau_j \mid x \notin S)$. Let $n_0$ denote the number of keys with score less than $\tau$, $n_0 = \sum_{i: x_i \in S} \mathbb{1}(s(x_i) < \tau)$, and let $n_j$ be the number of keys in group $j$, $n_j = \sum_{i: x_i \in S} \mathbb{1}(\tau_{j-1} \le s(x_i) < \tau_j)$. Due to $\tau = \tau_{g-1}$, we have $\sum_{j=1}^{g-1} n_j = n_0$. Moreover, since $\tau_{g-1} = \tau$, queries with score higher than $\tau$ have the same FPR for both disjoint Ada-BF and the LBF. So, we only need to compare the FPR of the two methods when the score is lower than $\tau$. If the LBF and disjoint Ada-BF achieve the same optimal expected FPR, we have\n$$p_0 \cdot \mu^{R/n_0} = \sum_{j=1}^{g-1} p_j \cdot \mu^{R_j/n_j} = g \cdot p_1 \cdot \mu^{R_1/n_1}$$\n$$\implies R = \frac{n_0}{n_1} R_1 - n_0\,\frac{\log(p_0/p_1) - \log(g)}{\log(\mu)} = \sum_{j=1}^{g-1}\Bigg[\frac{n_j}{n_1} R_1 - n_j\,\frac{\log\big(1 - (\frac{1}{c})^{g}\big) - \log\big(1 - \frac{1}{c}\big) - \log(g)}{\log(\mu)}\Bigg],$$\nwhere $R$ is the budget of buckets of the LBF. Let $T_j = n_{j+1} - n_j \ge 0$. Next, we upper bound $\sum_{j=1}^{g-1} n_j$ by $\sum_{j=1}^{g-1} (j-1) n_j$:\n$$\sum_{j=1}^{g-1} n_j = n_1(g-1) + \sum_{j=1}^{g-2} T_j(g-j-1) = \frac{2}{g-2}\Bigg[\frac{(g-1)(g-2)}{2} n_1 + \sum_{j=1}^{g-2} \frac{(g-2)(g-1-j)}{2} T_j\Bigg] \le \frac{2}{g-2}\Bigg[\frac{(g-1)(g-2)}{2} n_1 + \sum_{j=1}^{g-2} \frac{(g+j-2)(g-1-j)}{2} T_j\Bigg] = \frac{2}{g-2} \sum_{j=1}^{g-1} (j-1) n_j$$\nTherefore, we can lower bound $R$:\n$$R \ge \sum_{j=1}^{g-1}\Bigg[\frac{n_j}{n_1} R_1 - (j-1) n_j\,\frac{2\big(\log(1 - (\frac{1}{c})^{g}) - \log(1 - \frac{1}{c}) - \log(g)\big)}{(g-2)\log(\mu)}\Bigg].$$\nNow, we can lower bound $R - \sum_{j=1}^{g-1} R_j$:\n$$R - \sum_{j=1}^{g-1} R_j \ge \sum_{j=1}^{g-1} (j-1) n_j \Bigg[-\eta - \frac{2\big(\log(1 - (\frac{1}{c})^{g}) - \log(1 - \frac{1}{c}) - \log(g)\big)}{(g-2)\log(\mu)}\Bigg].$$\nSince $\eta$ is a negative constant while $\frac{2(\log(1 - (\frac{1}{c})^{g}) - \log(1 - \frac{1}{c}) - \log(g))}{(g-2)\log(\mu)}$ approaches 0 as $g$ grows, when $g$ is large we have $\eta + \frac{2(\log(1 - (\frac{1}{c})^{g}) - \log(1 - \frac{1}{c}) - \log(g))}{(g-2)\log(\mu)} < 0$, and hence $R - \sum_{j=1}^{g-1} R_j$ is strictly larger than 0. So, disjoint Ada-BF consumes less memory than the LBF to achieve the same expected FPR." } ]
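As a worked instance of the sample-size requirement at the end of the proof of Lemma 1 (the concrete numbers are our own illustration, not from the paper): with $K = 5$ groups, tolerance $\epsilon = 0.1$ and confidence $\delta = 0.05$,

$$m \ge \frac{2(K-1)^2}{\epsilon^2}\Big[\sqrt{\tfrac{1}{\pi}} + \sqrt{\tfrac{1 - 2/\pi}{\delta}}\Big]^2 = \frac{2 \cdot 16}{0.01}\,(0.564 + 2.696)^2 \approx 3.4 \times 10^4,$$

so roughly 34,000 non-key samples suffice to guarantee $\sum_{j=1}^K |\hat{p}_j - p_j| \le 0.1$ with probability at least 0.95.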
2019
null
SP:e13275073d8298a924331305623d86a4b41c670e
[ "This paper is trying to answer the question why ensembles of deep neural networks trained with random initialization work so well in practice in improving accuracy. Their proposed hypothesis is that networks trained from different initializations, although all converge to a low-loss/high accuracy optimum, explore different modes in function space and therefore provide more diversity. To experimentally support their hypothesis, first they show that functions along a single training trajectory are similar, however trajectories starting from different initializations may significantly differ. The difference in function space is based on the fraction of points on which the two functions disagree in terms of their prediction. Second, they use different subspace sampling methods around a single optimum and demonstrate that they are significantly less diverse (low disagreement between predictions) than sampling from independent optima through diversity vs accuracy plots. Moreover, they comment on the recent observation that local optima are connected by low-loss tunnels. They experimentally show that even though low-loss/high accuracy path exists between local optima, these tunnels do not correspond to similar solutions in function space, further supporting the multi-mode hypothesis. The authors compare the relative benefit of subspace sampling, weight averaging and ensembling on accuracy and interpret their findings in terms of the hypothesis. ", "This paper analyzes ensembling methods in deep learning from the perspective of the loss landscapes. The authors empirically show that popular methods for learning Bayesian neural networks produce samples with limited diversity in the function space compared to modes of the loss found using different random initializations. The paper also considers the low-loss paths connecting independent local optima in the weight-space. The analysis shows that while the values of the loss and accuracy are nearly constant along the paths, the models corresponding to different points on a path define different functions with diverse predictions. The paper also demonstrates the complementary benefits of using subspace sampling/weight averaging in combination with deep ensembles and shows that relative benefits of deep ensembles are higher. " ]
Deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty and out-of-distribution robustness of deep learning models. While deep ensembles were theoretically motivated by the bootstrap, non-bootstrap ensembles trained with just random initialization also perform well in practice, which suggests that there could be other explanations for why deep ensembles work well. Bayesian neural networks, which learn distributions over the parameters of the network, are theoretically well-motivated by Bayesian principles, but do not perform as well as deep ensembles in practice, particularly under dataset shift. One possible explanation for this gap between theory and practice is that popular scalable approximate Bayesian methods tend to focus on a single mode, whereas deep ensembles tend to explore diverse modes in function space. We investigate this hypothesis by building on recent work on understanding the loss landscape of neural networks and adding our own exploration to measure the similarity of functions in the space of predictions. Our results show that random initializations explore entirely different modes, while functions along an optimization trajectory or sampled from the subspace thereof cluster within a single mode predictions-wise, despite often deviating significantly in the weight space. We demonstrate that while low-loss connectors between modes exist, they are not connected in the space of predictions. Developing the concept of the diversity–accuracy plane, we show that the decorrelation power of random initializations is unmatched by popular subspace sampling methods.
[]
[ { "authors": [ "Charles Blundell", "Julien Cornebise", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Weight uncertainty in neural networks", "venue": "In ICML,", "year": 2015 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Felix Draxler", "Kambis Veschgini", "Manfred Salmhofer", "Fred A Hamprecht" ], "title": "Essentially no barriers in neural network energy landscape", "venue": "arXiv preprint arXiv:1803.00885,", "year": 2018 }, { "authors": [ "Stanislav Fort", "Stanislaw Jastrzebski" ], "title": "Large scale structure of neural network loss landscapes", "venue": "arXiv preprint arXiv:1906.04724,", "year": 2019 }, { "authors": [ "Stanislav Fort", "Adam Scherlis" ], "title": "The Goldilocks zone: Towards better understanding of neural network loss landscapes", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Timur Garipov", "Pavel Izmailov", "Dmitrii Podoprikhin", "Dmitry P Vetrov", "Andrew G Wilson" ], "title": "Loss surfaces, mode connectivity, and fast ensembling of DNNs", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Ian J. Goodfellow", "Oriol Vinyals" ], "title": "Qualitatively characterizing neural network optimization problems", "venue": "CoRR, abs/1412.6544,", "year": 2014 }, { "authors": [ "Alex Graves" ], "title": "Practical variational inference for neural networks", "venue": "In NeurIPS,", "year": 2011 }, { "authors": [ "Fredrik K Gustafsson", "Martin Danelljan", "Thomas B Schön" ], "title": "Evaluating scalable Bayesian deep learning methods for robust computer vision", "venue": null, "year": 1906 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Pavel Izmailov", "Dmitrii Podoprikhin", "Timur Garipov", "Dmitry Vetrov", "Andrew Gordon Wilson" ], "title": "Averaging weights leads to wider optima and better generalization", "venue": "arXiv preprint arXiv:1803.05407,", "year": 2018 }, { "authors": [ "Alex Kendall", "Yarin Gal" ], "title": "What uncertainties do we need in Bayesian deep learning for computer vision", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Stefan Lee", "Senthil Purushwalkam", "Michael Cogswell", "David Crandall", "Dhruv Batra" ], "title": "Why M heads are better than one: Training a diverse ensemble of deep networks", "venue": "arXiv preprint arXiv:1511.06314,", "year": 2015 }, { "authors": [ "Chunyuan Li", "Heerad Farkhoor", "Rosanne Liu", "Jason Yosinski" ], "title": "Measuring the intrinsic dimension of objective landscapes", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Christos Louizos", "Max Welling" ], "title": 
"Multiplicative Normalizing Flows for Variational Bayesian Neural Networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-SNE", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "David JC MacKay" ], "title": "Bayesian methods for adaptive models", "venue": "PhD thesis, California Institute of Technology,", "year": 1992 }, { "authors": [ "Stephan Mandt", "Matthew D Hoffman", "David M Blei" ], "title": "Stochastic gradient descent as approximate Bayesian inference", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Radford M. Neal" ], "title": "Bayesian Learning for Neural Networks", "venue": null, "year": 1996 }, { "authors": [ "Yaniv Ovadia", "Emily Fertig", "Jie Ren", "Zachary Nado", "D Sculley", "Sebastian Nowozin", "Joshua V Dillon", "Balaji Lakshminarayanan", "Jasper Snoek" ], "title": "Can you trust your model’s uncertainty? Evaluating predictive uncertainty under dataset shift", "venue": "arXiv preprint arXiv:1906.02530,", "year": 2019 }, { "authors": [ "Jost Tobias Springenberg", "Aaron Klein", "Stefan Falkner", "Frank Hutter" ], "title": "Bayesian optimization with robust Bayesian neural networks", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": null, "year": 2014 }, { "authors": [ "Max Welling", "Yee Whye Teh" ], "title": "Bayesian Learning via Stochastic Gradient Langevin Dynamics", "venue": "In ICML,", "year": 2011 }, { "authors": [ "Yeming Wen", "Paul Vicol", "Jimmy Ba", "Dustin Tran", "Roger Grosse" ], "title": "Flipout: Efficient pseudoindependent weight perturbations on mini-batches", "venue": "arXiv preprint arXiv:1803.04386,", "year": 2018 }, { "authors": [ "Izmailov" ], "title": "B ADDITIONAL ABLATION EXPERIMENTS B.1 EFFECT OF RANDOMNESS: RANDOM INITIALIZATION VERSUS RANDOM SHUFFLING Random seed affects both initial parameter values as well the order of shuffling of data points. We run experiments to decouple the effect of random initialization and shuffling", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Consider a typical classification problem, where xn ∈ RD denotes the D-dimensional features and yn ∈ [1, . . . ,K] denotes the class label. Assume we have a parametric model p(y|x,θ) for the conditional distribution where θ denotes weights and biases of a neural network, and p(θ) is a prior distribution over parameters. The Bayesian posterior over parameters is given by\np(θ|{xn, yn}Nn=1) ∝ p(θ) N∏\nn=1\np(yn|xn,θ). (1)\nComputing the exact posterior distribution over θ is computationally expensive (if not impossible) when p(yn|xn,θ) is a deep neural network. A variety of approximations have been developed for Bayesian neural networks, including Laplace approximation (MacKay, 1992), Markov chain Monte Carlo methods (Neal, 1996; Welling & Teh, 2011; Springenberg et al., 2016), variational Bayesian methods (Graves, 2011; Blundell et al., 2015; Louizos & Welling, 2017; Wen et al., 2018) and Monte-Carlo dropout (Gal & Ghahramani, 2016; Srivastava et al., 2014). While computing the posterior is challenging, it is usually easy to perform maximum-a-posteriori (MAP) estimation, which corresponds to a mode of the posterior. The MAP solution can be written as the minimizer of the following loss (negative log likelihood + negative log prior):\nθ̂MAP = argmin θ L(θ, {xn, yn}Nn=1) = argmin θ\n− log p(θ)− N∑\nn=1\nlog p(yn|xn,θ). (2)\nThe MAP solution is computationally efficient, but only gives a point estimate and not a distribution over parameters. Deep ensembles, proposed by Lakshminarayanan et al. (2017), train an ensemble\nof neural networks by initializing at M different values and repeating the minimization multiple times which could lead to M different solutions, if the loss is non-convex. (Lakshminarayanan et al. (2017) found adversarial training provides additional benefits in some of their experiments, but we will ignore adversarial training and focus only on ensembles with random initialization in this paper.)\nGiven finite training data, many parameter values could equally well explain the observations, and capturing these diverse solutions is crucial for quantifying epistemic uncertainty (Kendall & Gal, 2017). Bayesian neural networks learn a distribution over weights, and a good posterior approximation should be able to learn multi-modal posterior distributions in theory. Deep ensembles were inspired by the bootstrap (Breiman, 1996), which has nice theoretical properties. However, it has been empirically observed by Lakshminarayanan et al. (2017); Lee et al. (2015) that training individual networks with just random initialization is sufficient in practice and using the bootstrap even hurts performance in some cases (e.g. for small ensemble sizes). Furthermore, Ovadia et al. (2019) and Gustafsson et al. (2019) independently benchmarked existing methods for uncertainty quantification on a variety of datasets and architectures, and observed that ensembles tend to outperform approximate Bayesian neural networks in terms of both accuracy and uncertainty, particularly under dataset shift.\nThese empirical observations raise an important question: Why do ensembles trained with just random initialization work so well in practice? One possible hypothesis is that ensembles tend to sample from different modes1 in function space, whereas variational Bayesian methods (which minimize DKL(q(θ)|p(θ|{xn, yn}Nn=1)) might fail to explore multiple modes even though they are effective at capturing uncertainty within a single mode. 
Recent work on understanding loss landscapes (Fort & Jastrzebski, 2019; Draxler et al., 2018; Garipov et al., 2018) allows us to investigate this hypothesis. Note that prior work on loss landscapes has focused on mode-connectivity and low-loss tunnels, but has not explicitly focused on how diverse the functions from different modes are, beyond an initial exploration in Fort & Jastrzebski (2019). Our findings show that:\n• The functions sampled along a single training trajectory or subspace thereof (e.g. diagonal Gaussian, low-rank Gaussian and Dropout subspaces) tend to be very similar in predictions (while potentially far away in the weight space), whereas functions sampled from different randomly initialized trajectories tend to be very diverse.\n• Solution modes are connected in the loss landscape, but they are distinct in the space of predictions. Low-loss tunnels create functions with near-identical low values of loss along the path; however, these functions tend to be very different in function space, changing significantly in the middle of the tunnel." }, { "heading": "2 BACKGROUND", "text": "The loss landscape of neural networks (also called the objective landscape) – the space of weights and biases that the network navigates – is typically a very high dimensional function and therefore could potentially be very complicated. However, many empirical results show interesting properties of the loss surface. Goodfellow & Vinyals (2014) observed that the loss along a linear path from an initialization to the corresponding optimum is monotonically decreasing, encountering no significant obstacles along the way. Li et al. (2018) demonstrated that constraining optimization to a random, low-dimensional hyperplane in the weight space leads to results comparable to full-space optimization, provided that the dimension exceeds a modest threshold. This was geometrically understood and extended in (Fort & Scherlis, 2019). Garipov et al. (2018); Draxler et al. (2018) demonstrate that while a linear path between two independent optima hits a high-loss area in the middle, there in fact exist continuous, low-loss paths connecting any pair of optima. These observations are unified into a single phenomenological model in (Fort & Jastrzebski, 2019). While independent, low-loss optima in the loss landscape are connected, Fort & Jastrzebski (2019) provide an early indication that they in fact represent very different functions in terms of their predictions. Therefore the connectivity cannot be due to trivial symmetries of the network which would keep the input–output mapping intact.\n1We use the term mode to refer to unique functions $f_\theta(x)$. Due to weight space symmetries, different parameters could correspond to the same function, i.e. $f_{\theta_1}(x) = f_{\theta_2}(x)$ even though $\theta_1 \neq \theta_2$, but we ignore this aspect and leave it to future work." }, { "heading": "3 VISUALIZING FUNCTION SIMILARITY ACROSS INITIALIZATIONS", "text": "We train convolutional neural networks on the CIFAR-10 (Krizhevsky, 2009) dataset (a minimal sketch of SmallCNN follows the list):\n• SmallCNN: channels [16,32,32] for 10 epochs, which achieves 64% test accuracy.\n• MediumCNN: channels [64,128,256,256] for 20 epochs, which achieves 70% test accuracy.\n• ResNet20v1: trained for 200 epochs, which achieves 90% test accuracy.
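A minimal sketch of SmallCNN consistent with the description above. The channel widths, optimizer, learning rate, batch size and dropout rate come from the text; the 3x3 kernels, pooling layers and classification head are our assumptions, since the paper does not specify them.

```python
import tensorflow as tf

def small_cnn(num_classes: int = 10) -> tf.keras.Model:
    # Channels [16, 32, 32] as in the paper; kernel sizes and pooling assumed.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32, 32, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.03),  # dropout rate stated in the text
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    # Adam with the constant learning rate used for the similarity experiments.
    model.compile(optimizer=tf.keras.optimizers.Adam(1.6e-3),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# Training uses batch size 128 and no data augmentation for SmallCNN/MediumCNN:
# small_cnn().fit(x_train, y_train, batch_size=128, epochs=10)
```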
We use the Adam optimizer (Kingma & Ba, 2015) for training and, to make sure the effects we observe are general, we validated that our results hold for vanilla stochastic gradient descent (SGD) as well, which we do not show in this paper. We use batch size 128 and dropout 0.03 for training SmallCNN and MediumCNN. To generate the weight space and prediction space similarity results, we use a constant learning rate of $1.6 \times 10^{-3}$, unless specified otherwise. We do not use any data augmentation with those two architectures. For ResNet20v1, we use the data augmentation and learning rate schedule used in the Keras examples2. The overall trends are consistent across all architectures, datasets, and other hyperparameter and non-linearity choices we explored." }, { "heading": "3.1 SIMILARITY OF FUNCTIONS WITHIN AND ACROSS TRAJECTORIES", "text": "First, we compute the similarity between different checkpoints along a single trajectory. We plot the cosine similarity in weight space in Figure 2(a) and the disagreement in function space, defined as the fraction of points the checkpoints disagree on, in Figure 2(b). We observe that the checkpoints along a trajectory are largely similar both in the weight space and in the function space. Next, we evaluate how diverse the final solutions from different random initializations are. The functions from different initializations are different, as demonstrated by the similarity plots in Figure 3. Comparing this with Figures 2(a) and 2(b), we see that functions within a single trajectory exhibit higher similarity, while functions across different trajectories exhibit much lower similarity.\nNext, we take the predictions from different checkpoints along the individual training trajectories from multiple initializations and compute a t-SNE plot (Maaten & Hinton, 2008) to visualize their similarity in function space. More precisely, we take the softmax output for a set of points, flatten the vector and use it as the input to the t-SNE plot. Figure 2(c) shows that the functions explored by different trajectories (denoted by circles with different colors) are far away, while functions explored within a single trajectory (circles with the same color) tend to be much more similar." }, { "heading": "3.2 SIMILARITY OF FUNCTIONS ACROSS SUBSPACES FROM EACH TRAJECTORY", "text": "In addition to the checkpoints along a trajectory, we also construct subspaces based on each individual trajectory. Scalable Bayesian methods typically compute statistics based on the weights along a trajectory, hence visualizing the diversity of functions from these subspaces helps in understanding the difference between Bayesian neural networks and ensembles. We use a representative set of four subspace sampling methods: a random subspace, a Monte Carlo dropout subspace, a diagonal Gaussian approximation, and a low-rank covariance matrix Gaussian approximation. In the descriptions of the methods, let $\vec{w}_0$ be the current weight-space position (the weights and biases of our trained neural net) around which we will construct the subspace.\n2https://keras.io/examples/cifar10_resnet/\n• Random subspace sampling: We start at an optimized solution $\vec{w}_0$ and choose a random direction $\hat{v}$ in the weight space. We step in that direction by choosing different values of $t$ and looking at predictions at configurations $\vec{w}_0 + t\hat{v}$. 
We do this for many random directions $\hat{v}$.\n• Monte Carlo dropout subspace: We start at an optimized solution $\vec{w}_0$ and apply dropout with a randomly chosen $p_{\mathrm{keep}}$ to it. We do this many times, each time choosing a random $p_{\mathrm{keep}}$, and look at predictions at $\mathrm{dropout}_{p_{\mathrm{keep}}}(\vec{w}_0)$.\n• Diagonal Gaussian subspace: We start at an optimized solution $\vec{w}_0$ and look at the most recent iterations of training preceding it. For each trainable parameter $w_i$, we calculate its mean $\mathrm{mean}_i$ and standard deviation $\mathrm{std}_i$. To sample solutions from the subspace, we draw each parameter independently as $w_i \sim \mathcal{N}(\mathrm{mean}_i, \mathrm{std}_i)$. We repeat this many times and obtain predictions for each sample. This corresponds to sampling from a normal distribution with a diagonal covariance matrix.\n• Low-rank Gaussian subspace: We start at an optimized solution $\vec{w}_0$ and look at the most recent iterations of training preceding it. For each trainable parameter $w_i$, we calculate its mean $\mathrm{mean}_i$. For a rank-$k$ approximation, we calculate the top $k$ principal components of the weight vectors in the most recent iterations of training, $\{\vec{p}_i \in \mathbb{R}^{\mathrm{params}}\}_{i=1}^k$. We sample from a $k$-dimensional normal distribution and obtain weight configurations as $\vec{w} = \overrightarrow{\mathrm{mean}} + \sum_{i=1}^k z_i \vec{p}_i$ with $z_i \sim \mathcal{N}(0, 1)$.\nFigure 4 shows that functions sampled from a subspace (denoted by colored squares) corresponding to a particular initialization are much more similar to each other. While some subspaces are more diverse, they still do not overlap with functions from another randomly initialized trajectory.\nDiversity versus Accuracy plots To illustrate the difference in another fashion, we sample functions from a single subspace and plot diversity (as measured by disagreement between predictions) versus accuracy in Figure 5. Comparing these subspace points (colored dots) to the baseline optimum (green star) and the optima from different random initializations (denoted by red stars), we observe that random initializations are much more effective at sampling diverse and accurate solutions than subspace-based methods constructed from a single trajectory.\nThe diversity score used above quantifies the difference of two functions by measuring the fraction of points on which their predictions differ. We chose this approach due to its simplicity; one could also compute the KL-divergence or other distances between the output probability distributions. Let $d_{\mathrm{diff}}$ denote the fraction of predictions on which the two functions differ. It is 0 when the two functions make identical class predictions, and 1 when they differ on every single example. To account for the fact that the lower the accuracy of a function, the higher its potential $d_{\mathrm{diff}}$ due to the possibility of the wrong answers being random and uncorrelated between the two functions, we normalize this by $(1 - a)$, where $a$ is the accuracy. For a reference function $f^*$ of accuracy $a^*$ and a function $f$ of accuracy $a$ whose predictions are obtained by randomly perturbing the predictions of $f^*$, the expected fractional difference is $d_{\mathrm{diff}} = (C-1)(a^* - a)/(a^*C - 1)$, where $C$ is the number of classes. If the function $f$ of accuracy $a$ were entirely independent of $f^*$, then the expected fractional difference would be $d_{\mathrm{diff}} = (1-a^*)a + (1-a)a^* + (1-a^*)(1-a)(C-2)/(C-1)$. These two limiting behaviours – the function $f$ being derived from $f^*$ by a perturbation, and $f$ and $f^*$ being completely independent – form the two dashed lines in Figure 5 (and are implemented in the short sketch below). We refer to Appendix D for further details on the limiting curves. The diversity reached is not as high as the theoretical optimum even for the independently initialized and optimized solutions, which provides scope for future work.
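For concreteness, a direct implementation of the two limiting curves as stated above (the function names are ours):

```python
def ddiff_perturbed(a, a_star, C):
    # Most-correlated limit: f obtained by randomly perturbing
    # the predictions of the reference f*.
    return (C - 1) * (a_star - a) / (a_star * C - 1)

def ddiff_independent(a, a_star, C):
    # Fully independent limit: errors of f and f* are uncorrelated.
    return ((1 - a_star) * a + (1 - a) * a_star
            + (1 - a_star) * (1 - a) * (C - 2) / (C - 1))

# e.g., CIFAR-10 (C=10) with reference accuracy 64% and equal accuracy a = a*:
# ddiff_perturbed(0.64, 0.64, 10)   == 0.0
# ddiff_independent(0.64, 0.64, 10) ~= 0.58
```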
" }, { "heading": "3.3 IDENTICAL LOSS DOES NOT IMPLY IDENTICAL FUNCTIONS IN PREDICTION SPACE", "text": "Figure 6 shows the radial loss landscape (on the train as well as the validation set) along the directions of two different optima. The left subplot shows that different trajectories achieve similar values of the loss, and the right subplot shows the similarity of these functions to their respective optima (in particular, the fraction of labels on which their predictions differ, divided by their error rate). While the loss values from different optima are similar, the functions are different, which confirms that random initialization leads to different modes in function space.\nWe construct a low-loss tunnel between different optima using the procedure proposed by Fort & Jastrzebski (2019), which is a simplification of the procedures proposed in Garipov et al. (2018) and Draxler et al. (2018). As shown in Figure 7(a), we start at the linear interpolation point (denoted by the black line) and reach the closest point on the manifold by minimizing the training loss. The minima of the training loss are denoted by the yellow line in the manifolds. Figure 7(b) confirms that the tunnel is indeed low-loss.\nIn order to visualize the 2-dimensional cut through the loss landscape and the associated predictions along a curved low-loss path, we divide the path into linear segments, and compute the loss and prediction similarities on a triangle given by this segment on one side and the origin of the weight space on the other. We perform this operation on each of the linear segments from which the low-loss path is constructed, and place them next to each other for visualization. Figure 8 visualizes the loss along the manifold, as well as the similarity to the original optima. Note that the regions between radial yellow lines consist of segments, and we stitch these segments together in Figure 8. The accuracy plots show that as we traverse along the low-loss tunnel, the accuracy remains fairly constant, as expected. However, the prediction similarity plot shows that the low-loss tunnel does not correspond to similar solutions in function space. What it shows is that while the modes are connected in terms of accuracy/loss, their functional forms remain distinct and they do not collapse into a single mode." }, { "heading": "4 EVALUATING THE RELATIVE EFFECTS OF ENSEMBLING VERSUS SUBSPACE METHODS", "text": "Our observations in the previous section suggest that subspace-based methods and ensembling should provide complementary benefits in terms of uncertainty and accuracy. To test this, we evaluate the performance of the following four variants using SmallCNN on CIFAR-10 (a minimal sketch of the combined variant follows the list):\n• Baseline: the optimum at the end of a single training trajectory.\n• Subspace sampling: average predictions over the solutions sampled from a subspace.\n• Ensemble: train the baseline multiple times with random initialization and average the predictions.\n• Ensemble + Subspace sampling: train multiple times with random initialization, and use subspace sampling within each trajectory.
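A minimal sketch of the combined "Ensemble + Subspace sampling" variant using the diagonal Gaussian subspace of Section 3.2. It assumes the tail of each trajectory's weight iterates is available as an array; the sample count S and all names are illustrative.

```python
import numpy as np

def sample_diag_gaussian(recent_weights):
    # recent_weights: (n_iters, n_params) array of weight iterates from
    # the tail of one training trajectory.
    mean, std = recent_weights.mean(axis=0), recent_weights.std(axis=0)
    return np.random.normal(mean, std)

def ensemble_plus_subspace_predict(trajectories, predict_fn, x_test, S=10):
    """trajectories: list of (n_iters, n_params) arrays, one per random init.
    predict_fn(w, x) -> class-probability matrix for flattened weights w."""
    probs = []
    for recent_weights in trajectories:
        for _ in range(S):  # S subspace samples within each mode
            w = sample_diag_gaussian(recent_weights)
            probs.append(predict_fn(w, x_test))
    # Average over both modes (ensembling) and within-mode subspace samples.
    return np.mean(probs, axis=0)
```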
Figures 9(a) and 9(b) show the results for the low-rank Gaussian subspace and the diagonal Gaussian subspace, respectively. The results validate our hypothesis as (i) subspace sampling and ensembling provide complementary benefits, and (ii) the relative benefits of ensembling are higher as it averages predictions over more diverse solutions.\nWeight averaging within a subspace One could use the mean and diagonal/low-rank variance to approximate each mode of the posterior; however, that increases the number of parameters required for each mode. Using just the mean weight for each mode would not increase the number of parameters. Izmailov et al. (2018) proposed stochastic weight averaging (SWA) for better generalization. One could also compute an (exponential moving) average of the weights along the trajectory, inspired by Polyak-Ruppert averaging in convex optimization (see also Mandt et al. (2017) for a Bayesian view on iterate averaging). As weight averaging has already been studied by Izmailov et al. (2018), we do not discuss it in detail. Figure S1 provides an illustration of why these strategies might help with generalization. We use weight averaging (WA) on the last few epochs, which corresponds to using the mean of the subspace within each mode. Figure 10(a) shows that weight averaging achieves better performance within each mode, and ensemble + WA performs as well as the ensemble + subspace combination methods, without any additional parameter overhead.\n(a) Accuracy & Brier: Weight Averaging vs. Ensemble (b) Results on CIFAR-10-C: Accuracy & Brier versus Corruption Intensity\nFigure 10: Results on CIFAR-10 using SmallCNN: clean test set and CIFAR-10-C corrupted test set.\nFigure 10(b) shows accuracy and Brier score on CIFAR-10, both on the usual test set (corresponding to the intensity = 0 column) as well as on the CIFAR-10-C benchmark proposed by Hendrycks & Dietterich (2019), which contains corrupted versions of CIFAR-10 with varying intensity values (1-5), making it useful to verify calibration under dataset shift (Ovadia et al., 2019). We see that ensembling and weight-averaging provide complementary benefits. WA improves over the vanilla baseline, but combining WA with ensembling over multiple random initializations improves performance further. Figure 9 reports accuracy and Brier score on the usual CIFAR-10 test set as a function of ensemble size. Under dataset shift, it is particularly important to have diverse functions to avoid overconfident predictions (as averaging over similar functions would not reduce overconfidence)." }, { "heading": "4.1 RESULTS ON IMAGENET", "text": "To illustrate the effect on another challenging dataset, we repeat these experiments on ImageNet (Deng et al., 2009) using the same ResNet20V1 architecture. Due to computational constraints, we focus mainly on the experiment decoupling the effect of weight averaging versus ensembling. Figure 11(a) shows the complementary effects of ensembling and weight averaging; Figure 11(b) shows results on a subset of ImageNet-C, demonstrating that these trends are similar to those observed on CIFAR-10." }, { "heading": "5 DISCUSSION", "text": "Our results show that trajectories of randomly initialized neural networks explore different modes in function space, which explains why deep ensembles with random initializations help. They are essentially orthogonal to each other in the space of weights and very diverse in terms of their predictions. While these modes can be connected via optimized low-loss paths between them, we demonstrate that they correspond to distinct functions in terms of their predictions. 
Therefore the connectivity in the loss landscape does not imply connectivity in the space of functions.\nSubspace sampling methods such as weight averaging, Monte Carlo dropout, and various versions of local Gaussian approximations sample functions that might lie relatively far from the starting point in the weight space; however, they remain in the vicinity of their starting point in terms of predictions, giving rise to an insufficiently diverse set of functions. Using the concept of the diversity–accuracy plane, we demonstrate empirically that these subspace sampling methods never reach the combination of diversity and accuracy that independently trained models do, limiting their usefulness for ensembling." }, { "heading": "B ADDITIONAL ABLATION EXPERIMENTS", "text": "B.1 EFFECT OF RANDOMNESS: RANDOM INITIALIZATION VERSUS RANDOM SHUFFLING\nThe random seed affects both the initial parameter values and the order in which data points are shuffled. We run experiments to decouple the effects of random initialization and shuffling; Figure S2 shows the results. We observe that both of them provide complementary sources of randomness, with random initialization being the dominant of the two. As expected, random mini-batch shuffling adds more randomness at higher learning rates due to gradient noise.\nFigure S2: The effect of random initializations and random training batches on the diversity of predictions." }, { "heading": "C ADDITIONAL DIVERSITY – ACCURACY RESULTS ON CIFAR-100", "text": "We run additional experiments comparing the diversity of the solutions found versus their test accuracy on CIFAR-100. CIFAR-100 is an intermediate step between CIFAR-10 and ImageNet, and is overall much more challenging to learn than CIFAR-10. Our additional results are presented in Figure S3. Solutions obtained by the subspace sampling methods described in Section 4 have a worse trade-off between prediction diversity (needed for ensembling) and accuracy, compared to independently initialized and trained optima. This is consistent with our results on CIFAR-10 in Figure 5.\nFigure S3: Diversity versus accuracy plots for a ResNet20v1 trained on CIFAR-100." }, { "heading": "D DERIVING THE UPPER AND LOWER LIMIT CURVES IN THE DIVERSITY–ACCURACY PLOTS", "text": "In Figures 5 and S3 we bound our empirical results by two theoretically derived curves, limiting the expected trade-off between diversity and accuracy in the best and worst case scenarios. The resulting functions are presented in the main text in Section 3.2. We show the detailed derivations here.\nGiven a $C$-class classification problem and a reference solution with accuracy $a^*$, we would like to obtain a function $d_{\mathrm{diff}}(a)$ which gives the fraction of labels on which another solution disagrees with the reference solution, as a function of its accuracy $a$.\nD.1 UNCORRELATED PREDICTIONS – THE BEST CASE\nThe best case scenario is when the predicted labels are uncorrelated with the reference solution’s labels. On a particular example, the probability that the reference solution got it correct is $a^*$, and the probability that our solution got it correct is $a$. On those examples, the predictions do not differ, since both have to be equal to the ground truth label. The probability that the reference solution is correct on an example while our solution is wrong is $a^*(1-a)$. The probability that the reference solution is wrong on an example while our solution is correct is $(1-a^*)a$. 
On the examples where both solutions are wrong (probability $(1-a^*)(1-a)$) there are two cases: a) the two solutions agree (an additional factor of $1/(C-1)$) or b) they disagree (an additional factor of $(C-2)/(C-1)$). Only case b) contributes to the fraction of labels on which they disagree. Hence we end up with\n$$d_{\mathrm{diff}}(a; a^*, C) = (1-a^*)a + (1-a)a^* + (1-a^*)(1-a)\frac{C-2}{C-1}. \quad (3)$$\nD.2 CORRELATED PREDICTIONS – THE WORST CASE\nThe other extreme case is when the predictions of our new solution are just the predictions of the reference solution perturbed by perturbations of different strengths. Then the solutions retain a great amount of correlation.\nLet the probability of a label changing be $p$. We consider 4 cases: a) the label of a correctly classified image does not flip (probability $a^*(1-p)$), b) it flips (probability $a^*p$), c) an incorrectly labelled image does not flip (probability $(1-a^*)(1-p)$), and d) it flips (probability $(1-a^*)p$).\nThe resulting accuracy $a(p)$ obtains a contribution $a^*(1-p)$ from case a) and, with probability $1/(C-1)$, a contribution $(1-a^*)p$ from case d). Therefore $a(p) = a^*(1-p) + p(1-a^*)/(C-1)$. Inverting this relationship, we get $p(a) = (C-1)(a^*-a)/(Ca^*-1)$. The fraction of labels on which the solutions disagree is simply $p$ by our definition of $p$, and therefore\n$$d_{\mathrm{diff}}(a; a^*, C) = \frac{(C-1)(a^*-a)}{Ca^*-1}. \quad (4)$$" } ]
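As a quick worked check of the two limits (our own numbers, using $C = 10$ classes and the SmallCNN reference accuracy $a^* = 0.64$ from Section 3): for a second solution of accuracy $a = 0.5$,

$$d_{\mathrm{diff}}^{\mathrm{pert}} = \frac{9\,(0.64 - 0.5)}{10 \cdot 0.64 - 1} \approx 0.23, \qquad d_{\mathrm{diff}}^{\mathrm{indep}} = 0.36 \cdot 0.5 + 0.5 \cdot 0.64 + 0.36 \cdot 0.5 \cdot \tfrac{8}{9} = 0.66,$$

i.e., a perturbed copy would disagree with the reference on roughly 23% of examples, while a fully independent solution of the same accuracy would disagree on about 66%; this gap is exactly what the diversity–accuracy plots visualize.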
2019
DEEP ENSEMBLES: A LOSS LANDSCAPE PERSPECTIVE
SP:d5d2c965b30b18749ef11e08271d76ff9c556329
[ "In this paper, the authors present an approach for semi-supervised learning which combines noisy labels with boosting. In a first step, the labeled instances are used to train a set of classifiers, and these are used to create noisy labels for the unlabeled instances. Then, an EM procedure is used to estimate the noise level of each instance. Finally, a version of AdaBoost which accounts for instance noise levels is proposed to create a final classifier. A limited set of experiments suggests the proposed approach is competitive with existing approaches.", "The authors propose a new semi-supervised boosting approach. The approach takes a set of supervised learning algorithms to simulate \"crowd-source\" labels of the unlabeled data, which are then used to generate a noisy label per unlabeled instance. The noise level is then estimated with an agreement-based scheme, and fed to a modified AdaBoost algorithm that is more noise-tolerant given the noise level. Some theoretical guarantee of the modified AdaBoost algorithm is derived and promising experiment results are demonstrated." ]
Attention to semi-supervised learning is growing in machine learning as the price of expertly labelling data increases. Like most previous works in the area, we focus on improving an algorithm’s ability to discover the inherent properties of the entire dataset from a few expertly labelled samples. In this paper we introduce Boosting via Self Labelling (BSL), a solution to semi-supervised boosting when there is only limited access to labelled instances. Our goal is to learn a classifier that is trained on a dataset generated by combining the generalizations of different algorithms, each of which has been trained with a limited amount of supervised training samples. Our method builds upon a combination of several different components. First, an inference-aided ensemble algorithm developed on a set of weak classifiers offers the initial noisy labels. Second, an agreement-based estimation approach returns the average error rates of the noisy labels. Third and finally, a noise-resistant boosting algorithm trains over the noisy labels and their error rates to describe the underlying structure as closely as possible. We provide both analytical justifications and experimental results to back the performance of our model. On several benchmark datasets, our results demonstrate that BSL is able to outperform state-of-the-art semi-supervised methods consistently, achieving over 90% test accuracy with only 10% of the data being labelled.
[ { "affiliations": [], "name": "SELF LABELLING" } ]
[ { "authors": [ "Avrim Blum", "Tom Mitchell" ], "title": "Combining labeled and unlabeled data with co-training", "venue": "In Proceedings of the eleventh annual conference on Computational learning theory,", "year": 1998 }, { "authors": [ "Jakramate Bootkrajang", "Ata Kabán" ], "title": "Boosting in the presence of label noise", "venue": "arXiv preprint arXiv:1309.6818,", "year": 2013 }, { "authors": [ "Yohan Chon", "Nicholas D Lane", "Fan Li", "Hojung Cha", "Feng Zhao" ], "title": "Automatically characterizing places with opportunistic crowdsensing using smartphones", "venue": "In Proceedings of the 2012 ACM Conference on Ubiquitous Computing,", "year": 2012 }, { "authors": [ "Alexander Philip Dawid", "Allan M Skene" ], "title": "Maximum likelihood estimation of observer error-rates using the em algorithm", "venue": "Journal of the Royal Statistical Society: Series C (Applied Statistics),", "year": 1979 }, { "authors": [ "Ayhan Demiriz", "Kristin P Bennett", "Mark J Embrechts" ], "title": "Semi-supervised clustering using genetic algorithms", "venue": null, "year": 1999 }, { "authors": [ "Carlos Domingo", "Osamu Watanabe" ], "title": "Madaboost: A modification of adaboost", "venue": "In COLT, pp", "year": 2000 }, { "authors": [ "Yoav Freund", "Robert Schapire" ], "title": "A short introduction to boosting", "venue": "Journal-Japanese Society For Artificial Intelligence,", "year": 1999 }, { "authors": [ "Akinori Fujino", "Naonori Ueda", "Kazumi Saito" ], "title": "A hybrid generative/discriminative approach to semi-supervised classifier design", "venue": "In Proceedings of the National Conference on Artificial Intelligence,", "year": 1999 }, { "authors": [ "Yves Grandvalet", "Yoshua Bengio" ], "title": "Semi-supervised learning by entropy minimization", "venue": "In Advances in neural information processing systems,", "year": 2005 }, { "authors": [ "Trevor Hastie", "Saharon Rosset", "Ji Zhu", "Hui Zou" ], "title": "Multi-class adaboost", "venue": "Statistics and its Interface,", "year": 2009 }, { "authors": [ "David R Karger", "Sewoong Oh", "Devavrat Shah" ], "title": "Iterative learning for reliable crowdsourcing systems. 
", "venue": "In Advances in Neural Information Processing Systems,", "year": 2011 }, { "authors": [ "David R Karger", "Sewoong Oh", "Devavrat Shah" ], "title": "Budget-optimal task allocation for reliable crowdsourcing systems", "venue": "Operations Research,", "year": 2014 }, { "authors": [ "Hyun-Chul Kim", "Zoubin Ghahramani" ], "title": "Bayesian classifier combination", "venue": "In Artificial Intelligence and Statistics,", "year": 2012 }, { "authors": [ "Samuli Laine", "Timo Aila" ], "title": "Temporal ensembling for semi-supervised learning", "venue": "arXiv preprint arXiv:1610.02242,", "year": 2016 }, { "authors": [ "Qiang Liu", "Jian Peng", "Alexander T Ihler" ], "title": "Variational inference for crowdsourcing", "venue": "Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Shin Ishii", "Masanori Koyama" ], "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Nagarajan Natarajan", "Inderjit S Dhillon", "Pradeep K Ravikumar", "Ambuj Tewari" ], "title": "Learning with noisy labels", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Avital Oliver", "Augustus Odena", "Colin Raffel", "Ekin Dogus Cubuk", "Ian Goodfellow" ], "title": "Realistic evaluation of semi-supervised learning algorithms", "venue": "URL https://arxiv.org/pdf/1804.09170.pdf", "year": 2018 }, { "authors": [ "F. Pedregosa", "G. Varoquaux", "A. Gramfort", "V. Michel", "B. Thirion", "O. Grisel", "M. Blondel", "P. Prettenhofer", "R. Weiss", "V. Dubourg", "J. Vanderplas", "A. Passos", "D. Cournapeau", "M. Brucher", "M. Perrot", "E. Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Gunnar Rätsch", "Takashi Onoda", "K-R Müller" ], "title": "Soft margins for adaboost", "venue": "Machine learning,", "year": 2001 }, { "authors": [ "Vikas C Raykar", "Shipeng Yu" ], "title": "Eliminating spammers and ranking annotators for crowdsourced labeling tasks", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Vikas C Raykar", "Shipeng Yu", "Linda H Zhao", "Gerardo Hermosillo Valadez", "Charles Florin", "Luca Bogoni", "Linda Moy" ], "title": "Learning from crowds", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Robert E Schapire" ], "title": "Explaining adaboost", "venue": "In Empirical inference,", "year": 2013 }, { "authors": [ "Padhraic Smyth", "Usama M Fayyad", "Michael C Burl", "Pietro Perona", "Pierre Baldi" ], "title": "Inferring ground truth from subjective labelling of venus images", "venue": "In Advances in neural information processing systems,", "year": 1995 }, { "authors": [ "Peter Welinder", "Steve Branson", "Pietro Perona", "Serge J Belongie" ], "title": "The multidimensional wisdom of crowds", "venue": "In Advances in neural information processing systems,", "year": 2010 }, { "authors": [ "Jacob Whitehill", "Ting-fan Wu", "Jacob Bergsma", "Javier R Movellan", "Paul L Ruvolo" ], "title": "Whose vote should count more: Optimal integration of labels from labelers of unknown expertise
", "venue": "In Advances in Neural Information Processing Systems,", "year": 2009 }, { "authors": [ "Xiaojin Jerry Zhu" ], "title": "Semi-supervised learning literature survey", "venue": "Technical report, University of Wisconsin-Madison Department of Computer Sciences,", "year": 2005 } ]
[ { "heading": "1 INTRODUCTION", "text": "The rise of the Internet has made it easy to collect massive amounts of data to perform machine learning tasks. However, providing quality labels to each of the samples collected within these large datasets is a long and expensive process. There is a rich literature aiming to alleviate this issue, including using techniques from unsupervised machine learning and crowdsourcing. In this paper, we approach the problem by combining concepts from crowdsourcing, learning with noisy data and boosting to propose a novel framework: Boosting via Self Labelling (BSL).\nOur aim is to develop and leverage i) machine learning and estimation approaches to create selflabels for unlabelled instances, and ii) a noise-resistant learning procedure to speed up the performance of the seminal AdaBoost algorithm. The framework consists of mainly three steps:\n1. Accurately predict labels for the unlabelled part of the data using a set of supervised classifiers trained upon a small labelled dataset. Each classifier in our set is analogous to an agent in a crowdsourcing setting. As a result, an inference method can be used to aggregate each of the classifiers predictions and output an accurate noisy label for each of the unlabelled data points.\n2. The second step aims to estimate the noise rate of the generated noisy labels by checking how often the noisy labels generated in step 1 would agree with a particular classifier. This second order statistic suffices to return us the error rates.\n3. The third step of our approach looks at producing a robust boosting method that is trained over the generated noisy data. Classically AdaBoost does not perform well under noisy data compounding errors for each point and progressively creating a worse classifier. The third step introduces a noise resistant version of AdaBoost which relies on the noise and error rate estimated in step 2. This results in a final classifier which can be compared against different semi-supervised algorithms.\nBSL builds upon mainly two lines of similar works on boosting without cleanly labelled data:\n1. Semi-supervised boosting (Fujino et al., 2005; Blum & Mitchell, 1998; Laine & Aila, 2016; Grandvalet & Bengio, 2005) uses semi-supervised algorithms (e.g., clustering) to generate ar-\ntificial or proxy labels and boost accordingly (which we compare with). Existing methods that apply self-generated labels directly to boosting will fail as the noises in the labels will accumulate while boosting, especially when the noise rates are high - this is what we observed in our experiment results. Our idea is a couple of steps further: we introduce a bias correction procedure into boosting, and explicitly estimate the noises in generated labels. We also introduced an inference framework for generating these labels at first place.\n2. Noise resistant boosting (Bootkrajang & Kabán, 2013) addresses boosting algorithms which are susceptible to noisy data and proposing variants which can perform under a certain noisy conditions. A set of noisy labels, as well as the knowledge of the noises, are often assumed to be known. We do not require neither - we will self-generate the labels for unlabelled instances and learn their error rates.\nOur contributions summarize as follows:\n1. We propose a novel self-labelling boosting algorithm (BSL) which is able to outperform present state-of-the-art semi-supervised algorithms.\n2. 
As two key components of our self-labelling framework, we contribute i) a new formulation of a noise-resistant AdaBoost algorithm which corrects the noise in the labels - this is important in a boosting process, because otherwise the label noise will accumulate while boosting; and ii) a label error estimation procedure that works without accessing the ground truth labels.\n3. We offer both theoretical guarantees and experimental evidence for the performance of our framework. We conducted an extensive set of experiments to verify the effectiveness of BSL. On the different datasets we ran our algorithm on, our method consistently outperforms many other algorithms by more than 20%. On the cancer dataset, when 10% of the data was labelled, the second best algorithm (Semiboost) underperformed our algorithm by 66% (relative performance). Theoretically, we are able to show the convergence of our noise-resistant AdaBoost subroutine under a symmetric error rate assumption.\nThe rest of the paper is organized as follows. We survey the most relevant work in the rest of this section. Preliminaries are introduced in Section 2. Section 3 presents a noise-resistant AdaBoost algorithm. Our Boosting via Self Labelling framework is introduced in Section 4. Section 5 presents our experiment results. Section 6 concludes our paper. All missing details can be found in the Appendix." }, { "heading": "1.1 RELATED WORKS", "text": "Our work has been inspired by three different lines of research:\nSemi-Supervised Learning: Research in this avenue has looked at creating accurate labels in the presence of limited labelled data. Work started with some basic algorithms (Cortes & Vapnik, 1995; Fujino et al., 2005; Demiriz et al., 1999; Blum & Mitchell, 1998), but escalated to complex systems (Lee; Miyato et al., 2018; Laine & Aila, 2016; Grandvalet & Bengio, 2005). A good survey in this area was written by Zhu (2005).\nCrowdsourcing: Inference methods for crowdsourcing have played a huge part in uncovering true labels from multiple noisy labels. Work in the area has ranged from EM algorithms (Dawid & Skene, 1979; Raykar et al., 2010; Smyth et al., 1995; Karger et al., 2011) to variational inference methods (Liu et al., 2012; Karger et al., 2014; Whitehill et al., 2009; Welinder et al., 2010; Raykar & Yu, 2012). Chon et al. (2012) wrote a good survey of this topic. In an ensemble setting, crowdsourcing has appeared in numerous works including, most famously, Kim & Ghahramani (2012). However, many of these results do not consider the noise present in the final aggregated value, which can lead to accumulated errors in a boosting setting.\nBoosting: Starting with the work of Freund & Schapire (1999), research on AdaBoost has expanded to all areas of machine learning. Some of the recent work in the area has looked into improving the performance of the original algorithm and addressing the pitfalls it faced (Rätsch et al., 2001; Hastie et al., 2009; Domingo et al., 2000; Schapire, 2013; Bootkrajang & Kabán, 2013).\nOur work builds upon previous work done in these areas, using similar ideas to construct a unique method. One notable piece of work is Semiboost (Zhu, 2005). This algorithm takes a similar approach, introducing a boosting framework that improves upon existing classifiers to provide a good classifier in the semi-labelled setting. However, Semiboost uses an unsupervised learning approach built on a similarity matrix over labelled and unlabelled points. Semiboost requires a similarity function to run, and a suitable one can be hard to obtain in practice. 
Pseudo-labelling is one of many neural network approaches to the limited labelled dataset problem. Pseudo-labelling assigns labels to the unlabelled data and then trains a neural network on the combination of the clean and noisy labels. However, Pseudo-labelling and algorithms like it, face many different requirements. Algorithms in this class require a sufficient amount of supervised data to work optimally. They also need the classes to be clustered within the data, and the labelled data to not adhere to the same distribution as the unlabelled data (Oliver et al., 2018). Learning in noisy data has also been a research focus that runs parallel to our work. In (Natarajan et al., 2013), Natarajan et al. propose a noisy learning algorithm which performs significantly better than other noise resistant algorithms. However, the method requires knowledge on the amount of noise inside of the dataset before training and in a limited label dataset, which is hard to get in practice. In contrast to these related works, the algorithm we introduce in this paper does not need prior knowledge about the dataset for it to perform and the number of labelled points does not adversely affect the accuracy of the classifier generated." }, { "heading": "2 PRELIMINARY AND PROBLEM FORMULATION", "text": "Assume that D is the underlying true distribution generating n iid examples (xi, yi)ni=1, where each example consists of a feature vector xi = [xi,1, xi,2, . . . , xi,d] ∈ X ⊆ Rd, and a label yi ∈ {−1,+1} (∼ Y ). We assume the true labels of τ examples is available where τ < n such that (xi, yi) τ i=1, denoted as {(x1, y1), (x2, y2), · · · , (x|NL|, y|NL|)} := NL ; while the rest of n − τ samples (xi)ni=τ is unlabelled, {x1, x2, · · · , x|NU |} := NU . Let NU and NL be the set of indices that make up NU and NL respectively. Assume that after an initial classifier f : X → {−1,+1} is trained on NL and applied on the unlabelled dataset such that f(xi) → ỹi, xi ∈ NU . Then NUnoisy denotes the noisy data set produced (xi, ỹi) |NU | i=1 . Let Nnoisy be the combination of NL and NUnoisy such that for i = 1 to |Nnoisy|: {(xi, ỹi), if (xi, ỹi) ∈ NUnoisy ; (xi, yi), if (xi, yi) ∈ NL}. Let Nnoisy be the set of all indices within Nnoisy . Assume that Nnoisy follows as class-conditional random noise model such that: ∀n = 1, 2, ..., |Nnoisy|: P(ỹi = −1|yi = +1, xn) = ρ+, P(ỹi = +1|yi = −1, xn) = ρ− and ρ+ + ρ− < 1. Our goal is to learn a classifier f : X → {−1,+1} trained on Nnoisy that minimizes the risk of f w.r.t to the 0-1 loss function RD(f) = E(xi,yi)∼D[1(f(xi) 6= yi)]." }, { "heading": "2.1 OUR PROBLEM: BOOSTING VIA SELF LABELLING", "text": "We introduce the setting for AdaBoost (Freund & Schapire, 1999). The key idea is, at step t,\n• Maintain a weight Di(t) for each data instance (xi, yi) ∈ NL. • Train a weak learner ft according to the weighted data distribution. • The final hypothesis F is a linear combination of each ft trained at every step t.\nLet Nmiss be the set of all (xi, yi) ∈ NL, such that ft(xi) 6= yi. The goal is to increase the weight of mis-classified points Di(t), (xi, yi) ∈ Nmiss to encourage classifier ft+1(·) to focus on correctly classifying Nmiss:\nDt(i+ 1) = Dt(i) · exp\n( −αt · ft(xi) · yi ) Zt ,\nwhere Zt = ∑ i∈NL Dt(i) ·exp ( −αt ·ft(xi) ·yi ) is a normalization factor. 
AdaBoost creates a final\nhypothesis in the additive form: F (xi) = ∑T t=1 αtft(xi), where xi is a test sample.\nOur goal, and a short coming with AdaBoost, is classifying a dataset where some of the yi in (xi, yi) ∈ Nnoisy are noisy. Because AdaBoost uses a exponential loss function it is inherently susceptible to noisy labels. We propose a new loss function that removes bias:\nDt+1(i) = Dt(i) · exp\n( −αt · ˜̀(xi, ỹi) ) Zt , ∀i ∈ Nnoisy\nOur goal is to define a function ˜̀(·) that can help us evaluate an unlabelled instance. This function ˜̀(·) will allow us to adjust to noisy labels within Nnoisy . Our algorithm runs in two main stages. It\nfirst applies noisy labels for unlabelled instances in our dataset, and then creates a final hypothesis F using a noise-resistant variant of AdaBoost where we define ˜̀(·) ." }, { "heading": "3 NOISE-RESISTANT ADABOOST", "text": "We first extend a learning with noisy data approach to the boosting setting, following the work in (Natarajan et al., 2013). Suppose the examples have the following homogeneous error rates:\nρ+ := Pxi(ỹi = −1|yi = +1), ρ− := Pxi(ỹi = +1|yi = −1) Suppose we know these error rates in this section. Boosting over the above noisy examples will lead to a biased training process when the label noises are sufficiently large. Our approach is a straightforward adaptation from a noise correction mechanism adopted in supervised learning (Natarajan et al., 2013): defining surrogate loss function ˜̀on noisy labels (for an arbitrary loss function `)\n˜̀(f(xi), ỹi = +1) := (1− ρ−)`(f(xi),+1)− ρ+`(f(xi),−1)\n1− ρ+ − ρ− , (1)\n˜̀(f(xi), ỹi = −1) := (1− ρ+)`(f(xi),−1)− ρ−`(f(xi),+1)\n1− ρ+ − ρ− . (2)\nA nice property of above estimator is its unbiasedness (Natarajan et al., 2013): Eỹi|yi [˜̀(f(xi), ỹi)] = `(f(xi), yi). We adapt this idea to AdaBoost. Replace `(·) with the following loss measure as adopted in AdaBoost: `(f(xi), yi) = ft(xi) · yi. Define ω+ := 1−ρ−+ρ+1−ρ−−ρ+ , ω− := 1−ρ++ρ− 1−ρ−−ρ+ and ̂+t := Px|ỹ=+1(ft(x) 6= ỹ), ̂−t := Px|ỹ=−1(ft(x) 6= ỹ), (3) and ̂t := max{̂+t , ̂−t } and the following αts\nα+t = 1\n2ω+ ln 1− ̂t ̂t , α−t = 1 2ω− ln 1− ̂t ̂t\n(4)\nThen update Di(t) as follows:\nDi(t+ 1) := Di(t) · exp\n( −αsign(ỹi)t ˜̀(f(xi), ỹi) ) Zt\n(5)\nwhere Zt is again the normalization factor. The reason that we need to define two learning rate αts is because the losses are weighted differently for the noisy labels. and ω := max{ω+, ω−},\nγt := 1 2 − ̂t, and (δ, n, ρ+, ρ−) := ω ·\n√ n ln 2δ\n2 . Since we have defined two αts on the training\ndata based on the noisy labels, we define αt := α+t +α − t\n2 , and let the final output classifier be F (x) = sign (∑T t=1 αtft(x) T > 0 ) . We prove the following performance guarantee when ρ+ = ρ−:\nTheorem 1 With probability at least 1− δ,∑ i∈Nnoisy 1(F (xi) 6= yi) ≤ exp ( −2 T∑ t=1 γ2t ) + (δ,N, ρ+, ρ−).\nThough our above results are proved under the symmetric error setting, we experimentally verified the performance of our error-resistant boosting procedure." }, { "heading": "4 A SELF LABELLING FRAMEWORK FOR BOOSTING", "text": "We now introduce our Self Labelling framework for boosting. The framework can be broken into two major components: the noisy label generation and the noisy Adaboost algorithm. Section 3 already described the formation and proof of the noisy Adaboost algorithm. In the next two sections go into detail over i) the inference method to generate noisy label and ii) the procedure to estimating the noise levels within the generated labels as inputs into the Adaboost algorithm. 
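Before turning to these two components in detail, the bias-corrected surrogate loss of Eqs. (1)-(2), on which the noise-resistant step relies, can be written as a short sketch. This is a minimal illustration in which all function and variable names are ours; the unbiasedness property E[l~(f(x), y~)] = l(f(x), y) is checked numerically at the end.

```python
def surrogate_loss(loss_pos, loss_neg, y_noisy, rho_plus, rho_minus):
    """Bias-corrected loss on a noisy label, Eqs. (1)-(2).

    loss_pos = l(f(x), +1) and loss_neg = l(f(x), -1) are the losses
    on the two clean labels; y_noisy in {-1, +1} is the observed label.
    Requires rho_plus + rho_minus < 1 (the class-conditional noise model).
    """
    denom = 1.0 - rho_plus - rho_minus
    if y_noisy == +1:
        return ((1.0 - rho_minus) * loss_pos - rho_plus * loss_neg) / denom
    return ((1.0 - rho_plus) * loss_neg - rho_minus * loss_pos) / denom

# Unbiasedness check for a true positive label y = +1: the expectation
# over the noisy label, (1 - rho+) * l~(., +1) + rho+ * l~(., -1),
# recovers the clean loss l(f(x), +1).
rho_p, rho_m, l_pos, l_neg = 0.2, 0.1, 0.3, 1.7
expected = (1 - rho_p) * surrogate_loss(l_pos, l_neg, +1, rho_p, rho_m) \
         + rho_p * surrogate_loss(l_pos, l_neg, -1, rho_p, rho_m)
assert abs(expected - l_pos) < 1e-9
```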
More specifically, Section 4.1 delves deeper into detailing the processes of noisy label generation particularly addressesing matrix L and the aggregation inside of L that forms the noisy dataset. While Section 4.2 explores how the noise levels within the dataset (ρ−,ρ+) is calculated and used as inputs to the noisy Adaboost algorithm.\nAlgorithm 1 Boosting via Self Labelling 1: Input: Labelled Data NL, unlabelled Data NU 2: Output: Final Hypothesis F (x) 3: For i = 1, · · · ,M :\n• Train fi on (xi, yi) ∈ NL • Get hypothesis ht = fi(xi)→ {−1,+1}|NU |, ∀xi ∈ NU • Add ht to matrix L\n4: Run inference method to generate noisy labels for NU : I(L) → {−1,+1}|NU |. Denote the noisy dataset as NUnoisy . Let Nnoisy= NL ∪NUnoisy . 5: Estimate (ρ+, ρ−) using Eqn. (6, 7). 6: Initialize: Di(1) = 1/N for i ∈ Nnoisy 7: for t = 1, · · · , T :\n• Train weak classifier gt on data distribution D(t) • Get weak hypothesis ht :=gt(x, i), (xi, ỹi) ∈ Nnoisy → {−1,+1} • Get weighted error +, − using Eqn.(3) • Calculate ω+, ω− and α+t , α−t (Eqn.(4)), using estimated ρ̃+, ρ̃−. • Compute αt := α + t +α − t\n2 . • For i ∈ NU : Update weight Di(t + 1) according to Eqn.(5). Else for i ∈ NL, update\nweight Di(t+ 1) according to Eqn.(5) with ρ+ = ρ− = 0. 8: Final Hypothesis:\nF (x) = sign\n(∑T t=1 αtft(x)\nT > 0\n)" }, { "heading": "4.1 SELF LABELLING", "text": "We generate noisy labels for the unlabelled dataset via a crowdsourcing perspective based on the works of (Dawid & Skene, 1979; Liu et al., 2012). On line 3 in Algorithm 1, M classifiers {f1(·), f2(·), ...., fM (·)} are trained on labelled data NL = {x1, x2, . . . , x|NL|}, with the goal to classify all NU data points, {x1, x2, . . . , x|NU |} as closely as possible to their true unknown binary labels {y1, y2, . . . , y|NU |} ∈ {−1,+1}. Each data point xi ∈ NU will receive a noisy label ỹi,j which denotes classifier fj(·) ∈M prediction on xi.\nLet each row in matrix L ∈ {−1,+1}|NU |×M , in the final step on line 3, represent {ỹi};∀i ∈ NU for each fj(·) s.t. Li,j = fj(xi) = ỹi,j . Assuming fj(·) 6= fk(·);∀j, k ∈ M s.t. j 6= k each fj(·) will learn a different distribution. This will allow for a variation that gives a better prediction on the underlying structure of NU allowing for some fj(·) to be closer to capturing the true underlying distribution of NU compared to others. Assigning weight qj to each fj(·) according to its perceived accuracy in respect to other classifiers allows for a more accurate aggregation of all fj(·) responses. More formally, in step 4 in Algorithm 1, for each of the M classifiers {f1(·), f2(·), . . . , fM (·)}, I (variational inference method) assigns an optimal weight {q1, q2, . . . , qM} such that taking the aggregation of each classifiers prediction for point xi ∈ NU against its respective weight qj will result in a value ỹi that will be as close as possible to the unknown true label yi. The set of xi and aggregated ỹi ∀i ∈ NU will form NUnoisy . Conceptually, I uses qj to classify fj(·) as an expert if qj > 0.5, a spammer if qj ≈ 0.5 or an adversary if qj < 0.5. If fj(·) is designated a spammer then fj(·) is randomly guessing and does not provide any useful prediction. If fj(·) is denoted as an adversary, fj(·) is believed to be “purposely” picking the incorrect label and ỹi,j = fj(xi). 
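As a concrete reference for line 3 of Algorithm 1, the following minimal sketch trains a pool of classifiers on the labelled set and stacks their predictions on the unlabelled set into the matrix L. The particular pool members shown here are illustrative stand-ins; the experiments in Section 5 use a pool of ten scikit-learn models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def build_label_matrix(X_lab, y_lab, X_unlab):
    """Algorithm 1, line 3: train M classifiers on the labelled data
    and collect their {-1, +1} predictions on the unlabelled data
    into L of shape (|N_U|, M), with L[i, j] = f_j(x_i)."""
    pool = [
        LogisticRegression(max_iter=1000),
        KNeighborsClassifier(n_neighbors=5),
        DecisionTreeClassifier(max_depth=4),
    ]
    columns = []
    for clf in pool:
        clf.fit(X_lab, y_lab)          # each f_j sees only N_L
        columns.append(clf.predict(X_unlab))
    return np.stack(columns, axis=1)   # rows of L feed the inference step
```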
As the number of labelled data gets smaller, the number of spammers and adversaries found in M classifiers increases.\nAssuming conditional independence among classifiers I can predict the optimal weight as following: If P(fj(x), fk(x)|y) = P(fj(x)|y)P(fk(x)|y) where j, k ∈ |M|s.t.j 6= k (treating each classifier as an independent labeler), we can apply inference approaches to aggregate and infer the true label:\nq̂ = arg max log P(q|L, θ) = log ∑ y P(q, y|L, θ)\nExpectation maximization can be used to solve for this maximum a posteriori estimator q̂ by treating the true labels yi as the hidden variable. Assuming a Beta(σ, β) distribution, an EM can be formulated as follows:\nE Step: µi(yi) ∝ ∏ j∈Mi q̂ δi,j j (1− q̂j) 1−δi,j M Step: q̂j = ∑ i∈Nj µi(Li,j) + σ − 1 |Nj |+ σ + β − 2\nwhere δi,j = I[Li,j = yi] and ỹi is estimated by ỹi = arg maxyi µi(yi) given µi is the estimated likelihood of yi. Nj stands for all labels observed by classifier j: Nj = ({fj(xi)},∀i ∈ 1 · · · |NU |). Finally, step 5 of Algorithm 1 shows the noisy dataset (xi, ỹi),∀i ∈ Nnoisy where ỹi is the noisy label provided by the inference algorithm if xi ∈ NU , else ỹi = yi if (xi, yi) ∈ NL. Nnoisy then feeds into the Noisy Adaboost Algorithm introduced in Section 3." }, { "heading": "4.2 ERROR ESTIMATION", "text": "LetA(.) represent the noise resistant AdaBoost algorithm we introduced in section 3. Let I represent the inference method we will use to aggregate matrix L. Inference method I outputs a noisy labelled data set: NUnoisy = {xi, ỹi},∀xi ∈ NU . In order to run A we need to accurately estimate the error rates ρ−, ρ+: ρ− = (ỹi = +1|yi = −1), ρ+ = (ỹi = −1|yi = +1) within NUnoisy . Assuming homogeneous error rates, we can derive an accurate estimation for the error rates of NUnoisy from the confusion matrices of the the classifiers fj ∈M. Denote the false positive and false negative of fj(·) by ρ+,fj and ρ−,fj respectively. The inference algorithm I will output ρ−,fj and ρ+,fj for each fi ∈ M by comparing {xi, fj(xi)} against the aggregated label from the inference algorithm. P(yi = −1) and P(yi = +1) are the marginal distribution of positive and negative labels within the dataset. If P(yi = −1) and P(yi = +1) are known we show the below equation uniquely identify the error rates given any fj ∈M:\nLemma 1 ρ+, ρ− can be determined by the following set of equations:\nρ+ = (ρ−,fjP(ỹi = 1)− P(yi,fj = ỹi = 1)\nP(yi = +1)(1− ρ+,fj )− P(yi = −1)(ρ−,fj ) (6)\nρ− = −P(ỹi = 1)(1− ρ+,fj ) + P(yi,fj = ỹi = 1) −P(yi = +1)(1− ρ+,fj ) + P(yi = −1)(ρ−,fj )\n(7)\nIn practice we can balance the dataset to make P(yi = −1) = P(yi = +1) = 0.5. The probability terms P(ỹi = 1) and P(yi,fj = ỹi = 1)n can be estimated through the data. The estimated parameters are plugged into Eqn.(6,7) to approximate ρ̃+, ρ̃−. Note this estimation is done without using ground truth labels." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "The focus of BSL is to provide a framework which can generate an accurate classifier given very little labelled data. In this section, we conduct extensive experiments to verify the effectiveness of BSL and compare with benchmark algorithms." }, { "heading": "5.1 DATASETS", "text": "8 UCI datasets were used to evaluate Boosting via Self-Labelling (BSL). Since Boosting via SelfLabelling works on binary classification problems, we chose datasets which only contained two class labels or turned linear regression datasets into binary labels. 
The first column of Table 1 has the name of the dataset used and the percentage of data labelled. The Cancer dataset had 567 samples and 30 features. The Diabetes dataset had 768 samples and 8 features. The Thyroid dataset had 215 samples and 5 features. The Heart dataset had 303 samples and 13 features. The German dataset has 1000 samples and 20 features. The Image dataset has 2310 samples and 19 features. The Housing dataset has 506 samples and 13 features. And the Sonar dataset has 208 samples and 60 features." }, { "heading": "5.2 EXPERIMENTAL SETUP", "text": "The goal of our experiments was to show the performance improvement we achieved by using the BSL compared to other semi-supervised algorithms. We use classification error rate (measuring the fraction of mis-classified sample points) each model faced as the evaluation measure. Table 1 reports the mean of 20 different runs of the experiment. To measure the performance of each trial, we split the data into 40% test and 60% train. We then broke up the training into increasing percentages of unlabelled/labelled data points. Table with all results can be found in Appendix.\nThe pre-processing step started by taking out any points that had missing features or missing labels in the entire dataset. As a result, the dataset was completely labelled and each sample had all of its features. Before creating the unlabelled dataset, we split the data into testing and training set. The unlabelled data was created by removing the labels from a designated percentage of the training set. Each of the features was normalized between 0 and 1 to allow for a better approximation of the data. At the end of the pre-processing step, three separate lists were outputted: the testing set, the set that contained the unlabelled data points and the set that contained the labelled data points.\nDuring the first step of our framework, the labelled data was passed to a set of supervised machine learning algorithms. Although the number of classifiers could have been theoretically infinite, we chose to limit the number to 10. As a result, the experiment was able to be conducted in a reasonable time. The supervised algorithms were all implemented using the Scikit-learn (Pedregosa et al., 2011) and consisted of KNN, Decision Tree, Gaussian Mixture Model, Naive Bayes, SVM and Logistic Regression. Some of the models were repeated in their use but took on different initialization values to create different classifiers.\nThe second step of the framework called a basic inference algorithm abbreviated (D&S) (Dawid & Skene, 1979). The crowdsourcing algorithm outputted the noisy labels for each of the unlabelled data points.\nThe final part of the framework, our noise-resistant variant of the AdaBoost algorithm is used. The alpha value for each classifier was optimised to compensate for the increase in values from the loss functions. We limited the algorithm to running only 20 decision stumps as the base classifier.\nTable 1 shows the performance of BSL compared to other state of the art algorithms which try to create optimised classifiers within the label limited dataset. Table 1 only reports the classification errors for 10% of the training data labelled. Each algorithm bench marked a different part of the framework. C-SVM(Liu et al.) showed the improvement noise resistant version of AdaBoost gave over other noise-resistant algorithms. Semiboost, S3VM, Label Propagation, NN, and Logitboost tested against the final classifier produced by the framework. 
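For concreteness, the pre-processing and splitting procedure described above can be sketched as follows. This is a minimal sketch under our own naming: inputs are assumed to be numpy arrays with missing rows already removed, and the unlabelled fraction is exposed as a parameter.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

def make_semi_supervised_split(X, y, unlabelled_frac=0.9, seed=0):
    """40% test / 60% train split, features scaled to [0, 1], then
    labels hidden for a fraction of the training set."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.4, random_state=seed)
    scaler = MinMaxScaler().fit(X_tr)
    X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)
    idx = np.random.RandomState(seed).permutation(len(X_tr))
    n_unlab = int(unlabelled_frac * len(X_tr))
    unlab, lab = idx[:n_unlab], idx[n_unlab:]
    return (X_tr[lab], y_tr[lab]), X_tr[unlab], (X_te, y_te)
```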
CSVM, Semiboost, S3VM, and Logitboost were all implemented using different external libraries. Label Propagation was implemented using Scikit-Learn (Pedregosa et al., 2011) And the Supervised Neural Network (NN) was was implemented with Keras (Chollet et al., 2015)." }, { "heading": "5.3 RESULTS", "text": "Performance Comparison of BSL with Benchmark Algorithms We compared BSL’s performance to seven benchmark algorithms, specifically: DS+AdaBoost, Semiboost, S3VM, CSVM, Label Propagation, NN, Logitboost. DS + AdaBoost applied inference method to assign noisy labels to the unlabelled data and used a standard AdaBoost algorithm (Pedregosa et al., 2011) to create a final classifier. The Supervised Neural Network (NN) is a sequential model with two hidden layers each with 100 nodes. If DS was not specified then the model performed without any noise labelled data outputted by the crowdsourcing algorithm. BSL’s improvement in performance compared to the other algorithms is significant. BSL consistently had a 20%− 30% increase in performance compared to most of the competing algorithms. While BSL is able to outperform most of the algorithms on each of the 8 datasets, it was outperformed by the DS + AdaBoost for trials in the thyroid and sonar dataset. Semiboost was also able to produce better results against BSL in the image dataset. It is important to note that although DS+AdaBoost and Semiboost outperform BSL, the difference in improvement was not significant. This shows that the loss function that we introduced to the noise resistant algorithm might overfit on the data by taking out more noise than was necessary. The loss of performance could also indicate that the error rates passed into the algorithm were not always close to the actual noise in the dataset or were too high to be effective. Table 2 (located in the\nAppendix) shows the estimated error rate the inference crowdsourcing method outputted for each experiment. It is important to notice the noise that exists within the dataset after the crowdsourcing step. Our handling of this noise allows our algorithm to perform on average better than the other benchmark algorithms. When noise was close to 50%, the noise resistant Adaboost loss function becomes unbounded and this creates unstable classifications. Similarly, the low error rates present within the thyroid dataset allowed for a non-noise resistant version of Adaboost to outperform BSL.\nPerformance with increment in the percentage of unlabelled data With Figure 1 (located in the Appendix), we also show the performance of BSL on 4 UCI datasets we used to compare framework against the baseline algorithms with increasing amount of unlabelled data instances. In the experiment, we increase the percentage of unlabelled data one at each step, starting from 50% going up to 99% (or as far as possible). Under each step of the experiment, we ran BSL 20 times randomising at every turn which points were used in training, testing, being labelled or unlabelled. The solid line shows the mean of running the algorithm 20 times at each step. The graph shows that despite the decrease of labelled data available to train on, the framework can maintain relatively similar classification error rate. This feature is significant because it shows that having a lot of unlabelled data does not restrict the performance of the framework and therefore is not a prerequisite for it to perform well. One can note that all the graphs don’t go to 0.99% unlabelled data. 
Since the UCI datasets were not significantly big, the closer we are to 0.99% our training data became single-classed labelled. As a result, our framework could not create a final classifier as the ensemble we used was not able to train on the single-classed dataset. As a result, we stopped the experiment at the points where we started getting a lot of single-class labelled data warnings." }, { "heading": "6 DISCUSSION AND CONCLUDING REMARKS", "text": "Our goal in this paper is to present a novel and efficient boosting algorithm for semi-labelled datasets and show its effectiveness in providing an accurate classifier. The usefulness of BSL stems in its ability to produce high-quality noisy labels to unlabelled instances and its ability to handle the noises in labels, despite having severely limited amount of labelled data.\nOur results over the 8 UCI datasets reveal the performance improvement BSL brings compared to other algorithms currently in use. Our experiments also show how impervious BSL can be to noise by showing constant performance despite increases in unlablled data. A natural direction would be considering datasets that are fully unlabelled. Instead of having an ensemble of supervised classifiers BSL could consider using a consensus clustering approach and view each clustering algorithm as a potential agent within the crowdsource setting. Another important extension of this project would be using a more proficient inference method to extract the labels less nosier than those produced during our experiments. It is also a very interesting question to further study the theoretical guarantees of BSL in more sophisticated settings. Finally, another aspect of the paper we wish to further pursue is looking at non-homogeneous error rates." }, { "heading": "A PROOF FOR THEOREM 1", "text": "For our error-corrected AdaBoost we first prove\nZ := T∏ t=1 Zt = ∑\ni∈Nnoisy\nexp ( −˜̀(F (xi), ỹi) ) (8)\nFollowing standard argument of AdaBoost:\nDi(t+ 1) = Di(t)exp\n( −αt ˜̀(ft(xi), ỹi) ) Zt\n= Di(t− 1)exp\n( −αt ˜̀(ft(xi), ỹi)− αt−1 ˜̀(ft−1(xi), ỹi) ) ZtZt−1\n=...\n= Di(1)exp\n( − ∑ τ ατ ˜̀(fτ (xi), ỹi) )\nZ\nSince ˜̀(ft(xi), ỹi) is linear in ft we know t∑\nτ=1 −αt ˜̀(fτ (xi), ỹi) = −˜̀ ( t∑ τ=1 ατfτ (xi), ỹi ) = −˜̀ ( F (xi), ỹi ) Therefore\n1 = ∑\ni∈Nnoisy\nDi(t+ 1) =\n∑ i∈Nnoisy Di(1) · exp ( − ∑t τ=1 ˜̀(F (xi), ỹi) )\nZ\nMultiple Z on both sides, we have proved Eqn. (8).\nDefine\nˆ̀ ( f(xi), ỹi = +1 ) :=\n(1− ρ−)1(f(xi) 6= +1)− ρ+1(f(xi) 6= −1) 1− ρ+ − ρ− , (9)\nˆ̀ ( f(xi), ỹi = −1 ) :=\n(1− ρ+)1(f(xi) 6= −1)− ρ−1(f(xi) 6= +1) 1− ρ+ − ρ− . (10)\nNext we show that ∑ i∈Nnoisy exp ( −˜̀(F (xi), ỹi) ) ≥ ∑ i∈Nnoisy ˆ̀ ( F (xi), ỹi ) When F (xi) = ỹi = +1, ˆ̀(F (xi), ỹi) = 1 − ω1 < 0, but exp ( −˜̀(F (xi), ỹi) ) > 0. When\nF (xi) = −1, ỹi = +1, ˆ̀(F (xi), ỹi) = ω1, but exp ( −˜̀(F (xi), ỹi) ) > exp(ω1) > ω1. The case for ỹi = 1 is symmetric. Via Hoeffding inequality we know that with high probability at least 1− δ∑ i∈Nnoisy ˆ̀(F (xi), ỹi) ≥ ∑ i∈Nnoisy 1(f(xi) 6= yi)− (δ,N, ρ−, ρ+)\nwhere (δ,N, ρ+, ρ−) := max(ω+, ω−) · √ N ln 2δ 2 . 
Therefore ∑ i∈Nnoisy 1(f(xi) 6= yi) (11)\n≤ ∑\ni∈Nnoisy\nˆ̀(F (xi), ỹi) + (δ,N, ρ−, ρ+) (12)\n≤ ∑\ni∈Nnoisy\nexp ( −˜̀(F (xi), ỹi) ) + (δ,N, ρ−, ρ+) (13)\n=Z + (δ,N, ρ−, ρ+) (14)\nNow we prove that\nZ ≤ exp ( −2 T∑ t=1 γ2t )\nFirst we notice the following: at time t\n˜̀(f(xi) = +1, ỹi = +1) = exp(−ω+αt) (15) ˜̀(f(xi) = −1, ỹi = +1) = exp(ω+αt) (16) ˜̀(f(xi) = −1, ỹi = −1) = exp(−ω−αt) (17) ˜̀(f(xi) = +1, ỹi = −1) = exp(ω−αt) (18)\nThen when ỹi = +1, taking derivatives of Zt w.r.t. αt\n∂Zt ∂αt = ∑ x∈A+ −ω+exp(−ω+αt) + ∑ x∈Ā+ ω+exp(ω+αt) (19)\nHere we have defined four sets:\nA+ := the set of correctly classified data when ỹi = +1, (20) A− := the set of correctly classified data when ỹi = −1, (21)\nand Ā+, Ā− are their complement sets. Set derivative in Eqn. (19) to 0 we have\nαt = 1\n2ω+ ln 1− ̂t ̂t\nwhere ̂t is defined as follows: ̂+t := Pỹ=+1(ft(x) 6= ỹ)\nSimilarly when ỹi = −1 ̂−t := Pỹ=−1(ft(x) 6= ỹ)\nDefine ̂t = max{̂+t , ̂−t } and when ρ+ = ρ−, we have ω+ = ω−. And consequently, αt = α+t = α−t Next follows standard argument in boosting we are ready to prove\nZt ≤ √ ̂t(1− ̂t)\nWithout loss of generality, consider the negative label case ỹi = −1. Then\nZt =̂ − t √ 1− ̂t ̂t + (1− ̂−t ) √ ̂t 1− ̂t\n≤̂t √\n1− ̂t ̂t\n+ (1− ̂t) √\n̂t 1− ̂t\n= √ ̂t(1− ̂t)\nThe inequality is due to the fact √\n1−̂t ̂t\n> √\n̂t 1−̂t . Therefore\nZt ≤ √ ̂t(1− ̂t) = √ 1− 4γ2t , γt := 1\n2 − ̂t\nFurther √ 1− 4γ2t ≤ exp(−2γ2t )\nThis completes the proof.\nPROOF FOR LEMMA 1\nGiven ỹi,fj = fj(xi) ∈ {±1}, xi ∈ NU , yi is true labels of xi ∈ NU , and yi ∈ (xi, yi) ∈ NUnoisy .\nP(yi,fj = ỹi = 1) =P(yi,fj = ỹi = 1, yi = 0) + P(yi,fj = ỹi = 1, yi = 1) =P(yi,fj = ỹi = 1|Y = 0) · P(yi = 0)\n+ P(yi,fj = ỹi = 1|yi = 1) · P(yi = 1) =P(yi,fj = 1|yi = 0)P · (ỹi = 1|yi = 0) · P(yi = 0)\n+ P(yi,fj = 1|yi = 1) · P(ỹi = 1|yi = 1) · P(yi = 1) =P(yi = 0) · ρ− · ρ−,fj + P(yi = 1) · (1− ρ+) · (1− ρ+,fj )\nFurther we have\nP(ỹi = 1) = P(yi = 0) · ρ− + P(yi = 0)(1− ρ+) (22)\nSolving above linear equations completes the proof.\nADDITIONAL EXPERIMENTAL RESULTS" } ]
2019
null
SP:6b1f56de94f5edc349fed07546f5964151b51d8e
[ "This paper proposes a new method for learning diverse policies in RL environments, with the ultimate goal of increasing reward. The paper develops a novel method, called interior policy differentiation (IPD), that constrains trained policies to be sufficiently different from one another. They test on 3 MuJoCo domains, showing improved diversity in all of them and improved performance in 2 of them.", "The paper presents a new algorithm for maximizing the diversity of different policies learned for a given task. The diversity is quantified using a metric, in this case the total variation distance. A policy is different from a set of other policies if its minimum distance to all the other policies is high. The authors formulate a new constrained optimization problem where the diversity to previous policies is lower bounded in order to avoid a tedious search for combining task reward and diversity reward. The algorithm is evaluated on different MuJoCo locomotion tasks." ]
Animals develop novel skills not only through interaction with the environment but also from the influence of others. In this work we model social influence within the scheme of reinforcement learning, enabling agents to learn both from the environment and from their peers. Specifically, we first define a metric to measure the distance between policies and then quantitatively derive the definition of uniqueness. Unlike previous precarious joint optimization approaches, the social uniqueness motivation in our work is imposed as a constraint that encourages the agent to learn a policy different from the existing agents while still solving the primal task. The resulting algorithm, namely Interior Policy Differentiation (IPD), is able to learn a collection of policies that solve a given task with distinct behaviors, and it brings about performance improvement as a byproduct in some cases.
[]
[ { "authors": [ "Yuri Burda", "Harri Edwards", "Deepak Pathak", "Amos Storkey", "Trevor Darrell", "Alexei A Efros" ], "title": "Large-scale study of curiosity-driven learning", "venue": "arXiv preprint arXiv:1808.04355,", "year": 2018 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": "arXiv preprint arXiv:1810.12894,", "year": 2018 }, { "authors": [ "Cindy Chan", "Jonah Berger", "Leaf Van Boven" ], "title": "Identifiable but not identical: Combining social identity and uniqueness motives in choice", "venue": "Journal of Consumer research,", "year": 2012 }, { "authors": [ "A Conn", "Nick Gould", "Ph Toint" ], "title": "A globally convergent lagrangian barrier algorithm for optimization with general inequality constraints and simple bounds", "venue": "Mathematics of Computation of the American Mathematical Society,", "year": 1997 }, { "authors": [ "Edoardo Conti", "Vashisht Madhavan", "Felipe Petroski Such", "Joel Lehman", "Kenneth Stanley", "Jeff Clune" ], "title": "Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "George B Dantzig", "Mukund N Thapa" ], "title": "Linear programming 2: theory and extensions", "venue": "Springer Science & Business Media,", "year": 2006 }, { "authors": [ "Dominik Maria Endres", "Johannes E Schindelin" ], "title": "A new metric for probability distributions", "venue": "IEEE Transactions on Information theory,", "year": 2003 }, { "authors": [ "Bent Fuglede", "Flemming Topsoe" ], "title": "Jensen-shannon divergence and hilbert space embedding", "venue": "In International Symposium onInformation Theory,", "year": 2004 }, { "authors": [ "Yuval Noah Harari" ], "title": "Sapiens: A brief history of humankind", "venue": "Random House,", "year": 2014 }, { "authors": [ "Nicolas Heess", "Srinivasan Sriram", "Jay Lemmon", "Josh Merel", "Greg Wayne", "Yuval Tassa", "Tom Erez", "Ziyu Wang", "SM Eslami", "Martin Riedmiller" ], "title": "Emergence of locomotion behaviours in rich environments", "venue": "arXiv preprint arXiv:1707.02286,", "year": 2017 }, { "authors": [ "Peter Henderson", "Riashat Islam", "Philip Bachman", "Joelle Pineau", "Doina Precup", "David Meger" ], "title": "Deep reinforcement learning that matters", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Joseph Henrich" ], "title": "The secret of our success: How culture is driving human evolution, domesticating our species, and making us smarter", "venue": null, "year": 2017 }, { "authors": [ "Rein Houthooft", "Xi Chen", "Yan Duan", "John Schulman", "Filip De Turck", "Pieter Abbeel" ], "title": "Variational information maximizing exploration", "venue": null, "year": 2016 }, { "authors": [ "Thanard Kurutach", "Ignasi Clavera", "Yan Duan", "Aviv Tamar", "Pieter Abbeel" ], "title": "Model-ensemble trust-region policy optimization", "venue": "arXiv preprint arXiv:1802.10592,", "year": 2018 }, { "authors": [ "Joel Lehman", "Kenneth O Stanley" ], "title": "Exploiting open-endedness to solve problems through the search for novelty", "venue": "In ALIFE, pp", "year": 2008 }, { "authors": [ "Joel Lehman", "Kenneth O Stanley" ], "title": "Abandoning objectives: Evolution through the search for novelty alone", "venue": "Evolutionary computation,", "year": 2011 }, { "authors": [ "Hao Liu", "Alexander 
Trott", "Richard Socher", "Caiming Xiong" ], "title": "Competitive experience replay", "venue": "CoRR, abs/1902.00528,", "year": 2019 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "Eric R Pianka" ], "title": "On r-and k-selection", "venue": "The american naturalist,", "year": 1970 }, { "authors": [ "Florian A Potra", "Stephen J Wright" ], "title": "Interior-point methods", "venue": "Journal of Computational and Applied Mathematics,", "year": 2000 }, { "authors": [ "Justin K Pugh", "Lisa B Soros", "Kenneth O Stanley" ], "title": "Quality diversity: A new frontier for evolutionary computation", "venue": "Frontiers in Robotics and AI,", "year": 2016 }, { "authors": [ "Barbara Rogoff" ], "title": "Apprenticeship in thinking: Cognitive development in social context", "venue": "Oxford university press,", "year": 1990 }, { "authors": [ "Richard M Ryan", "Edward L Deci" ], "title": "Intrinsic and extrinsic motivations: Classic definitions and new directions", "venue": "Contemporary educational psychology,", "year": 2000 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Wolfram Schultz", "Peter Dayan", "P Read Montague" ], "title": "A neural substrate of prediction and reward", "venue": null, "year": 1997 }, { "authors": [ "Felipe Petroski Such", "Vashisht Madhavan", "Edoardo Conti", "Joel Lehman", "Kenneth O Stanley", "Jeff Clune" ], "title": "Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning", "venue": "arXiv preprint arXiv:1712.06567,", "year": 2017 }, { "authors": [ "Felipe Petroski Such", "Vashisht Madhavan", "Rosanne Liu", "Rui Wang", "Pablo Samuel Castro", "Yulun Li", "Ludwig Schubert", "Marc Bellemare", "Jeff Clune", "Joel Lehman" ], "title": "An atari model zoo for analyzing, visualizing, and comparing deep reinforcement learning agents", "venue": "arXiv preprint arXiv:1812.07069,", "year": 2018 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Introduction to reinforcement learning, volume 2", "venue": "MIT press Cambridge,", "year": 1998 }, { "authors": [ "Edward Thorndike" ], "title": "Animal intelligence: Experimental studies", "venue": null, "year": 2017 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In IROS, pp. 
5026–5033", "year": 2012 }, { "authors": [ "Carel P van Schaik", "Judith M Burkart" ], "title": "Social learning and evolution: the cultural intelligence hypothesis", "venue": "Philosophical Transactions of the Royal Society B: Biological Sciences,", "year": 2011 }, { "authors": [ "Cédric Villani" ], "title": "Optimal transport: old and new, volume 338", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Rui Wang", "Joel Lehman", "Jeff Clune", "Kenneth O Stanley" ], "title": "Paired open-ended trailblazer (poet): Endlessly generating increasingly complex and diverse learning environments and their solutions", "venue": null, "year": 1901 }, { "authors": [ "Stephen J Wright" ], "title": "On the convergence of the newton/log-barrier method", "venue": "Mathematical Programming,", "year": 2001 }, { "authors": [ "Yunbo Zhang", "Wenhao Yu", "Greg Turk" ], "title": "Learning novel policies for tasks", "venue": "CoRR, abs/1905.05252,", "year": 2019 }, { "authors": [ "Guus Zoutendijk" ], "title": "Methods of feasible directions: a study in linear and non-linear programming", "venue": null, "year": 1960 } ]
[ { "heading": "1 INTRODUCTION", "text": "The paradigm of Reinforcement Learning (RL), inspired by cognition and animal studies (Thorndike, 2017; Schultz et al., 1997), can be described as learning by interacting with the environment to maximize a cumulative reward (Sutton et al., 1998). From the perspective of ecology, biodiversity as well as the development of various skills are crucial to the continuation and evolution of species (Darwin, 1859; Pianka, 1970). Thus the behavioral diversity becomes a rising topic in RL. Previous works have tried to encourage the emergence of behavioral diversity in RL with two approaches: The first approach is to design interactive environments which contain sufficient richness and diversity. For example, Heess et al. (2017) show that rich environments enable agents to learn different locomotion skills even using the standard RL algorithms. Yet designing a complex environment requires manual efforts, and the diversity is limited by the obstacle classes. The second approach to increase behavioral diversity is to motivate agents to explore beyond just maximizing the reward for the given task. Zhang et al. (2019) proposed to maximize a heuristically defined novelty metric between policies through task-novelty joint optimization, but the final performance of agents is not guaranteed.\nIn this work, we address the topic of policy differentiation in RL, i.e., to improve the diversity of RL agents while keeping their ability to solve the primal task. We draw the inspiration from the Social Influence in animal society (Rogoff, 1990; Ryan & Deci, 2000; van Schaik & Burkart, 2011; Henrich, 2017; Harari, 2014) and formulate the concept of social influence in the reinforcement learning paradigm. Our learning scheme is illustrated in Fig 1. The target agent not only learns to interact with the environment to maximize the reward but also differentiate the actions it takes in order to be different from other existing agents.\nSince the social influence often acts on people passively as a sort of peer pressure, we implement the social influence in terms of social uniqueness motivation (Chan et al., 2012) and consider it as a constrained optimization problem. In the following of our work, we first define a rigorous policy distance metric in the policy space to compare the similarity of the agents. Then we develop an optimization constraint using the proposed metric, which brings immediate rather than episodic feedback in the learning process. A novel method, namely Interior Policy Differentiation (IPD), is further\n1In this work, we use the term “peer” to denote a population of RL agents. 2Code will be made available soon\nRun\nproposed as a better solution for the constrained policy optimization problem. We benchmark our method on several locomotion tasks and show it can learn various diverse and well-behaved policies for the given tasks based on the standard Proximal Policy Optimization (PPO) algorithm (Schulman et al., 2017)." }, { "heading": "2 RELATED WORK", "text": "Intrinsic motivation methods. The Variational Information Maximizing Exploration (VIME) method is designed by Houthooft et al. (2016) to tackle the sparse reward problems. In VIME, an intrinsic reward term based on the maximization of information gains is added to contemporary RL algorithms to encourage exploration. The curiosity-driven methods, proposed by Pathak et al. (2017) and Burda et al. (2018a) define intrinsic rewards according to prediction errors of neural networks. 
i.e., when taking previous unseen states as inputs, networks trained with previous states will tend to predict with low accuracy, so that such prediction errors can be viewed as rewards. Burda et al. (2018b) proposed Random Network Distillation (RND) to quantify intrinsic reward by prediction differences between a fixed random initialized network and another randomly initialized network trained with previous state information. Liu et al. (2019) proposed Competitive Experience Replay (CER), in which they use two actors and a centralized critic, and defined an intrinsic reward by the state coincidence of two actors. The values of intrinsic rewards are fixed to be ±1 for the two actors separately. All of those approaches leverage the weighted sum of the external rewards, i.e., the primal rewards provided by environments, and intrinsic rewards that provided by different heuristics. A challenging problem is the trade-off between external rewards and intrinsic rewards. The Task-Novelty Bisector (TNB) learning method introduced by Zhang et al. (2019) aims to solve such problem by jointly optimize the extrinsic rewards and intrinsic rewards. Specifically, TNB updates the policy in the direction of the angular bisector of the two gradients, i.e., gradients of the extrinsic and intrinsic objective functions. However, the foundation of such joint optimization is not solid. Besides, creating an extra intrinsic reward function and evaluating the novelty of states or policies always requires additional neural networks such as auto-encoders. Thus extra computation expenses are needed (Zhang et al., 2019) .\nDiverse behaviors from rich environments and algorithms. Heess et al. (2017) introduce the Distributed Proximal Policy Optimization (DPPO) method and enable agents with simulated bodies to learn complex locomotion skills in a diverse set of challenging environments. Although the learning reward they utilize is straightforward, the skills their policy learned are quite impressive and effective in traveling terrains and obstacles. Their work shows that rich environments can encourage the emergence of different locomotion behaviors, but extra manual efforts are required in designing such environments. The research of Such et al. (2018) shows that different RL algorithms may converge to different policies for the same task. The authors find that algorithms based on policy gradient tend to converge to the same local optimum in the game of Pitfall, while off-policy and value-based algorithms are prone to learn sophisticated strategies. On the contrary, in this paper, we are more interested in how to learn different policies through a single learning algorithm and learn the capability of avoiding local optimum.\nPopulation-based novelty-seeking methods. Pugh et al. (2016) establish a standard framework for understanding and comparing different approaches to searching for quality diversity (QD). Conti et al. (2018) investigate adding novelty search (NS) and QD to evolution strategies (ES) to avoid local optima as well as achieve higher performance. Lehman & Stanley (2011; 2008) conclude that deriving an open-ended search algorithm that operates without pressure towards the ultimate objective is possible, suggesting ignoring the objective may often benefit the search itself. The work of Wang et al. (2019) yields a new kind of open-ended algorithm which indicates the solution to one environment might be a stepping stone to a new level of performance in another. Such et al. 
(2017) evolve a DNN with a population-based genetic algorithm (GA) for challenging RL tasks. By improving the vanilla TRPO algorithm (Schulman et al., 2015), Kurutach et al. (2018) maintains model uncertainty given the data collected from the environment via an ensemble of deep neural networks." }, { "heading": "3 QUANTIFYING THE DISTANCE BETWEEN POLICIES", "text": "To encourage the emergence of behavioral diversity in RL, we first define a metric to measure the difference between policies, which is the foundation for the later algorithm we propose. We denote the learned policies as {πθi ; θi ∈ Θ, i = 1, 2, ...}, wherein θi represents parameters of the i-th policy, Θ denotes the whole parameter space. In the following, we omit π and denote a policy πθi as θi for simplicity unless stated otherwise." }, { "heading": "3.1 DEFINITION", "text": "Mathematically, a metric should satisfy three important properties, namely the identity, the symmetry as well as the triangle inequality.\nDefinition 1 A metric space is an ordered pair (M,d) where M is a set and d is a metric on M , i.e., a function d : M ×M → R such that for any x, y, z ∈M , the following holds: 1. d(x, y) ≥ 0, d(x, y) = 0⇔ x = y, 2. d(x, y) = d(y, x), 3. d(x, z) ≤ d(x, y) + d(y, z).\nWe use the Total Variance Divergence DTV (Schulman et al., 2015) to measure the distance between policies. Concretely, for discrete probability distributions p and q, this distance is defined as DTV (p, q) = ∑ i |pi − qi|. 34\nTheorem 1 (Metric Space (Θ, DρTV )) The expectation of DTV (·, ·) of two policies over any state distribution ρ(s):\nD ρ\nTV (θi, θj) := Es∼ρ(s)[DTV (θi(s), θj(s))], (1) is a metric on Θ, thus (Θ, D ρ\nTV ) is a metric space.\nThe proof of Theorem 1 is in Appendix A. It is worth mentioning that, although TVD is used in our work, we can easily extend the result to use other distance between distributions as substitutes of TVD (e.g. Jensen Shannon divergence DJS or Wasserstein metric DW ) (Endres & Schindelin, 2003; Fuglede & Topsoe, 2004; Villani, 2008), and similar results can be get\nCorollary 1 Let DρJS := Es∼ρ(s)[DJS(θi(s), θj(s))] and D ρ\nW := Es∼ρ(s)[DW (θi(s), θj(s))], (Θ, D ρ\nJS) and (Θ, D ρ W ) are also metric spaces.\nOn top of the metric space (Θ, D ρ\nTV ), we could then compute the uniqueness of a policy.\nDefinition 2 (Uniqueness of Policy) Given a reference policy set Θref such that Θref = {θrefi , i = 1, 2, ...},Θref ⊂ Θ, the uniqueness U(θ|Θref) of policy θ is the minimal difference between θ and all policy in the reference policy set, i.e.,\nU(θ|Θref) := min θj∈Θref\nD ρ\nTV (θ, θj). (2)\n3It can be extended to continuous state and action spaces by replacing the sums with integrals. 4The factor 1\n2 in Schulman et al. (2015) is omitted in our work for conciseness.\nConsequently, to motivate RL with the social uniqueness, we hope our method can maximize the uniqueness of a new policy, i.e., maxθ U(θ|Θref ), where the Θref includes all the existing policies.\n3.2 ESTIMATION OF D ρ\nTV (θi, θj)\nIn practice, the calculation of D ρ\nTV (θi, θj) is based on Monte Carlo estimation. i.e., we need to sample s from ρ(s). Although in finite state space we can get precise estimation after establishing ergodicity, problem arises when we are facing continuous state cases. i.e. 
it is difficult to efficiently get enough samples.\nFormally, we denote the domain of ρ(s) as S and denote the domain of ρθ(s) as Sθ ⊂ S , where ρθ(s) := ρ(s|s ∼ θ) and in finite time horizon problems ρ(s|s ∼ θ) = P (s0 = s|θ) + P (s1 = s|θ) + ...+P (sT = s|θ). As we only care about the reachable regions, the domain S can be divided by S = limN→∞ ⋃N i=1 Sθi .\nIn order to improve the sample efficiency, we propose to approximate D ρ\nTV (θi, θj) with D ρθ TV (θi, θj), where θ is a certain fixed behavior policy that irrelevant to θi, θj . Such approximation requires a necessary condition:\nCondition 1 The domain of possible states are similar between different policies:∑ s∈S P (s ∈ (Sθ ∪ Sθj ) \\ (Sθ ∩ Sθj )) ∑ s∈S P (s ∈ (Sθ ∩ Sθj )),∀j. (3)\nWhen such condition holds, we can use ρ(s|s ∼ θ) as our choice of ρ(s), and the properties in Definition 1 still holds.\nIn practice, the Condition 1 always holds as we can ensure this by adding sufficiently large noise on θ, while the permitted state space is always limited. And for more general cases, to satisfy the properties in Definition 1, we must sample s from Sθ ∪ Sθj , accordingly,\nD ρ\nTV (θ, θj) = Es∼(Sθ∪Sj)[DTV (θ(s), θj(s))] = Es∼(Sθ∩Sθj )[DTV (θ(s), θj(s))] + Es∼(Sθ∪Sθj )\\Sθj [DTV (θ(s),N )]+\nEs∼(Sθ∪Sθj )\\Sθ [DTV (N , θj(s))] (4)\nwhere N represents random action when a policy have never been trained or visited such state domain. Plugging Eq.(4) into Eq.(2), the objective function of policy differentiation is\nmax θ min θj∈Θref\nD ρ\nTV (θ, θj) = Es∼(Sθ∩Sθj )[DTV (θ(s), θj(s))]\n+ Es∼(Sθ∪Sθj )\\Sθj [DTV (θ(s),N )] + Es∼(Sθ∪Sθj )\\Sθ [DTV (N , θj(s))] (5)\nWhile the first two terms are related to the policy θ, the last term is only related to the domain Sθ. If we enable sufficient exploration in training as well as in the initialization of θ, the last term will disappear (i.e. Sθj ⊂ Sθ). Hence we can also use D ρθi TV (θi, θj) as an approximation of D ρ\nTV (θi, θj) in training of θi as long as sufficient exploration is guaranteed.\nProposition 1 (Unbiased Single Trajectory Estimation) The estimation of ρθ(s) using a single trajectory τ is unbiased.\nThe proof of Proposition 1 is in Appendix B. Given the definition of uniqueness and a practically unbiased sampling method, the next step is to develop an efficient learning algorithm." }, { "heading": "4 INTERIOR POLICY DIFFERENTIATION", "text": "In the traditional RL paradigm, maximizing the expectation of cumulative rewards g = ∑ t=0 γ\ntrt is commonly used as the objective. i.e. maxθ∈Θ Eτ∼θ[g], where τ ∼ θ denotes a trajectory τ sampled from the policy θ using Monte Carlo methods.\nTo improve the behavioral diversity of different agents, the learning objective must take both reward from the primal task and the policy uniqueness into consideration. Previous approaches (Houthooft et al., 2016; Pathak et al., 2017; Burda et al., 2018a;b; Liu et al., 2019) often directly write the weighted sum of the reward from the primal task and the intrinsic reward gint = ∑ t=0 γ\ntrint,t, where rint,t denotes the intrinsic reward (e.g., rint = minθj∈Θref D ρ\nTV (θ, θj) as the uniqueness reward in our case) as follows,\nmax θ∈Θ Eτ∼θ[gtotal] = max θ∈Θ Eτ∼θ[α · gtask + (1− α) · gint], (6)\nwhere 0 < α < 1 is a weight parameter. Such an objective is sensitive to the selection of α as well as the formulation of rint. 
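To make this concrete, the raw uniqueness reward rint = min over the reference set of the estimated distance can be computed from sampled states as sketched below. The sketch assumes deterministic policies mapping states to action vectors and, following the implementation details in Appendix D, takes DTV(a1, a2) = |a1 - a2| summed over action dimensions; all function names are ours.

```python
import numpy as np

def policy_distance(policy_a, policy_b, states):
    """Monte Carlo estimate of Eq. (1): the expected total-variation
    distance between two policies over a batch of sampled states."""
    return float(np.mean([np.sum(np.abs(policy_a(s) - policy_b(s)))
                          for s in states]))

def intrinsic_uniqueness(policy, reference_policies, states):
    """U(theta | Theta_ref) of Definition 2: the minimal distance from
    the new policy to the reference set, used as the intrinsic reward."""
    return min(policy_distance(policy, ref, states)
               for ref in reference_policies)
```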
For example, in our case formulating the intrinsic reward rint as minθj D ρ TV (θ, θj), exp [minθj D ρ TV (θ, θj)] and − exp [−minθj D ρ\nTV (θ, θj)] will result in significantly different results. Besides, a trade-off arises in the selection of α: while a large α may undermine the contribution of intrinsic reward, a small α could ignore the importance of the reward, leading to the failure of agent in solving the primal task.\nTo tackle these issues, we draw inspiration from the observation that social uniqueness motivates people in passive ways. In other words, it plays more like a constraint rather than an additional target. Therefore, we change the multi-objective optimization problem in Eq.(6) into a constrained optimization problem as:\nmax θ∈Θ\nEτ∼θ[gtask],\ns.t. rint,t − r0 ≥ 0,∀t = 1, 2, ..., T, (7)\nwhere r0 is a threshold indicating minimal permitted uniqueness, and rint,t denotes a moving average of rint,t. Further discussion on the selection of r0 will be deliberated in Appendix D.\nFrom the perspective of optimization, Eq.(6) can be viewed as a penalty method which replaces the constrained optimization problem in Eq.(7) with the penalty term rint and the penalty coefficient 1−αα > 0, where the difficulty lies in the selection of α. The work of Zhang et al. (2019)) tackles this challenge by the Task Novel Bisector (TNB) in the form of Feasible Direction Methods (FDMs) (Zoutendijk, 1960). As a heuristic approximation, that approach requires reward shaping and intensive emphasis on rint,t. Instead, in this work we propose to solve the constrained optimization problem Eq.(7) by resembling the Interior Point Methods (IPMs) (Potra & Wright, 2000; Dantzig & Thapa, 2006). In vanilla IPMs, the constrained optimization problem in Eq.(7) is solved by reforming it to an unconstrained form with an additional barrier term in the objective as\nmax θ∈Θ Eτ∼θ[gtask + T∑ t=0 α log (rint,t − r0)]. (8)\nThe limit of Eq.(8) when α → 0 then leads to the solution of Eq.(7). Readers please refer to Appendix G for more discussion on the correspondence between those novel policy seeking methods and constrained optimization methods.\nHowever, directly applying the IPMs is computationally challenging and numerically unstable, especially when α is small. Luckily, in our proposed RL paradigm where the behavior of an agent is influenced by its peers, a more natural way can be used. Precisely, since the learning process is based on sampled transitions, we can simply bound the collected transitions in the feasible region by permitting previous trained M policies θi ∈ Θref, i = 1, 2, ...,M sending termination signals during the training process of new agents. In other words, we implicitly bound the feasible region by terminating any new agent that steps outside it. Consequently, during the training process, all valid samples we collected are inside the feasible region, which means these samples are less likely to appear in previously trained policies. At the end of the training, we then naturally obtain a new policy that has sufficient uniqueness. In this way, we no longer need to consider the trade-off problem between intrinsic and extrinsic rewards deliberately. The learning process of our method is thus more robust and no longer suffer from objective inconsistency. As our formulation of the constrained optimization problem Eq.(7) is inspired by IPMs, we name our approach as Interior Policy Differentiation (IPD) method." 
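A minimal sketch of the resulting rollout procedure is given below. It combines the peer-termination rule described above with the cumulative form of the constraint used in practice (Eq. (9) in Appendix F): the episode is cut short once the running sum of (rint,t - r0) turns negative after the first tS steps. The sketch assumes a classic gym-style environment API and deterministic action vectors; names and the default threshold are ours.

```python
import numpy as np

def collect_episode_ipd(env, policy, peers, r0=1.0, t_start=20):
    """Roll out the new policy; previously trained peers send a
    termination signal when the cumulative uniqueness constraint
    is violated, so all collected samples stay in the feasible region."""
    trajectory, cum_margin = [], 0.0
    state, done, t = env.reset(), False, 0
    while not done:
        action = policy(state)
        if peers and t >= t_start:
            r_int = min(np.sum(np.abs(action - peer(state))) for peer in peers)
            cum_margin += r_int - r0
            if cum_margin < 0.0:
                break  # peer "veto": the new policy is not unique enough
        next_state, r_task, done, _ = env.step(action)
        trajectory.append((state, action, r_task, next_state, done))
        state, t = next_state, t + 1
    return trajectory  # fed to a standard PPO update
```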
}, { "heading": "5 EXPERIMENTS", "text": "The MuJoCo environment We demonstrate our proposed method on the OpenAI Gym where the physics engine is based on MuJoCo (Brockman et al., 2016; Todorov et al., 2012). Concretely, we test on three locomotion environments, the Hopper-v3 (11 observations and 3 actions), Walker2dv3 (11 observations and 2 actions), and HalfCheetah-v3 (17 observations and 6 actions). In our experiments, all the environment parameters are set as default values.\nUniqueness beyond intrinsic stochasticity Experiments in Henderson et al. (2018) show that policies that perform differently can be produced by simply selecting different random seeds before training. Before applying our method to improve behavior diversity, we firstly benchmark how much uniqueness can be generated from the stochasticity in the training process of vanilla RL algorithms as well as the random weight initialization. In this work, we mainly demonstrate our proposed method based on PPO(Schulman et al., 2017). The extension to other popular algorithms is straightforward. We also compare our proposed method with the TNB and weighted sum reward (WSR) approaches as different ways to combine the goal of the task and the uniqueness motivation (Zhang et al., 2019). More implementation details are depicted in Appendix D." }, { "heading": "5.1 UNIQUENESS AND PERFORMANCE COMPARISON", "text": "According to Theorem 2, the uniqueness rint in equation (7) under our uniqueness metric can be unbiased approximated by rint = minθj∈Θref D ρθ TV (θ(st), θj(st)). i.e., we utilize the metric directly in learning new policies instead of applying any kind of reshaping.\nWe implement WSR, TNB, and our method in the same experimental settings and for each method, 10 different policies are trained and try to be unique with regard to all previously trained policies\nsequentially. Concretely, the 1st policy is trained by ordinary PPO without any social influence. The 2nd policy should be different from 1st policy, and the 3rd should be different from the previous two policies, and so on. Fig.2 shows the qualitative results of our method. We visualize the motion of agents by drawing multiple frames representing the pose of agents at different time steps in the same row. The horizontal interval between consecutive frames is proportional to the velocity of agents. The settings of the frequency of highlighted frames and the correlation between interval and velocity are fixed for each environment. The visualization starts from the beginning of each episode and therefore the readers can get sense of the process of acceleration as well as the pattern of motion of agents clearly.\nFig. 3 shows our experimental results in terms of uniqueness (the x-axis) and the performance (the y-axis). Policies in the upper right are the more unique ones with higher performance. In Hopper and HalfCheetah, our proposed method distinctively outperforms other methods. In Walker2d, both WSR and our method work well in improving the uniqueness of policies, but none of the three methods can find way to surpass the performance of PPO apparently. Detailed comparison on the task related rewards are carried out in Table 1. A box figure depicting the performance of each trained policy and their reward gaining curve are disposed in Fig.5 and Fig.6 in Appendix C. And Fig.7 in Appendix C provides more detailed results from the view of uniqueness." 
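The sequential protocol just described can be summarized in a few lines. This is an illustrative sketch in which train_ppo_with_ipd stands in for a PPO learner whose rollouts apply the peer-termination rule of Section 4 (e.g., via collect_episode_ipd above), and r0 is a placeholder threshold.

```python
def train_population(make_env, train_ppo_with_ipd, n_policies=10, r0=1.0):
    """Policy 1 is trained by ordinary PPO (empty peer set); each later
    policy is constrained to differ from all previously trained ones."""
    population = []
    for _ in range(n_policies):
        policy = train_ppo_with_ipd(make_env(), peers=list(population), r0=r0)
        population.append(policy)
    return population
```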
}, { "heading": "5.2 SUCCESS RATE OF EACH METHOD", "text": "In addition to averaged reward, we also use success rate as another metrics to compare the performance of different approaches. In this work, we consider a policy is success when its performance is at least as good as the averaged performance of policies trained without social influences. To be specific, we use the averaged final performance of PPO as the baseline. If a new policy, which aims at performing differently to solve the same task, surpasses the baseline during its training process, it will be regarded as a successful policy. Through the success rate, we know the policy does not learn unique behavior at the expense of performance. Table 1 shows the success rate of all the methods, including the PPO baseline. The results show that our method can always surpass the average baseline during training. Thus the performance of our method can always be insured.\nHopper\nWalker2d\nHalfCheetah" }, { "heading": "5.3 BETTER POLICY DISCOVERY", "text": "In our experiments, we observed noticeable performance improvements in the Hopper and the HalfCheetah environments. For the environment of Hopper, in many cases, the agents trained with PPO tend to learn a policy that jumps as far as possible and then fall to the ground and terminate this episode (please refer to Fig.11 in Appendix E). Our proposed method can prevent new policies from always falling into the same local minimum. After the first policy being trapped in a local minimum, the following policies will try other approaches to avoid the same behavior, explore other feasible action patterns, and thereafter the performance may get improved. Such property shows that our method can be a helpful enhancement of the traditional RL scheme, which can be epitomized as policies could make mistakes, but they should explore more instead of hanging around the same local minimum. The similar feature attributes to the reward growth in the environment of HalfCheetah.\nMoreover, we can illuminate the performance improvement of HalfCheetah from another perspective. The environment of HalfCheetah is quite different from the other two for there is no explicit termination signal in its default settings (i.e., no explicit action like falling to the ground would trigger termination). At the beginning of the learning process, an agent will act randomly, resulting in massive repeat, trivial samples as well as large control costs. In our learning scheme, since the agent also interacts with the peers, it can receive termination signals from the peers to prevent wasting too much effort acting randomly. During the learning process in our method, an agent will first learn to terminate itself as soon as possible to avoid heavy control costs by imitating previous policies and then learns to behave differently to pursue higher reward. From this point of view, such learning process can be regarded as a kind of implicit curriculum." }, { "heading": "5.4 SCALE OF THE INFLUENCE", "text": "As the number of policies learned with social influence grows, the difficulty of finding a unique policy may also increase. Later policies must keep away from all previous solutions. The results of our ablation study on how the performance changes under different scales of social influence (i.e., the number of peers) is shown in Fig. 4, where the thresholds are selected according to our previous ablation study in Sec. D. 
The performance decrease is more obvious in Hopper than in the other two environments, since the action space of Hopper is only 3-dimensional; thus, the number of diverse policies that can be discovered is limited." }, { "heading": "6 CONCLUSION", "text": "In this work, we develop an efficient approach to motivate RL to learn diverse strategies, inspired by social influence. After defining the distance between policies, we introduce the definition of policy uniqueness. Formulating the problem as a constrained optimization problem, our proposed method, Interior Policy Differentiation (IPD), draws on the key insight of interior point methods. Our experimental results demonstrate that IPD can learn various well-behaved policies, and our approach can help agents avoid local minima and can be interpreted as a kind of implicit curriculum learning in certain cases." }, { "heading": "A PROOF OF THEOREM 1", "text": "The first two properties are obviously guaranteed by D_TV^ρ. As for the triangle inequality,
E_{s∼ρ(s)}[D_TV(θ_i(s), θ_k(s))]
= E_{s∼ρ(s)}[ Σ_{l=1}^{|A|} |θ_i(s) − θ_k(s)| ]
= E_{s∼ρ(s)}[ Σ_{l=1}^{|A|} |θ_i(s) − θ_j(s) + θ_j(s) − θ_k(s)| ]
≤ E_{s∼ρ(s)}[ Σ_{l=1}^{|A|} ( |θ_i(s) − θ_j(s)| + |θ_j(s) − θ_k(s)| ) ]
= E_{s∼ρ(s)}[ Σ_{l=1}^{|A|} |θ_i(s) − θ_j(s)| ] + E_{s∼ρ(s)}[ Σ_{l=1}^{|A|} |θ_j(s) − θ_k(s)| ]
= E_{s∼ρ(s)}[D_TV(θ_i(s), θ_j(s))] + E_{s∼ρ(s)}[D_TV(θ_j(s), θ_k(s))]." }, { "heading": "B PROOF OF PROPOSITION 1", "text": "ρ_θ(s) = P(s_0 = s | θ) + P(s_1 = s | θ) + … + P(s_T = s | θ)
= lim_{N→∞} [ Σ_{i=1}^{N} I(s_0 = s | τ_i) / N + Σ_{i=1}^{N} I(s_1 = s | τ_i) / N + … + Σ_{i=1}^{N} I(s_T = s | τ_i) / N ]  (by the law of large numbers)
= lim_{N→∞} Σ_{j=0}^{T} Σ_{i=1}^{N} I(s_j = s | τ_i) / N.
The empirical estimator is ρ̂_θ(s) = Σ_{i=1}^{N} Σ_{j=0}^{T} I(s_j = s | τ_i) / N, and hence E[ρ̂_θ(s) − ρ_θ(s)] = 0." }, { "heading": "C DETAILS OF UNIQUENESS AND PERFORMANCE", "text": "[Figures 5–7: performance box plots, reward curves, and uniqueness results for Hopper-v3, Walker2d-v3, and HalfCheetah-v3.]
D IMPLEMENTATION DETAILS
Calculation of D_TV We use the deterministic part of the policies in the calculation of D_TV, i.e., we remove the Gaussian noise on the action space in PPO and use D_TV(a_1, a_2) = |a_1 − a_2|.
Network Structure We use MLPs with 2 hidden layers as our actor models in PPO. The first hidden layer is fixed to have 32 units. Our ablation study on the choice of the number of units in the second layer is detailed in Table 2, Table 3, and Fig. 8. Moreover, we choose to use 10, 64, and 256 hidden units for the three tasks, respectively, in all of the main experiments, after taking the success rate (Table 2), performance (Table 3), and computational expense (i.e., preferring fewer units when the other two factors are similar) into consideration.
Training Timesteps We fix the training timesteps in our experiments: 1M for Hopper-v3, 1.6M for Walker2d-v3, and 3M for HalfCheetah-v3.
Threshold Selection In our proposed method, we can flexibly control the magnitude of policy uniqueness by adjusting the constraint threshold r_0. Choosing different thresholds leads to different policy behaviors. Concretely, a larger threshold may drive the agent to behave more differently, while a smaller threshold imposes a lighter constraint on the behavior of the agent. Intuitively, a larger threshold will lead to relatively poor performance, since the learning algorithm is less likely to find a feasible solution to Eq. (7).
Besides, we do not use constraints in the form of Eq. (7), as we need not force every single action of a new agent to be different from others. Instead, we care more about long-term differences. Therefore, we use the cumulative uniqueness as constraints,
max_{θ∈Θ} E_{τ∼θ}[g_task],
s.t.
Σ_{t=0}^{τ} (r_{int,t} − r_0) ≥ 0, ∀τ = 1, 2, …, T.
We test our method with different choices of threshold values. The performance of agents under different thresholds is shown in Fig. 9, and a more detailed analysis of their success rates is presented in Table 2.
[Figure 9: per-threshold results for Hopper-v3, Walker2d-v3, and HalfCheetah-v3.]" }, { "heading": "E MORE QUALITATIVE RESULTS", "text": "[Qualitative result figures, including Fig. 11 referenced in Section 5.3.]
F IMPLEMENTATION OF EQ.(7)
We do not use constraints in the form of Eq. (7), as we need not force every single action of a new agent to be different from others. Instead, we care more about long-term differences. Therefore, we use the cumulative uniqueness as constraints. Moreover, the constraints can be applied only after the first t_S timesteps (e.g., t_S = 20) to account for similar starting sequences:
max_{θ∈Θ} E_{τ∼θ}[g_task],
s.t. Σ_{t=t_S}^{τ} (r_{int,t} − r_0) ≥ 0, ∀τ = t_S, …, T. (9)" }, { "heading": "G RELATION BETWEEN DIFFERENT APPROACHES AND CONSTRAINED OPTIMIZATION METHODS", "text": "We note here that the WSR, TNB, and IPD methods correspond to three classical approaches to constrained optimization. For simplicity, we consider Eq. (9) with the more concise notation g_{int,t} − g_{0,t} ≥ 0, where g_{int,t} = Σ_{t′=0}^{t} r_{int,t′}, i.e.,
max_{θ∈Θ} f(θ) = E_{τ∼θ}[g_task]
s.t. g_t(θ) = g_{int,t} − g_{0,t} ≥ 0, t = 1, 2, …, T. (10)
As the optimization of the policy is based on batches of trajectory samples and is implemented with stochastic gradient descent, Eq. (10) can be further simplified as:
max_{θ∈Θ} f(θ) = E_{τ∼θ}[g_task]
s.t. g(θ) = ḡ_t(θ) ≥ 0, (11)
where ḡ_t(θ) denotes the average over a trajectory.
WSR: Penalty Method The penalty method handles the constraint of Eq. (11) by moving g(θ) into a penalty term and then solving the unconstrained problem
max_{θ∈Θ} f(θ) + ((1 − α)/α) · min{g(θ), 0} (12)
in an iterative manner; the limit α → 0 leads to the solution of the primal constrained problem. As an approximation, WSR chooses a fixed weight α and uses the gradient ∇_θ f + ((1 − α)/α) ∇_θ g instead of ∇_θ f + ((1 − α)/α) ∇_θ min{g(θ), 0}; thus the final solution relies heavily on the selection of α.
TNB: Feasible Direction Method The Taylor series of g(θ) at a point θ̄ is
g(θ̄ + λp) = g(θ̄) + ∇_θ g(θ̄)^⊤ λp + O(‖λp‖). (13)
The feasible direction method (FDM) handles the constraint of Eq. (11) by first finding a direction p that satisfies
∇_θ f^⊤ p > 0, and ∇_θ g^⊤ p > 0 if g = 0, (14)
so that for small λ we have
g(θ̄ + λp) = g(θ̄) + λ ∇_θ g(θ̄)^⊤ p > g(θ̄) = 0 if g(θ̄) = 0 (15)
and
g(θ̄ + λp) = g(θ̄) + λ ∇_θ g(θ̄)^⊤ p > 0 if g(θ̄) > 0. (16)
The TNB method, using the bisector of the gradients ∇_θ f and ∇_θ g, selects p to be
p = ∇_θ f + (|∇_θ f| / |∇_θ g|) · ∇_θ g · cos(∇_θ f, ∇_θ g) if cos(∇_θ f, ∇_θ g) ≤ 0,
p = ∇_θ f + (|∇_θ f| / |∇_θ g|) · ∇_θ g if cos(∇_θ f, ∇_θ g) > 0. (17)
Clearly, Eq. (17) satisfies Eq. (14), but it is stricter than Eq. (14), as the ∇_θ g term is always present during the optimization of TNB. In TNB, the learning stride is fixed to (|∇_θ f| + |∇_θ g|)/2, leading to problems when ∇_θ f → 0, which shows that the final optimization result relies heavily on the selection of g, i.e., the shape of g is crucial for the success of TNB.
IPD: Interior Point Methods (IPMs) In vanilla IPMs, the constrained optimization problem in Eq. (11) is solved by reformulating it into an unconstrained form with an additional barrier term α · 1/g(θ) in the objective as
max_{θ∈Θ} f(θ) + α · 1/g(θ), (18)
or using the barrier term −α log g(θ) instead:
max_{θ∈Θ} f(θ) − α log g(θ), (19)
where α, the barrier factor, is a small positive number. As α is small, the barrier term introduces only a minuscule influence on the objective.
On the other hand, when θ gets closer to the barrier, the objective increases rapidly. It is clear that the solution of the objective with the barrier term gets closer to that of the primal objective as α gets smaller. Thus, in practice, such methods choose a sequence {α_k} with 0 < α_{k+1} < α_k and α_k → 0 as k → ∞. The limits of Eq. (18) and Eq. (19) as α → 0 then lead to the solution of Eq. (11). The works of Conn et al. (1997) and Wright (2001) provide proofs of convergence.
Directly applying this method is computationally challenging and numerically unstable, especially when α is small. A more natural way can be used: since the learning process is based on sampled transitions, we can simply bound the collected transitions to the feasible region by permitting the M previously trained policies θ_i ∈ Θ_ref, i = 1, 2, …, M, to send termination signals during the training process of new agents. In other words, we implicitly bound the feasible region by terminating any new agent that steps outside it.
Consequently, during the training process, all valid samples we collect are inside the feasible region, which means these samples are less likely to appear under previously trained policies. At the end of training, we then naturally obtain a new policy with sufficient uniqueness. In this way, we no longer need to deliberately consider the trade-off between intrinsic and extrinsic rewards. The learning process of our method is thus more robust and no longer suffers from objective inconsistency. Algorithm 1 shows the pseudocode of IPD based on PPO, where the blue lines mark the additions to the primal PPO algorithm.
Algorithm 1 IPD with PPO, Actor-Critic Style
Require:
• a behavior policy θ_old
• a set of previous policies {θ_j}, j = 1, 2, …, M
• a uniqueness metric U(θ, {θ_j} | ρ) = U(θ, {θ_j} | τ) = min_{θ_j} D_TV^τ(θ, θ_j)
• a uniqueness threshold r_0 and a starting point t_S
Initialize θ_old
for iteration = 1, 2, … do
  for actor = 1, 2, …, N do
    for t = 1, 2, …, T do
      Run policy θ_old in the environment and collect trajectory τ
      if U(θ_old, {θ_j} | τ) − r_0 < 0 and t > t_S then
        done = True
      end if
      if done then break end if
    end for
    Compute advantage estimates Â_1, …, Â_T
  end for
  Optimize surrogate L^CLIP w.r.t. θ, with K epochs and minibatch size M ≤ NT
  θ_old ← θ
end for" } ]
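To make the core training-loop change concrete, the following is a minimal sketch of the IPD termination rule from Eq. (9) and Algorithm 1, wrapped around a pre-0.26 Gym-style environment. It is a simplified illustration under our own assumptions, not the authors' released implementation; the names IPDEarlyTermination, reference_policies, r0, and t_start are illustrative.

import numpy as np

class IPDEarlyTermination:
    # Wraps an environment so an episode ends as soon as the cumulative
    # uniqueness constraint sum_{t=t_S}^{tau} (r_int,t - r_0) >= 0 is violated.
    def __init__(self, env, reference_policies, r0=0.1, t_start=20):
        self.env = env
        self.refs = reference_policies   # deterministic, previously trained policies
        self.r0 = r0                     # uniqueness threshold r_0
        self.t_start = t_start           # constraint only enforced after t_S steps

    def reset(self):
        self.t = 0
        self.budget = 0.0                # running value of sum_t (r_int,t - r_0)
        self.state = self.env.reset()
        return self.state

    def step(self, action):
        next_state, reward, done, info = self.env.step(action)
        if self.refs and self.t >= self.t_start:
            # r_int,t: distance to the action of the closest reference policy,
            # computed on the state the action was chosen from
            r_int = min(np.abs(action - ref(self.state)).sum() for ref in self.refs)
            self.budget += r_int - self.r0
            if self.budget < 0.0:        # cumulative constraint of Eq. (9) violated
                done = True
        self.t += 1
        self.state = next_state
        return next_state, reward, done, info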
2019
INTERIOR POLICY DIFFERENTIATION
SP:e8af90f522657cb1cc069da98c22ae60d04b8879
[ "This paper proposes a two-stage GNN-based architecture to establish correspondences between two graphs. The first step is to learn node embeddings using a GNN to obtain soft node correspondences between two graphs. The second step is to iteratively refine them using the constraints of matching consensus in local neighborhoods between graphs. The overall refining process resembles the classic graph matching algorithm of graduated assignment (Gold & Rangarajan, 1996), but generalizes it using deep neural representation. Experiments show that the proposed algorithm performs well on real-world tasks of image matching and knowledge graph entity alignment.", "The authors proposed a message passing neural network-based graph matching methods. The overall framework can be viewed as a graph siamese network, where two set of points are passing through the same graph neural network, and then two new embeddings are generated. Using the two embedding the similarity between points can be computed and then the final matching can be generated. " ]
This work presents a two-stage neural architecture for learning and refining structural correspondences between graphs. First, we use localized node embeddings computed by a graph neural network to obtain an initial ranking of soft correspondences between nodes. Secondly, we employ synchronous message passing networks to iteratively re-rank the soft correspondences to reach a matching consensus in local neighborhoods between graphs. We show, theoretically and empirically, that our message passing scheme computes a well-founded measure of consensus for corresponding neighborhoods, which is then used to guide the iterative re-ranking process. Our purely local and sparsity-aware architecture scales well to large, real-world inputs while still being able to recover global correspondences consistently. We demonstrate the practical effectiveness of our method on real-world tasks from the fields of computer vision and entity alignment between knowledge graphs, on which we improve upon the current state-of-the-art. Our source code is available under https://github.com/rusty1s/deep-graph-matching-consensus.
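As a minimal illustration of the first (feature matching) stage described in this abstract, the step can be read as a siamese GNN followed by pairwise similarities and a normalization. This is only a sketch under our own assumptions, not the authors' released implementation; gnn stands for any shared, permutation-equivariant graph neural network, and all names are illustrative.

import torch
import torch.nn as nn

class InitialMatcher(nn.Module):
    # Stage one: embed both graphs with a *shared* GNN and turn pairwise
    # embedding similarities into soft correspondences.
    def __init__(self, gnn):
        super().__init__()
        self.gnn = gnn  # e.g., a stack of PyTorch Geometric operators

    def forward(self, x_s, edge_index_s, x_t, edge_index_t):
        h_s = self.gnn(x_s, edge_index_s)   # [num_source_nodes, dim]
        h_t = self.gnn(x_t, edge_index_t)   # [num_target_nodes, dim]
        scores = h_s @ h_t.t()              # pairwise similarity scores
        # row-wise softmax: each source node gets a distribution over targets
        return torch.softmax(scores, dim=-1)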
[ { "affiliations": [], "name": "Matthias Fey" }, { "affiliations": [], "name": "Jan E. Lenssen" }, { "affiliations": [], "name": "Christopher Morris" }, { "affiliations": [], "name": "Jonathan Masci" }, { "affiliations": [], "name": "Nils M. Kriege" } ]
[ { "authors": [ "R.P. Adams", "R.S. Zemel" ], "title": "Ranking via sinkhorn propagation", "venue": "CoRR, abs/1106.1925,", "year": 2011 }, { "authors": [ "Y. Aflalo", "A. Bronstein", "R. Kimmel" ], "title": "On convex relaxation of graph isomorphism", "venue": "Proceedings of the National Academy of Sciences,", "year": 2015 }, { "authors": [ "K. Anstreicher" ], "title": "Recent advances in the solution of quadratic assignment problems", "venue": "Mathematical Programming,", "year": 2003 }, { "authors": [ "V. Arvind", "J. Kbler", "G. Rattan", "O. Verbitsky" ], "title": "On the power of color refinement", "venue": "In Fundamentals of Computation Theory,", "year": 2015 }, { "authors": [ "Y. Bai", "H. Ding", "Y. Sun", "W. Wang" ], "title": "Convolutional set matching for graph similarity", "venue": "In NeurIPSW,", "year": 2018 }, { "authors": [ "Y. Bai", "H. Ding", "S. Bian", "T. Chen", "Y. Sun", "W. Wang" ], "title": "SimGNN: A neural network approach to fast graph similarity computation", "venue": null, "year": 2019 }, { "authors": [ "P.W. Battaglia", "J.B. Hamrick", "V. Bapst", "A. Sanchez-Gonzalez", "V.F. Zambaldi", "M. Malinowski", "A. Tacchetti", "D. Raposo", "A. Santoro", "R. Faulkner", "Ç. Gülçehre", "F. Song", "A.J. Ballard", "J. Gilmer", "G.E. Dahl", "A. Vaswani", "K. Allen", "C. Nash", "V. Langston", "C. Dyer", "N. Heess", "D. Wierstra", "P. Kohli", "M. Botvinick", "O. Vinyals", "Y. Li", "R. Pascanu" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": null, "year": 2018 }, { "authors": [ "M. Bayati", "D.F. Gleich", "A. Saberi", "Y. Wang" ], "title": "Message-passing algorithms for sparse network alignment", "venue": "ACM Transactions on Knowledge Discovery from Data,", "year": 2013 }, { "authors": [ "J. Bento", "S. Ioannidis" ], "title": "A family of tractable graph distances", "venue": "In SDM,", "year": 2018 }, { "authors": [ "P. Bojanowski", "E. Grave", "A. Joulin", "T. Mikolov" ], "title": "Enriching word vectors with subword information", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "S. Bougleux", "L. Brun", "V. Carletti", "P. Foggia", "B. Gazre", "M. Vento" ], "title": "Graph edit distance as a quadratic assignment problem", "venue": "Pattern Recognition Letters,", "year": 2017 }, { "authors": [ "L. Bourdev", "J. Malik" ], "title": "Poselets: Body part detectors trained using 3D human pose annotations", "venue": "In ICCV,", "year": 2009 }, { "authors": [ "M.M. Bronstein", "J. Bruna", "Y. LeCun", "A. Szlam", "P. Vandergheynst" ], "title": "Geometric deep learning: Going beyond euclidean data", "venue": null, "year": 2017 }, { "authors": [ "H. Bunke" ], "title": "On a relation between graph edit distance and maximum common subgraph", "venue": "Pattern Recognition Letters,", "year": 1997 }, { "authors": [ "H. Bunke", "K. Shearer" ], "title": "A graph distance metric based on the maximal common subgraph", "venue": "Pattern Recognition Letters,", "year": 1998 }, { "authors": [ "T.S. Caetano", "J.J. McAuley", "L. Cheng", "Q.V. Le", "A.J. Smola" ], "title": "Learning graph matching", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2009 }, { "authors": [ "Y. Cao", "Z. Liu", "C. Li", "J. Li", "T. Chua" ], "title": "Multi-channel graph neural network for entity alignment", "venue": null, "year": 2019 }, { "authors": [ "X. Chen", "H. Huo", "J. Huan", "J.S. 
Vitter" ], "title": "An efficient algorithm for graph edit distance computation", "venue": "Knowledge-Based Systems,", "year": 2019 }, { "authors": [ "M. Cho", "K. Alahari", "J. Ponce" ], "title": "Learning graphs to match", "venue": "In ICCV,", "year": 2013 }, { "authors": [ "C.B. Choy", "J. Gwak", "S. Savarese", "M. Chandraker" ], "title": "Universal correspondence network", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "D. Conte", "P. Foggia", "C. Sansone", "M. Vento" ], "title": "Thirty years of graph matching in pattern recognition", "venue": "International Journal of Pattern Recognition and Artificial Intelligence,", "year": 2004 }, { "authors": [ "X. Cortés", "D. Conte", "H. Cardot" ], "title": "Learning edit cost estimation models for graph edit distance", "venue": "Pattern Recognition Letters,", "year": 2019 }, { "authors": [ "T. Cour", "P. Srinivasan", "J. Shi" ], "title": "Balanced graph matching", "venue": "In NIPS,", "year": 2006 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A large-scale hierarchical image database", "venue": "In CVPR,", "year": 2009 }, { "authors": [ "T. Derr", "H. Karimi", "X. Liu", "J. Xu", "J. Tang" ], "title": "Deep adversarial network", "venue": "alignment. CoRR,", "year": 2019 }, { "authors": [ "A. Egozi", "Y. Keller", "H. Guterman" ], "title": "A probabilistic approach to spectral graph matching", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2013 }, { "authors": [ "P. Erdős", "A. Rényi" ], "title": "On random graphs I", "venue": "Publicationes Mathematicae Debrecen,", "year": 1959 }, { "authors": [ "M. Everingham", "L. Van Gool", "C.K.I. Williams", "J. Winn", "A. Zisserman" ], "title": "The Pascal visual object classes (VOC) challenge", "venue": "In IJCV,", "year": 2010 }, { "authors": [ "M. Fey", "J.E. Lenssen" ], "title": "Fast graph representation learning with PyTorch Geometric", "venue": "In ICLR-W,", "year": 2019 }, { "authors": [ "M. Fey", "J.E. Lenssen", "F. Weichert", "H. Müller" ], "title": "SplineCNN: Fast geometric deep learning with continuous B-spline kernels", "venue": null, "year": 2018 }, { "authors": [ "M.R. Garey", "D.S. Johnson" ], "title": "Computers and Intractability: A Guide to the Theory of NPCompleteness", "venue": null, "year": 1979 }, { "authors": [ "J. Gilmer", "S.S. Schoenholz", "P.F. Riley", "O. Vinyals", "G.E. Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": null, "year": 2017 }, { "authors": [ "X. Glorot", "A. Bordes", "Y. Bengio" ], "title": "Deep sparse rectifier neural networks", "venue": "In AISTATS,", "year": 2011 }, { "authors": [ "S. Gold", "A. Rangarajan" ], "title": "A graduated assignment algorithm for graph matching", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 1996 }, { "authors": [ "M. Gori", "M. Maggini", "L. Sarti" ], "title": "Exact and approximate graph matching using random walks", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2005 }, { "authors": [ "K. Gouda", "M. Hassaan" ], "title": "CSI GED: An efficient approach for graph edit similarity computation", "venue": "In ICDE,", "year": 2016 }, { "authors": [ "P. Goyal", "E. Ferrara" ], "title": "Graph embedding techniques, applications, and performance: A survey", "venue": "Knowledge-Based Systems,", "year": 2018 }, { "authors": [ "M. Grohe", "G. Rattan", "G.J. 
Woeginger" ], "title": "Graph similarity and approximate isomorphism", "venue": "In Mathematical Foundations of Computer Science,", "year": 2018 }, { "authors": [ "A. Grover", "J. Leskovec" ], "title": "Node2Vec: Scalable feature learning for networks", "venue": "In SIGKDD,", "year": 2016 }, { "authors": [ "O. Halimi", "O. Litany", "E. Rodolà", "A.M. Bronstein", "R. Kimmel" ], "title": "Self-supervised learning of dense shape correspondence", "venue": null, "year": 2019 }, { "authors": [ "B. Ham", "M. Cho", "C. Schmid", "J. Ponce" ], "title": "Proposal flow", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "W.L. Hamilton", "R. Ying", "J. Leskovec" ], "title": "Representation learning on graphs: Methods and applications", "venue": "IEEE Data Engineering Bulletin,", "year": 2017 }, { "authors": [ "M. Heimann", "H. Shen", "T. Safavi", "D. Koutra" ], "title": "REGAL: Representation learning-based graph alignment", "venue": "In CIKM,", "year": 2018 }, { "authors": [ "S. Ioffe", "C. Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In ICML,", "year": 2015 }, { "authors": [ "M. Jaggi" ], "title": "Revisiting Frank-Wolfe: Projection-free sparse convex optimization", "venue": "In ICML,", "year": 2013 }, { "authors": [ "V. Kann" ], "title": "On the approximability of the maximum common subgraph problem", "venue": "In STACS,", "year": 1992 }, { "authors": [ "Kristian Kersting", "Martin Mladenov", "Roman Garnett", "Martin Grohe" ], "title": "Power iterated color refinement", "venue": "In AAAI,", "year": 2014 }, { "authors": [ "D.P. Kingma", "J.L. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "T.N. Kipf", "M. Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "G.W. Klau" ], "title": "A new graph-based method for pairwise global network alignment", "venue": "BMC Bioinformatics,", "year": 2009 }, { "authors": [ "G. Kollias", "S. Mohammadi", "A. Grama" ], "title": "Network similarity decomposition (NSD): A fast and scalable approach to network alignment", "venue": "IEEE Tranactions on Knowledge and Data Engineering,", "year": 2012 }, { "authors": [ "N.M. Kriege", "P.L. Giscard", "F. Bause", "R.C. Wilson" ], "title": "Computing optimal assignments in linear time for approximate graph matching", "venue": "In ICDM,", "year": 2019 }, { "authors": [ "N.M. Kriege", "L. Humbeck", "O. Koch" ], "title": "Chemical similarity and substructure searches. In Encyclopedia of Bioinformatics and Computational Biology", "venue": null, "year": 2019 }, { "authors": [ "G. Lample", "A. Conneau", "M. Ranzato", "L. Denoyer", "H. Jégou" ], "title": "Word translation without parallel data", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "M. Leordeanu", "M. Hebert" ], "title": "A spectral technique for correspondence problems using pairwise constraints", "venue": "In ICCV,", "year": 2005 }, { "authors": [ "M. Leordeanu", "M. Hebert", "R. Sukthankar" ], "title": "An integer projected fixed point method for graph matching and MAP inference", "venue": "In NIPS,", "year": 2009 }, { "authors": [ "J. Lerouge", "Z. Abu-Aisheh", "R. Raveaux", "P. Hroux", "S. Adam" ], "title": "New binary linear programming formulation to compute the graph edit distance", "venue": "Pattern Recognition,", "year": 2017 }, { "authors": [ "Y. Li", "C. Gu", "T. Dullien", "O. Vinyals", "P. 
Kohli" ], "title": "Graph matching networks for learning the similarity of graph structured objects", "venue": null, "year": 2019 }, { "authors": [ "O. Litany", "T. Remez", "E. Rodolà", "A.M. Bronstein", "M.M. Bronstein" ], "title": "Deep functional maps: Structured prediction for dense shape correspondence", "venue": null, "year": 2017 }, { "authors": [ "V. Lyzinski", "D.E. Fishkind", "M. Fiori", "J.T. Vogelstein", "C.E. Priebe", "G. Sapiro" ], "title": "Graph matching: Relax at your own risk", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2016 }, { "authors": [ "C. Morris", "M. Ritzert", "M. Fey", "W.L. Hamilton", "J.E. Lenssen", "G. Rattan", "M. Grohe" ], "title": "Weisfeiler and Leman go neural: Higher-order graph neural networks", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "R.L. Murphy", "B. Srinivasan", "V. Rao", "B. Ribeiro" ], "title": "Relational pooling for graph representations", "venue": "In ICML,", "year": 2019 }, { "authors": [ "M. Ovsjanikov", "M. Ben-Chen", "J. Solomon", "A. Butscher", "L.J. Guibas" ], "title": "Functional maps: A flexible representation of maps between shapes", "venue": "ACM Transactions on Graphics,", "year": 2012 }, { "authors": [ "L. Page", "S. Brin", "R. Motwani", "T. Winograd" ], "title": "The PageRank citation ranking: Bringing order to the web", "venue": "Technical report, Stanford InfoLab,", "year": 1999 }, { "authors": [ "A. Paszke", "S. Gross", "S. Chintala", "G. Chanan", "E. Yang", "Z. DeVito", "Z. Lin", "A. Desmaison", "L. Antiga", "A. Lerer" ], "title": "Automatic differentiation in PyTorch", "venue": "NIPS-W,", "year": 2017 }, { "authors": [ "G. Peyré", "M. Cuturi", "J. Solomon" ], "title": "Gromov-Wasserstein averaging of kernel and distance matrices", "venue": "In ICML,", "year": 2016 }, { "authors": [ "K. Riesen", "H. Bunke" ], "title": "Approximate graph edit distance computation by means of bipartite graph matching", "venue": "Image and Vision Computing,", "year": 2009 }, { "authors": [ "K. Riesen", "M. Ferrer", "R. Dornberger", "H. Bunke" ], "title": "Greedy graph edit distance. In Machine Learning and Data Mining in Pattern Recognition, 2015a", "venue": null, "year": 2015 }, { "authors": [ "K. Riesen", "M. Ferrer", "A. Fischer", "H. Bunke" ], "title": "Approximation of graph edit distance in quadratic time. In Graph-Based Representations in Pattern Recognition, 2015b", "venue": null, "year": 2015 }, { "authors": [ "I. Rocco", "M. Cimpo", "R. Arandjelović", "A. Torii", "T. Pajdla", "J. Sivic" ], "title": "Neighbourhood consensus networks", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "E. Rodolà", "L. Cosmo", "M.M. Bronstein", "A. Torsello", "D. Cremers" ], "title": "Partial functional correspondence", "venue": "Computer Graphics Forum,", "year": 2017 }, { "authors": [ "A. Sanfeliu", "K.S. Fu" ], "title": "A distance measure between attributed relational graphs for pattern recognition", "venue": "IEEE Transactions on Systems, Man, and Cybernetics,", "year": 1983 }, { "authors": [ "T. Sattler", "B. Leibe", "L. Kobbelt" ], "title": "SCRAMSAC: Improving RANSAC’s efficiency with a spatial consistency filter", "venue": "In ICCV,", "year": 2009 }, { "authors": [ "M.S. Schlichtkrull", "T.N. Kipf", "P. Bloem", "R. van den Berg", "I. Titov", "M. Welling" ], "title": "Modeling relational data with graph convolutional networks", "venue": "In ESWC,", "year": 2018 }, { "authors": [ "C. Schmid", "R. 
Mohr" ], "title": "Local grayvalue invariants for image retrieval", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 1997 }, { "authors": [ "R. Sharan", "T. Ideker" ], "title": "Modeling cellular machinery through biological network comparison", "venue": "Nature Biotechnology,", "year": 2006 }, { "authors": [ "K. Simonyan", "A. Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "R. Singh", "J. Xu", "B. Berger" ], "title": "Global alignment of multiple protein interaction networks with application to functional orthology detection", "venue": "In National Academy of Sciences,", "year": 2008 }, { "authors": [ "R. Sinkhorn", "P. Knopp" ], "title": "Concerning nonnegative matrices and doubly stochastic matrices", "venue": "Pacific Journal of Mathematics,", "year": 1967 }, { "authors": [ "J. Sivic", "A. Zisserman" ], "title": "Video Google: A text retrieval approach to object matching in videos", "venue": "In ICCV,", "year": 2003 }, { "authors": [ "N. Srivastava", "G.E. Hinton", "A. Krizhevsky", "I. Sutskever", "R. Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "M. Stauffer", "T. Tschachtli", "A. Fischer", "K. Riesen" ], "title": "A survey on applications of bipartite graph edit distance. In Graph-Based Representations in Pattern Recognition, 2017", "venue": null, "year": 2017 }, { "authors": [ "Z. Sun", "W. Hu", "C. Li" ], "title": "Cross-lingual entity alignment via joint attribute-preserving embedding", "venue": "In ISWC,", "year": 2017 }, { "authors": [ "Z. Sun", "W. Hu", "Q. Zhang", "Y. Qu" ], "title": "Bootstrapping entity alignment with knowledge graph embedding", "venue": "In IJCAI,", "year": 2018 }, { "authors": [ "P. Swoboda", "C. Rother", "H.A. Ahljaija", "D. Kainmueller", "B. Savchynskyy" ], "title": "A study of lagrangean decompositions and dual ascent solvers for graph matching", "venue": null, "year": 2017 }, { "authors": [ "G. Tinhofer" ], "title": "A note on compact graphs", "venue": "Discrete Applied Mathematics,", "year": 1991 }, { "authors": [ "S. Umeyama" ], "title": "An eigendecomposition approach to weighted graph matching problems", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 1988 }, { "authors": [ "P. Veličković", "G. Cucurull", "A. Casanova", "A. Romero", "P. Liò", "Y. Bengio" ], "title": "Graph attention networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "M. Vento", "P. Foggia" ], "title": "Graph matching techniques for computer vision", "venue": "Graph-Based Methods in Computer Vision: Developments and Applications,", "year": 2012 }, { "authors": [ "F. Wang", "N. Xue", "Y. Zhang", "G. Xia", "M. Pelillo" ], "title": "A functional representation for graph matching", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2019 }, { "authors": [ "R. Wang", "J. Yan", "X. Yang" ], "title": "Learning combinatorial embedding networks for deep graph matching", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Y. Wang", "J.M. Solomon" ], "title": "Deep closest point: Learning representations for point cloud registration", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Z. Wang", "Q. Lv", "X. Lan", "Y. 
Zhang" ], "title": "Cross-lingual knowledge graph alignment via graph convolutional networks", "venue": "In EMNLP,", "year": 2018 }, { "authors": [ "B. Weisfeiler", "A.A. Lehman" ], "title": "A reduction of a graph to a canonical form and an algebra arising during this reduction", "venue": "Nauchno-Technicheskaya Informatsia,", "year": 1968 }, { "authors": [ "Y. Wu", "X. Liu", "Y. Feng", "Z. Wang", "R. Yan", "D. Zhao" ], "title": "Relation-aware entity alignment for heterogeneous knowledge graphs", "venue": null, "year": 2019 }, { "authors": [ "H. Xu", "D. Luo", "L. Carin" ], "title": "Scalable Gromov-Wasserstein learning for graph partitioning and matching", "venue": "CoRR, abs/1905.07645,", "year": 2019 }, { "authors": [ "H. Xu", "D. Luo", "H. Zha", "L. Carin" ], "title": "Gromov-wasserstein learning for graph matching and node embedding", "venue": "In ICML,", "year": 2019 }, { "authors": [ "K. Xu", "C. Li", "Y. Tian", "T. Sonobe", "K. Kawarabayashi", "S. Jegelka" ], "title": "Representation learning on graphs with jumping knowledge", "venue": null, "year": 2018 }, { "authors": [ "K. Xu", "W. Hu", "J. Leskovec", "S. Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "K. Xu", "L. Wang", "M. Yu", "Y. Feng", "Y. Song", "Z. Wang", "D. Yu" ], "title": "Cross-lingual knowledge graph alignment via graph matching neural network", "venue": "In ACL,", "year": 2019 }, { "authors": [ "J. Yan", "X.C. Yin", "W. Lin", "C. Deng", "H. Zha", "X. Yang" ], "title": "A short survey of recent advances in graph matching", "venue": "ICMR,", "year": 2016 }, { "authors": [ "A. Zanfir", "C. Sminchisescu" ], "title": "Deep learning of graph matching", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "M. Zaslavskiy", "F. Bach", "J.P. Vert" ], "title": "A path following algorithm for the graph matching problem", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2009 }, { "authors": [ "S.H. Zhang", "Tong" ], "title": "FINAL: fast attributed network alignment", "venue": "In SIGKDD,", "year": 2016 }, { "authors": [ "J. Zhang", "S.Y. Philip" ], "title": "Multiple anonymized social networks alignment", "venue": "In ICDM,", "year": 2015 }, { "authors": [ "W. Zhang", "K. Shu", "H. Liu", "Y. Wang" ], "title": "Graph neural networks for user identity linkage", "venue": "CoRR, abs/1903.02174,", "year": 2019 }, { "authors": [ "Y. Zhang", "A. Prügel-Bennett", "J. Hare" ], "title": "Learning representations of sets through optimized permutations", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Z. Zhang", "W.S. Lee" ], "title": "Deep graphical feature learning for the feature matching problem", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Z. Zhang", "Y. Xiang", "L. Wu", "B. Xue", "A. Nehorai" ], "title": "KerGM: Kernelized graph matching", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "F. Zhou", "F. De la Torre" ], "title": "Factorized graph matching", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "J.Y. Zhu", "T. Park", "P. Isola", "A.A. Efros" ], "title": "Unpaired image-to-image translation using cycleconsistent adversarial networks", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Q. Zhu", "X. Zhou", "J. Wu", "J. Tan", "L. 
Guo" ], "title": "Neighborhood-aware attentional representation for multilingual knowledge graphs", "venue": "In IJCAI,", "year": 2019 }, { "authors": [ "D Gt" ], "title": "COMPARISON TO THE GRADUATED ASSIGNMENT ALGORITHM As stated in Section 3.3, our algorithm can be viewed as a generalization of the graduated assignment algorithm (Gold & Rangarajan, 1996) extending it by trainable parameters. To evaluate the impact of a trainable refinement procedure, we replicated the experiments of Sections", "venue": null, "year": 1996 }, { "authors": [ "Kriege" ], "title": "common subgraph isomorphism problem is studied, which asks for the largest graph that is contained as subgraph in two given graphs. The problem is NP-hard in general and remains so even in trees (Garey & Johnson, 1979) unless the common subgraph is required to be connected (Matula, 1978). Moreover, most variants of the problem are difficult to approximate with theoretical guarantees (Kann", "venue": null, "year": 1992 }, { "authors": [ "Yan" ], "title": "2016) for a more detailed discussion. There is a long line of research trying to minimize Equation (12) for S ∈ [0, 1]n×n by a Frank-Wolfe type algorithm (Jaggi, 2013) and finally projecting the fractional solution to P (Gold", "venue": null, "year": 1996 }, { "authors": [ "Kersting" ], "title": "Gt if and only if there is no fractional S such that the objective function in Equation", "venue": null, "year": 2014 }, { "authors": [ "Zhou", "De la Torre" ], "title": "2016) proposed to factorize the affinity matrix into smaller matrices and incorporated global geometric constraints. Zhang et al. (2019c) studied kernelized graph matching, where the node and edge similarities are kernels, which allows to express the graph matching problem again as Koopmans-Beckmann’s", "venue": null, "year": 2019 }, { "authors": [ "Singh" ], "title": "addition a similarity function between pairs of nodes is given. Most algorithms follow a two step approach: First, an n×n node-to-node similarity matrix M is computed from the given similarity function and the topology of the two graphs. Then, in the second step, an alignment is computed by solving the assignment problem for M", "venue": null, "year": 2008 }, { "authors": [ "Caetano" ], "title": "The techniques briefly summarized above aim to find an optimal correspondence according to a clearly defined objective function. In practical applications, it is often difficult to specify node and edge similarity functions. Recently, it has been proposed to learn such functions for a specific task, e.g., in form of a cost model for the graph edit distance (Cortés et al., 2019)", "venue": null, "year": 2009 }, { "authors": [ "Wang" ], "title": "2019) develop supervised deep graph matching networks based on displacement and combinatorial objectives, respectively. Zanfir & Sminchisescu (2018) model the graph matching affinity via a differentiable, but unlearnable spectral graph matching solver (Leordeanu", "venue": null, "year": 2020 }, { "authors": [ "Xu" ], "title": "2019b) tackles the problem of graph matching by relating it to the Gromov-Wasserstein discrepancy (Peyré et al., 2016). In addition, the optimal transport objective is enhanched by simultaneously learning node embeddings which shall account for the noise in both graphs. In a follow-up work, Xu et al. (2019a) extend this concept to the tasks of multi-graph partioning and matching", "venue": null, "year": 2019 }, { "authors": [ "Wang" ], "title": "Intra- and inter-graph message passing. 
The concept of enhanching intra-graph node embeddings by inter-graph node embeddings has been already heavily investigated in practice (Li et al., 2019", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graph matching refers to the problem of establishing meaningful structural correspondences of nodes between two or more graphs by taking both node similarities and pairwise edge similarities into account (Wang et al., 2019b). Since graphs are natural representations for encoding relational data, the problem of graph matching lies at the heart of many real-world applications. For example, comparing molecules in cheminformatics (Kriege et al., 2019b), matching protein networks in bioinformatics (Sharan & Ideker, 2006; Singh et al., 2008), linking user accounts in social network analysis (Zhang & Philip, 2015), and tracking objects, matching 2D/3D shapes or recognizing actions in computer vision (Vento & Foggia, 2012) can be formulated as a graph matching problem.\nThe problem of graph matching has been heavily investigated in theory (Grohe et al., 2018) and practice (Conte et al., 2004), usually by relating it to domain-agnostic distances such as the graph edit distance (Stauffer et al., 2017) and the maximum common subgraph problem (Bunke & Shearer, 1998), or by formulating it as a quadratic assignment problem (Yan et al., 2016). Since all three approaches are NP-hard, solving them to optimality may not be tractable for large-scale, real-world instances. Moreover, these purely combinatorial approaches do not adapt to the given data distribution and often do not consider continuous node embeddings which can provide crucial information about node semantics.\nRecently, various neural architectures have been proposed to tackle the task of graph matching (Zanfir & Sminchisescu, 2018; Wang et al., 2019b; Zhang & Lee, 2019; Xu et al., 2019d;b; Derr et al., 2019; Zhang et al., 2019a; Heimann et al., 2018) or graph similarity (Bai et al., 2018; 2019; Li et al., 2019) in a data-dependent fashion. However, these approaches are either only capable of computing similarity scores between whole graphs (Bai et al., 2018; 2019; Li et al., 2019), rely on an inefficient global matching procedure (Zanfir & Sminchisescu, 2018; Wang et al., 2019b; Xu et al., 2019d; Li et al., 2019), or do not generalize to unseen graphs (Xu et al., 2019b; Derr et al., 2019; Zhang et al., 2019a). Moreover, they might be prone to match neighborhoods between graphs\n3Correspondence to matthias.fey@udo.edu 4Work done during an internship at NNAISENSE\ninconsistently by only taking localized embeddings into account (Zanfir & Sminchisescu, 2018; Wang et al., 2019b; Zhang & Lee, 2019; Xu et al., 2019d; Derr et al., 2019; Heimann et al., 2018).\nHere, we propose a fully-differentiable graph matching procedure which aims to reach a data-driven neighborhood consensus between matched node pairs without the need to solve any optimization problem during inference. In addition, our approach is purely local, i.e., it operates on fixed-size neighborhoods around nodes, and is sparsity-aware, i.e., it takes the sparsity of the underlying structures into account. Hence, our approach scales well to large input domains, and can be trained in an end-to-end fashion to adapt to a given data distribution. Finally, our approach improves upon the state-of-the-art on several real-world applications from the fields of computer vision and entity alignment on knowledge graphs." }, { "heading": "2 PROBLEM DEFINITION", "text": "A graph G = (V,A,X,E) consists of a finite set of nodes V = {1, 2, . . 
.}, an adjacency matrix A ∈ {0, 1}^{|V|×|V|}, a node feature matrix X ∈ R^{|V|×·}, and an optional (sparse) edge feature matrix E ∈ R^{|V|×|V|×·}. For a subset of nodes S ⊆ V, G[S] = (S, A_{S,S}, X_{S,:}, E_{S,S,:}) denotes the subgraph of G induced by S. We refer to N_T(i) = {j ∈ V : d(i, j) ≤ T} as the T-hop neighborhood around node i ∈ V, where d : V × V → N denotes the shortest-path distance in G. A node coloring is a function V → Σ with arbitrary codomain Σ. The problem of graph matching refers to establishing node correspondences between two graphs. Formally, we are given two graphs, a source graph G_s = (V_s, A_s, X_s, E_s) and a target graph G_t = (V_t, A_t, X_t, E_t), w.l.o.g. |V_s| ≤ |V_t|, and are interested in finding a correspondence matrix S ∈ {0, 1}^{|V_s|×|V_t|} which minimizes an objective subject to the one-to-one mapping constraints Σ_{j∈V_t} S_{i,j} = 1 ∀i ∈ V_s and Σ_{i∈V_s} S_{i,j} ≤ 1 ∀j ∈ V_t. As a result, S infers an injective mapping π : V_s → V_t which maps each node in G_s to a node in G_t. Typically, graph matching is formulated as an edge-preserving, quadratic assignment problem (Anstreicher, 2003; Gold & Rangarajan, 1996; Caetano et al., 2009; Cho et al., 2013), i.e.,
argmax_S Σ_{i,i′∈V_s} Σ_{j,j′∈V_t} A^{(s)}_{i,i′} A^{(t)}_{j,j′} S_{i,j} S_{i′,j′} (1)
subject to the one-to-one mapping constraints mentioned above. This formulation is based on the intuition of finding correspondences based on neighborhood consensus (Rocco et al., 2018), which shall prevent adjacent nodes in the source graph from being mapped to different regions in the target graph. Formally, a neighborhood consensus is reached if for all node pairs (i, j) ∈ V_s × V_t with S_{i,j} = 1, it holds that for every node i′ ∈ N_1(i) there exists a node j′ ∈ N_1(j) such that S_{i′,j′} = 1.
In this work, we consider the problem of supervised and semi-supervised matching of graphs while employing the intuition of neighborhood consensus as an inductive bias in our model. In the supervised setting, we are given pair-wise ground-truth correspondences for a set of graphs and want our model to generalize to unseen graph pairs. In the semi-supervised setting, source and target graphs are fixed, and ground-truth correspondences are only given for a small subset of nodes. However, we are allowed to make use of the complete graph structures." }, { "heading": "3 METHODOLOGY", "text": "In the following, we describe our proposed end-to-end, deep graph matching architecture in detail. See Figure 1 for a high-level illustration. The method consists of two stages: a local feature matching procedure followed by an iterative refinement strategy using synchronous message passing networks. The aim of the feature matching step, see Section 3.1, is to compute initial correspondence scores based on the similarity of local node embeddings. The second step is an iterative refinement strategy, see Sections 3.2 and 3.3, which aims to reach neighborhood consensus for correspondences using a differentiable validator for graph isomorphism. Finally, in Section 3.4, we show how to scale our method to large, real-world inputs." }, { "heading": "3.1 LOCAL FEATURE MATCHING", "text": "We model our local feature matching procedure in close analogy to related approaches (Bai et al., 2018; 2019; Wang et al., 2019b; Zhang & Lee, 2019; Wang & Solomon, 2019) by computing similarities between nodes in the source graph G_s and the target graph G_t based on node embeddings.
That is, given latent node embeddings H_s = Ψ_{θ1}(X_s, A_s, E_s) ∈ R^{|V_s|×·} and H_t = Ψ_{θ1}(X_t, A_t, E_t) ∈ R^{|V_t|×·} computed by a shared neural network Ψ_{θ1} for source graph G_s and target graph G_t, respectively, we obtain initial soft correspondences as
S^{(0)} = sinkhorn(Ŝ^{(0)}) ∈ [0, 1]^{|V_s|×|V_t|} with Ŝ^{(0)} = H_s H_t^⊤ ∈ R^{|V_s|×|V_t|}.
Here, sinkhorn normalization is applied to obtain rectangular doubly-stochastic correspondence matrices that fulfill the constraints Σ_{j∈V_t} S_{i,j} = 1 ∀i ∈ V_s and Σ_{i∈V_s} S_{i,j} ≤ 1 ∀j ∈ V_t (Sinkhorn & Knopp, 1967; Adams & Zemel, 2011; Cour et al., 2006).
We interpret the i-th row vector S^{(0)}_{i,:} ∈ [0, 1]^{|V_t|} as a discrete distribution over potential correspondences in G_t for each node i ∈ V_s. We train Ψ_{θ1} in a discriminative, supervised fashion against ground-truth correspondences π_gt(·) by minimizing the negative log-likelihood of correct correspondence scores L^{(initial)} = −Σ_{i∈V_s} log(S^{(0)}_{i,π_gt(i)}). We implement Ψ_{θ1} as a Graph Neural Network (GNN) to obtain localized, permutation equivariant vectorial node representations (Bronstein et al., 2017; Hamilton et al., 2017; Battaglia et al., 2018; Goyal & Ferrara, 2018). Formally, a GNN follows a neural message passing scheme (Gilmer et al., 2017) and updates its node features h_i^{(t−1)} in layer t by aggregating localized information via
a_i^{(t)} = AGGREGATE^{(t)}({{ (h_j^{(t−1)}, e_{j,i}) : j ∈ N_1(i) }}),  h_i^{(t)} = UPDATE^{(t)}(h_i^{(t−1)}, a_i^{(t)}) (2)
where h_i^{(0)} = x_i ∈ X and {{. . .}} denotes a multiset. The recent work in the fields of geometric deep learning and relational representation learning provides a large number of operators to choose from (Kipf & Welling, 2017; Gilmer et al., 2017; Veličković et al., 2018; Schlichtkrull et al., 2018; Xu et al., 2019c), which allows for precise control of the properties of extracted features." }, { "heading": "3.2 SYNCHRONOUS MESSAGE PASSING FOR NEIGHBORHOOD CONSENSUS", "text": "Due to the purely local nature of the used node embeddings, our feature matching procedure is prone to finding false correspondences which are locally similar to the correct one. Formally, those cases pose a violation of the neighborhood consensus criteria employed in Equation (1). Since finding a global optimum is NP-hard, we aim to detect violations of the criteria in local neighborhoods and resolve them in an iterative fashion.
We utilize graph neural networks to detect these violations in a neighborhood consensus step and iteratively refine correspondences S^{(l)}, l ∈ {0, . . . , L}, starting from S^{(0)}. Key to the proposed algorithm is the following observation: The soft correspondence matrix S ∈ [0, 1]^{|V_s|×|V_t|} is a map from the node function space L(G_s) = L(R^{|V_s|}) to the node function space L(G_t) = L(R^{|V_t|}). Therefore, we can use S to pass node functions x_s ∈ L(G_s), x_t ∈ L(G_t) along the soft correspondences by
x′_t = S^⊤ x_s and x′_s = S x_t (3)
to obtain functions x′_t ∈ L(G_t), x′_s ∈ L(G_s) in the other domain, respectively. Then, our consensus method works as follows: Using S^{(l)}, we first map node indicator functions, given as an injective node coloring V_s → {0, 1}^{|V_s|} in the form of an identity matrix I_{|V_s|}, from G_s to G_t. Then, we distribute this coloring in corresponding neighborhoods by performing synchronous message passing on both graphs via a shared graph neural network Ψ_{θ2}, i.e.,
O_s = Ψ_{θ2}(I_{|V_s|}, A_s, E_s) and O_t = Ψ_{θ2}(S^{(l)⊤} I_{|V_s|}, A_t, E_t).
(4)\nWe can compare the results of both GNNs to recover a vector ~di,j = ~o (s) i −~o (t) j which measures the neighborhood consensus between node pairs (i, j) ∈ Vs ×Vt. This measure can be used to perform trainable updates of the correspondence scores\nS (l+1) i,j = sinkhorn(Ŝ (l+1))i,j with Ŝ (l+1) i,j = Ŝ (l) i,j + Φθ3( ~dj,i) (5) based on an MLP Φθ3 . The process can be applied L times to iteratively improve the consensus in neighborhoods. The final objective L = L (initial) + L (refined) with L (refined) = −∑i∈Vs log(S(L)i,πgt(i)) combines both the feature matching error and neighborhood consensus error. This objective is fullydifferentiable and can hence be optimized in an end-to-end fashion using stochastic gradient descent. Overall, the consensus stage distributes global node colorings to resolve ambiguities and false matchings made in the first stage of our architecture by only using purely local operators. Since an initial matching is needed to test for neighborhood consensus, this task cannot be fulfilled by Ψθ1 alone, which stresses the importance of our two-stage approach.\nThe following two theorems show that ~di,j is a good measure of how well local neighborhoods around i and j are matched by the soft correspondence between Gs and Gt. The proofs can be found in Appendix B and C, respectively. Theorem 1. Let Gs and Gt be two isomorphic graphs and let Ψθ2 be a permutation equivariant GNN, i.e., P>Ψθ2(X,A) = Ψθ2(P\n>X,P>AP ) for any permutation matrix P ∈ {0, 1}|V|×|V|. If S ∈ {0, 1}|Vs|×|Vt| encodes an isomorphism between Gs and Gt, then ~di,π(i) = ~0 for all i ∈ Vs. Theorem 2. Let Gs and Gt be two graphs and let Ψθ2 be a permutation equivariant and T -layered GNN for which both AGGREGATE(t) and UPDATE(t) are injective for all t ∈ {1, . . . , T}. If ~di,j = ~0, then the resulting submatrix SNT (i),NT (j) ∈ [0, 1]\n|NT (i)|×|NT (j)| is a permutation matrix describing an isomorphism between the T -hop subgraph Gs[NT (i)] around i ∈ Vs and the T -hop subgraph Gt[NT (j)] around j ∈ Vt. Moreover, if ~di,argmaxSi,: = ~0 for all i ∈ Vs, then S denotes a full isomorphism between Gs and Gt. Hence, a GNN Ψθ2 that satisfies both criteria in Theorem 1 and 2 provides equal node embeddings ~o\n(s) i and ~o (t) j if and only if nodes in a local neighborhood are correctly matched to each other. A value ~di,j 6= ~0 indicates the existence of inconsistent matchings in the local neighborhoods around i and j, and can hence be used to refine the correspondence score Ŝi,j .\nNote that both requirements, permutation equivariance and injectivity, are easily fulfilled: (1) All common graph neural network architectures following the message passing scheme of Equation (2) are equivariant due to the use of permutation invariant neighborhood aggregators. (2) Injectivity of graph neural networks is a heavily discussed topic in recent literature. It can be fulfilled by using a GNN that is as powerful as the Weisfeiler & Lehman (1968) (WL) heuristic in distinguishing graph structures, e.g., by using sum aggregation in combination with MLPs on the multiset of neighboring node features, cf. (Xu et al., 2019c; Morris et al., 2019)." }, { "heading": "3.3 RELATION TO THE GRADUATED ASSIGNMENT ALGORITHM", "text": "Theoretically, we can relate our proposed approach to classical graph matching techniques that consider a doubly-stochastic relaxation of the problem defined in Equation (1), cf. (Lyzinski et al., 2016) and Appendix F for more details. 
A seminal work following this method is the graduated assignment algorithm (Gold & Rangarajan, 1996). By starting from an initial feasible solution S(0), a new solution S(l+1) is iteratively computed from S(l) by approximately solving a linear assignment problem according to\nS(l+1) ← softassign S ∑ i∈Vs ∑ j∈Vt Qi,jSi,j with Qi,j = 2 ∑ i′∈Vs ∑ j′∈Vt A (s) i,i′A (t) j,j′S (l) i′,j′ (6)\nwhere Q denotes the gradient of Equation (1) at S(l).1 The softassign operator is implemented by applying sinkhorn normalization on rescaled inputs, where the scaling factor grows in every iteration to increasingly encourage integer solutions. Our approach also resembles the approximation of the linear assignment problem via sinkhorn normalization.\nMoreover, the gradient Q is closely related to our neighborhood consensus scheme for the particular simple, non-trainable GNN instantiation Ψ(X,A,E) = AX . Given Os = AsI|Vs| = As and Ot = AtS >I|Vs| = AtS >, we obtain Q = 2OsO>t by substitution. Instead of updating S (l) based on the similarity between Os and Ot obtained from a fixed-function GNN Ψ, we choose to update correspondence scores via trainable neural networks Ψθ2 and Φθ3 based on the difference between Os and Ot. This allows us to interpret our model as a deep parameterized generalization of the graduated assignment algorithm. In addition, specifying node and edge attribute similarities in graph matching is often difficult and complicates its computation (Zhou & De la Torre, 2016; Zhang et al., 2019c), whereas our approach naturally supports continuous node and edge features via established GNN models. We experimentally verify the benefits of using trainable neural networks Ψθ2 instead of Ψ(X,A,E) = AX in Appendix D." }, { "heading": "3.4 SCALING TO LARGE INPUT", "text": "We apply a number of optimizations to our proposed algorithm to make it scale to large input domains. See Algorithm 1 in Appendix A for the final optimized algorithm.\nSparse correspondences. We propose to sparsify initial correspondences S(0) by filtering out low score correspondences before neighborhood consensus takes place. That is, we sparsify S(0) by computing top k correspondences with the help of the KEOPS library (Charlier et al., 2019) without ever storing its dense version, reducing its required memory footprint fromO(|Vs||Vt|) toO(k|Vs|). In addition, the time complexity of the refinement phase is reduced from O(|Vs||Vt| + |Es| + |Et|) toO(k|Vs|+ |Es|+ |Et|), where |Es| and |Et| denote the number of edges in Gs and Gt, respectively. Note that sparsifying initial correspondences assumes that the feature matching procedure ranks the correct correspondence within the top k elements for each node i ∈ Vs. Hence, also optimizing the initial feature matching loss L (initial) is crucial, and can be further accelerated by training only against sparsified correspondences with ground-truth entries topk(S (0) i,: ) ∪ {S (0) i,πgt(i) }.\nReplacing node indicators functions. Although applying Ψθ2 on node indicator functions I|Vs| is computationally efficient, it requires a parameter complexity of O(|Vs|). Hence, we propose to replace node indicator functions I|Vs| with randomly drawn node functions R (l) s ∼ N (0, 1), where R(l)s ∈ R|Vs|×r with r |Vs|, in iteration l. By sampling from a continuous distribution, node indicator functions are still guaranteed to be injective (DeGroot & Schervish, 2012). Note that Theorem 1 still holds because it does not impose any restrictions on the function space L(Gs). 
Theorem 2 does not necessarily hold anymore, but we expect our refinement strategy to resolve any ambiguities by re-sampling R^{(l)}_s in every iteration l. We verify this empirically in Section 4.1.
(Footnote 1, from Section 3.3: For clarity of presentation, we closely follow the original formulation of the method for simple graphs but ignore the edge similarities and adapt the constant factor of the gradient according to our objective function.)
Softmax normalization. The sinkhorn normalization fulfills the requirements of rectangular doubly-stochastic solutions. However, it may eventually push correspondences to inconsistent integer solutions very early on, from which the neighborhood consensus method cannot effectively recover. Furthermore, it is inherently inefficient to compute and runs the risk of vanishing gradients ∂S^{(l)}/∂Ŝ^{(l)} (Zhang et al., 2019b). Here, we propose to relax this constraint by only applying row-wise softmax normalization on Ŝ^{(l)}, and expect our supervised refinement procedure to naturally resolve violations of Σ_{i∈V_s} S_{i,j} ≤ 1 on its own by re-ranking false correspondences via neighborhood consensus. Experimentally, we show that row-wise normalization is sufficient for our algorithm to converge to the correct solution, cf. Section 4.1.
Number of refinement iterations. Instead of holding L fixed, we propose to use different numbers of refinement iterations L^{(train)} and L^{(test)}, with L^{(train)} ≪ L^{(test)}, for training and testing, respectively. This not only speeds up training, but also encourages the refinement procedure to reach convergence with as few steps as necessary, while we can run the refinement procedure until convergence during testing. We show empirically that decreasing L^{(train)} does not affect the convergence abilities of our neighborhood consensus procedure during testing, cf. Section 4.1." }, { "heading": "4 EXPERIMENTS", "text": "We verify our method on three different tasks. We first show the benefits of our approach in an ablation study on synthetic graphs (Section 4.1), and apply it to the real-world tasks of supervised keypoint matching in natural images (Sections 4.2 and 4.3) and semi-supervised cross-lingual knowledge graph alignment (Section 4.4) afterwards. All dataset statistics can be found in Appendix H.
Our method is implemented in PYTORCH (Paszke et al., 2017) using the PYTORCH GEOMETRIC (Fey & Lenssen, 2019) and the KEOPS (Charlier et al., 2019) libraries. Our implementation can process sparse mini-batches with parallel GPU acceleration and minimal memory footprint in all algorithm steps. For all experiments, optimization is done via ADAM (Kingma & Ba, 2015) with a fixed learning rate of 10^{−3}. We use similar architectures for Ψ_{θ1} and Ψ_{θ2}, except that we omit dropout (Srivastava et al., 2014) in Ψ_{θ2}. For all experiments, we report Hits@k to evaluate and compare our model to previous lines of work, where Hits@k measures the proportion of correctly matched entities ranked in the top k." }, { "heading": "4.1 ABLATION STUDY ON SYNTHETIC GRAPHS", "text": "In our first experiment, we evaluate our method on synthetic graphs where we aim to learn a matching for pairs of graphs in a supervised fashion. Each pair of graphs consists of an undirected Erdős & Rényi (1959) graph G_s with |V_s| ∈ {50, 100} nodes and edge probability p ∈ {0.1, 0.2}, and a target graph G_t which is constructed from G_s by removing edges with probability p_s without disconnecting any nodes (Heimann et al., 2018). Training and evaluation are done on 1,000 graphs each for different configurations p_s ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5}. In Appendix E, we perform additional experiments to also verify the robustness of our approach towards node addition or removal.
Architecture and parameters. We implement the graph neural network operators Ψ_{θ1} and Ψ_{θ2} by stacking three layers (T = 3) of the GIN operator (Xu et al., 2019c)
h_i^{(t+1)} = MLP^{(t+1)}( (1 + ε^{(t+1)}) · h_i^{(t)} + Σ_{j→i} h_j^{(t)} ) (7)
due to its expressiveness in distinguishing raw graph structures. The number of layers and hidden dimensionality of all MLPs is set to 2 and 32, respectively, and we apply ReLU activation (Glorot et al., 2011) and Batch normalization (Ioffe & Szegedy, 2015) after each of its layers. Input features are initialized with one-hot encodings of node degrees. We employ a Jumping Knowledge style concatenation h_i = W[h_i^{(1)}, . . . , h_i^{(T)}] (Xu et al., 2018) to compute final node representations h_i. We train and test our procedure with L^{(train)} = 10 and L^{(test)} = 20 refinement iterations, respectively.
Results. Figures 2(a) and 2(b) show the matching accuracy Hits@1 for different choices of |V_s| and p. We observe that the purely local matching approach via softmax(Ŝ^{(0)}) starts decreasing in
Training and evaluation is done on 1 000 graphs each for different configurations ps ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5}. In Appendix E, we perform additional experiments to also verify the robustness of our approach towards node addition or removal.\nArchitecture and parameters. We implement the graph neural network operators Ψθ1 and Ψθ2 by stacking three layers (T = 3) of the GIN operator (Xu et al., 2019c)\n~h (t+1) i = MLP (t+1)\n(( 1 + (t+1) ) · ~h(t)i + ∑ j→i ~h (t) j ) (7)\ndue to its expressiveness in distinguishing raw graph structures. The number of layers and hidden dimensionality of all MLPs is set to 2 and 32, respectively, and we apply ReLU activation (Glorot et al., 2011) and Batch normalization (Ioffe & Szegedy, 2015) after each of its layers. Input features are initialized with one-hot encodings of node degrees. We employ a Jumping Knowledge style concatenation ~hi = W [~h (1) i , . . . , ~h (T ) i ] (Xu et al., 2018) to compute final node representations ~hi. We train and test our procedure with L(train) = 10 and L(test) = 20 refinement iterations, respectively.\nResults. Figures 2(a) and 2(b) show the matching accuracy Hits@1 for different choices of |Vs| and p. We observe that the purely local matching approach via softmax(Ŝ(0)) starts decreasing in\nperformance with the structural noise ps increasing. This also holds when applying global sinkhorn normalization on Ŝ(0). However, our proposed two-stage architecture can recover all correspondences, independent of the applied structural noise ps. This applies to both variants discussed in the previous sections, i.e., our initial formulation sinkhorn(Ŝ(L)), and our optimized architecture using random node indicator sampling and row-wise normalization softmax(Ŝ(L)). This highlights the overall benefits of applying matching consensus and justifies the usage of the enhancements made towards scalability in Section 3.4.\nIn addition, Figure 2(c) visualizes the test error L (refined) for varying number of iterations L(test). We observe that even when training to non-convergence, our procedure is still able to converge by increasing the number of iterations L(test) during testing.\nMoreover, Figure 2(d) shows the performance of our refinement strategy when operating on sparsified top k correspondences. In contrast to its dense version, it cannot match all nodes correctly due to the poor initial feature matching quality. However, it consistently converges to the perfect solution of Hits@1 ≈ Hits@k in case the correct match is included in the initial top k ranking of correspondences. Hence, with increasing k, we can recover most of the correct correspondences, making it an excellent option to scale our algorithm to large graphs, cf. Section 4.4." }, { "heading": "4.2 SUPERVISED KEYPOINT MATCHING IN NATURAL IMAGES", "text": "We perform experiments on the PASCALVOC (Everingham et al., 2010) with Berkeley annotations (Bourdev & Malik, 2009) and WILLOW-OBJECTCLASS (Cho et al., 2013) datasets which contain sets of image categories with labeled keypoint locations. For PASCALVOC, we follow the experimental setups of Zanfir & Sminchisescu (2018) and Wang et al. (2019b) and use the training and test splits provided by Choy et al. (2016). We pre-filter the dataset to exclude difficult, occluded and truncated objects, and require examples to have at least one keypoint, resulting in 6 953 and 1 671 annotated images for training and testing, respectively. 
The PASCALVOC dataset contains instances of varying scale, pose and illumination, and the number of keypoints per image ranges from 1 to 19. In contrast, the WILLOW-OBJECTCLASS dataset contains at least 40 images with consistent orientations for each of its five categories, and each image consists of exactly 10 keypoints. Following the experimental setup of peer methods (Cho et al., 2013; Wang et al., 2019b), we pre-train our model on PASCALVOC and fine-tune it over 20 random splits with 20 per-class images used for training. We construct graphs via the Delaunay triangulation of keypoints. For fair comparison with Zanfir & Sminchisescu (2018) and Wang et al. (2019b), input features of keypoints are given by the concatenated output of relu4_2 and relu5_1 of a VGG16 (Simonyan & Zisserman, 2014) pre-trained on IMAGENET (Deng et al., 2009).\nArchitecture and parameters. We adopt SPLINECNN (Fey et al., 2018) as our graph neural network operator\n$$\vec{h}_i^{(t+1)} = \sigma\Big(W^{(t+1)} \vec{h}_i^{(t)} + \sum_{j \to i} \Phi_\theta^{(t+1)}(\vec{e}_{j,i}) \cdot \vec{h}_j^{(t)}\Big) \qquad (8)$$\nwhose trainable B-spline based kernel function $\Phi_\theta(\cdot)$ is conditioned on edge features $\vec{e}_{j,i}$ between node pairs. To align our results with the related work, we evaluate both isotropic and anisotropic edge features, which are given as normalized relative distances and 2D Cartesian coordinates, respectively. For SPLINECNN, we use a kernel size of 5 in each dimension, a hidden dimensionality of 256, and apply ReLU as our non-linearity function $\sigma$. Our network architecture consists of two convolutional layers ($T = 2$), followed by dropout with probability 0.5, and a final linear layer.\nTable 1: Hits@1 (%) on the PASCALVOC dataset.\nCategories: Aero Bike Bird Boat Bottle Bus Car Cat Chair Cow Table Dog Horse M-Bike Person Plant Sheep Sofa Train TV | Mean\nGMN: 31.1 46.2 58.2 45.9 70.6 76.5 61.2 61.7 35.5 53.7 58.9 57.5 56.9 49.3 34.1 77.5 57.1 53.6 83.2 88.6 | 57.9\nPCA-GM: 40.9 55.0 65.8 47.9 76.9 77.9 63.5 67.4 33.7 66.5 63.6 61.3 58.9 62.8 44.9 77.5 67.4 57.5 86.7 90.9 | 63.8\n$\Psi_{\theta_1}$ = MLP (isotropic), L = 0: 34.7 42.6 41.5 50.4 50.3 72.2 60.1 59.4 24.6 38.1 86.2 47.7 56.3 37.6 35.4 58.0 45.8 74.8 64.1 75.3 | 52.8\n$\Psi_{\theta_1}$ = MLP (isotropic), L = 10: 45.8 58.2 45.5 57.6 68.2 82.1 75.3 60.2 31.7 52.9 88.2 56.2 68.2 50.7 46.5 66.3 58.8 89.0 85.1 79.9 | 63.3\n$\Psi_{\theta_1}$ = MLP (isotropic), L = 20: 45.3 57.1 54.9 54.7 71.7 82.6 75.3 65.9 31.6 50.8 86.1 56.9 67.1 53.1 49.2 77.3 59.2 91.7 82.0 84.2 | 64.8\n$\Psi_{\theta_1}$ = GNN (isotropic), L = 0: 44.3 62.0 48.4 53.9 73.3 80.4 72.2 64.2 30.3 52.7 79.4 56.6 62.3 56.2 47.5 74.0 59.8 79.9 81.9 83.0 | 63.1\n$\Psi_{\theta_1}$ = GNN (isotropic), L = 10: 46.5 63.7 54.9 60.9 79.4 84.1 76.4 68.3 38.5 61.5 80.6 59.7 69.8 58.4 54.3 76.4 64.5 95.7 87.9 81.3 | 68.1\n$\Psi_{\theta_1}$ = GNN (isotropic), L = 20: 50.1 65.4 55.7 65.3 80.0 83.5 78.3 69.7 34.7 60.7 70.4 59.9 70.0 62.2 56.1 80.2 70.3 88.8 81.1 84.3 | 68.3\n$\Psi_{\theta_1}$ = MLP (anisotropic), L = 0: 34.3 45.9 37.3 47.7 53.3 75.2 64.5 61.7 27.7 40.5 85.9 46.6 50.2 39.0 37.3 58.0 49.2 82.9 65.0 74.2 | 53.8\n$\Psi_{\theta_1}$ = MLP (anisotropic), L = 10: 44.6 51.2 50.7 58.5 72.3 83.3 76.6 65.6 31.0 57.5 91.7 55.4 69.5 56.2 47.5 85.1 57.9 92.3 86.7 85.9 | 66.0\n$\Psi_{\theta_1}$ = MLP (anisotropic), L = 20: 48.7 57.2 47.0 65.3 73.9 87.6 76.7 70.0 30.0 55.5 92.8 59.5 67.9 56.9 48.7 87.2 58.3 94.9 87.9 86.0 | 67.6\n$\Psi_{\theta_1}$ = GNN (anisotropic), L = 0: 42.1 57.5 49.6 59.4 83.8 84.0 78.4 67.5 37.3 60.4 85.0 58.0 66.0 54.1 52.6 93.9 60.2 85.6 87.8 82.5 | 67.3\n$\Psi_{\theta_1}$ = GNN (anisotropic), L = 10: 45.5 67.6 56.5 66.8 86.9 85.2 84.2 73.0 43.6 66.0 92.3 64.0 79.8 56.6 56.1 95.4 64.4 95.0 91.3 86.3 | 72.8\n$\Psi_{\theta_1}$ = GNN (anisotropic), L = 20: 47.0 65.7 56.8 67.6 86.9 87.7 85.3 72.6 42.9 69.1 84.5 63.8 78.1 55.6 58.4 98.0 68.4 92.2 94.5 85.5 | 73.0
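A minimal sketch of the two-layer SPLINECNN architecture described above, using the SplineConv operator of PYTORCH GEOMETRIC; the class name and the output dimensionality are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import SplineConv

class KeypointNet(torch.nn.Module):
    # T = 2 SplineCNN layers, hidden size 256, kernel size 5 per dimension,
    # dropout with probability 0.5 and a final linear layer, as in the text.
    def __init__(self, in_dim, hidden=256, out_dim=256, edge_dim=2):
        super().__init__()
        self.conv1 = SplineConv(in_dim, hidden, dim=edge_dim, kernel_size=5)
        self.conv2 = SplineConv(hidden, hidden, dim=edge_dim, kernel_size=5)
        self.lin = torch.nn.Linear(hidden, out_dim)

    def forward(self, x, edge_index, edge_attr):
        # edge_attr: anisotropic 2D Cartesian offsets (use edge_dim=1 for
        # isotropic normalized distances), rescaled to [0, 1].
        x = F.relu(self.conv1(x, edge_index, edge_attr))
        x = F.relu(self.conv2(x, edge_index, edge_attr))
        x = F.dropout(x, p=0.5, training=self.training)
        return self.lin(x)
```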
During training, we form pairs between any two training examples of the same category, and evaluate our model by sampling a fixed number of test graph pairs belonging to the same category.\nResults. We follow the experimental setup of Wang et al. (2019b) and train our models using the negative log-likelihood due to its superior performance in contrast to the displacement loss used in Zanfir & Sminchisescu (2018). We evaluate our complete architecture using isotropic and anisotropic GNNs for $L \in \{0, 10, 20\}$, and include ablation results obtained from using $\Psi_{\theta_1}$ = MLP for the local node matching procedure. Results of Hits@1 are shown in Tables 1 and 2 for PASCALVOC and WILLOW-OBJECTCLASS, respectively. We visualize qualitative results of our method in Appendix I.\nWe observe that our refinement strategy is able to significantly outperform competing methods as well as our non-refined baselines. On the WILLOW-OBJECTCLASS dataset, our refinement stage at least halves the error of the initial model ($L = 0$) across all categories. The benefits of the second stage are even more crucial when starting from a weaker initial feature matching baseline ($\Psi_{\theta_1}$ = MLP), with overall improvements of up to 14 percentage points on PASCALVOC. However, good initial matchings do help our consensus stage to improve its performance further, as indicated by the usage of task-specific isotropic or anisotropic GNNs for $\Psi_{\theta_1}$." }, { "heading": "4.3 SUPERVISED GEOMETRIC KEYPOINT MATCHING", "text": "We also verify our approach by tackling the geometric feature matching problem, where we only make use of point coordinates and no additional visual features are available. Here, we follow the experimental training setup of Zhang & Lee (2019), and test the generalization capabilities of our model on the PASCALPF dataset (Ham et al., 2016). For training, we generate a synthetic set of graph pairs: We first randomly sample 30–60 source points uniformly from $[-1, 1]^2$, and add Gaussian noise from $\mathcal{N}(0, 0.05^2)$ to these points to obtain the target points. Furthermore, we add 0–20 outliers from $[-1.5, 1.5]^2$ to each point cloud. Finally, we construct graphs by connecting each node with its $k$-nearest neighbors ($k = 8$); a sketch of this pair-generation procedure is given at the end of this subsection. We train our unmodified anisotropic keypoint architecture from Section 4.2 with input $\vec{x}_i = \vec{1} \in \mathbb{R}^1$ for all $i \in \mathcal{V}_s \cup \mathcal{V}_t$ until it has seen 32 000 synthetic examples.\nResults. We evaluate our trained model on the PASCALPF dataset (Ham et al., 2016), which consists of 1 351 image pairs within 20 classes, with the number of keypoints ranging from 4 to 17. Results of Hits@1 are shown in Table 3. Overall, our consensus architecture improves upon the state-of-the-art results of Zhang & Lee (2019) on almost all categories, while our $L = 0$ baseline is weaker than the results reported in Zhang & Lee (2019), showing the benefits of applying our consensus stage. In addition, this shows that our method also works well even when not taking any visual information into account.\nTable 3: Hits@1 (%) on the PASCALPF dataset.\nCategories: Aero Bike Bird Boat Bottle Bus Car Cat Chair Cow Table Dog Horse M-Bike Person Plant Sheep Sofa Train TV | Mean\nZhang & Lee (2019): 76.1 89.8 93.4 96.4 96.2 97.1 94.6 82.8 89.3 96.7 89.7 79.5 82.6 83.5 72.8 76.7 77.1 97.3 98.2 99.5 | 88.5\nOurs, L = 0: 69.2 87.7 77.3 90.4 98.7 98.3 92.5 91.6 94.7 79.4 95.8 90.1 80.0 79.5 72.5 98.0 76.5 89.6 93.4 97.8 | 87.6\nOurs, L = 10: 81.3 92.2 94.2 98.8 99.3 99.1 98.6 98.2 99.6 94.1 100.0 99.4 86.6 86.6 88.7 100.0 100.0 100.0 100.0 99.3 | 95.8\nOurs, L = 20: 81.1 92.0 94.7 100.0 99.3 99.3 98.9 97.3 99.4 93.4 100.0 99.1 86.3 86.2 87.7 100.0 100.0 100.0 100.0 99.3 | 95.7
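The synthetic pair-generation procedure referenced above can be sketched as follows; this is a simplified illustration under the stated sampling parameters, not the released training code.

```python
import torch

def synthetic_pair(num_outliers_max=20):
    """Sample one synthetic training pair: 30-60 source points in [-1, 1]^2,
    target points perturbed by N(0, 0.05^2) noise, 0-20 outliers per cloud
    from [-1.5, 1.5]^2, and k-NN graphs with k = 8."""
    n = torch.randint(30, 61, (1,)).item()
    src = torch.rand(n, 2) * 2 - 1                   # uniform on [-1, 1]^2
    tgt = src + 0.05 * torch.randn(n, 2)             # Gaussian perturbation
    def add_outliers(pts):
        m = torch.randint(0, num_outliers_max + 1, (1,)).item()
        out = torch.rand(m, 2) * 3 - 1.5             # uniform on [-1.5, 1.5]^2
        return torch.cat([pts, out], dim=0)
    src, tgt = add_outliers(src), add_outliers(tgt)
    def knn_graph(pts, k=8):
        d = torch.cdist(pts, pts)
        d.fill_diagonal_(float('inf'))               # exclude self-loops
        idx = d.topk(k, largest=False).indices       # k nearest neighbors
        row = torch.arange(pts.size(0)).repeat_interleave(k)
        return torch.stack([idx.reshape(-1), row])   # directed edges j -> i
    return (src, knn_graph(src)), (tgt, knn_graph(tgt))

(src, e_s), (tgt, e_t) = synthetic_pair()
print(src.shape, e_s.shape)
```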
" }, { "heading": "4.4 SEMI-SUPERVISED CROSS-LINGUAL KNOWLEDGE GRAPH ALIGNMENT", "text": "We evaluate our model on the DBP15K datasets (Sun et al., 2017), which link entities of the Chinese, Japanese and French knowledge graphs of DBPEDIA to the English version and vice versa. Each dataset contains exactly 15 000 links between equivalent entities, and we split those links into training and testing following previous works. For obtaining entity input features, we follow the experimental setup of Xu et al. (2019d): We retrieve monolingual FASTTEXT embeddings (Bojanowski et al., 2017) for each language separately, and afterwards align them into the same vector space (Lample et al., 2018). We use the sum of word embeddings as the final entity input representation (although more sophisticated approaches are just as conceivable).\nArchitecture and parameters. Our graph neural network operator mostly matches the one proposed in Xu et al. (2019d), where the direction of edges is retained, but not their specific relation type:\n$$\vec{h}_i^{(t+1)} = \sigma\Big(W_1^{(t+1)} \vec{h}_i^{(t)} + \sum_{j \to i} W_2^{(t+1)} \vec{h}_j^{(t)} + \sum_{i \to j} W_3^{(t+1)} \vec{h}_j^{(t)}\Big) \qquad (9)$$\nWe use ReLU followed by dropout with probability 0.5 as our non-linearity $\sigma$, and obtain final node representations via $\vec{h}_i = W_4[\vec{h}_i^{(1)}, \dots, \vec{h}_i^{(T)}]$. We use a three-layer GNN ($T = 3$) both for obtaining initial similarities and for refining alignments, with dimensionality 256 and 32, respectively. Training is performed using the negative log-likelihood in a semi-supervised fashion: For each training node $i \in \mathcal{V}_s$, we train $\mathcal{L}^{(\mathrm{initial})}$ sparsely by using the corresponding ground-truth node in $\mathcal{V}_t$, the top $k = 10$ entries in $S_{i,:}$ and $k$ randomly sampled entities in $\mathcal{V}_t$. For the refinement phase, we update the sparse top $k$ correspondence matrix $L = 10$ times. For efficiency reasons, we train $\mathcal{L}^{(\mathrm{initial})}$ and $\mathcal{L}^{(\mathrm{refined})}$ sequentially for 100 epochs each.\nResults. We report Hits@1 and Hits@10 to evaluate and compare our model to previous lines of work, see Table 4. In addition, we report the results of a simple three-layer MLP which matches nodes purely based on initial word embeddings, and a variant of our model without the refinement of initial correspondences ($L = 0$). Our approach improves upon the state-of-the-art on all categories, with gains of up to 9.38 percentage points. In addition, our refinement strategy consistently improves upon the Hits@1 of initial correspondences by a significant margin, while results of Hits@10 are shared due to the refinement operating only on the sparsified top 10 initial correspondences. Due to the scalability of our approach, we can easily apply a multitude of refinement iterations while still retaining large hidden feature dimensionalities.
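For concreteness, a minimal sketch of the operator from Equation (9); the class name and the use of index_add_ for aggregation are our own illustrative choices.

```python
import torch

class DirectedGNNLayer(torch.nn.Module):
    """Sketch of Equation (9): separate weights for the node itself, its
    incoming neighbors and its outgoing neighbors. edge_index is a
    [2, num_edges] tensor of (source, target) node indices."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, out_dim)
        self.w2 = torch.nn.Linear(in_dim, out_dim, bias=False)
        self.w3 = torch.nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, edge_index):
        src, dst = edge_index
        out = self.w1(h)
        # sum over incoming edges j -> i
        out.index_add_(0, dst, self.w2(h)[src])
        # sum over outgoing edges i -> j
        out.index_add_(0, src, self.w3(h)[dst])
        return torch.relu(out)

h = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])  # edges 0->1, 1->2, 2->3
print(DirectedGNNLayer(8, 16)(h, edge_index).shape)  # torch.Size([4, 16])
```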
" }, { "heading": "5 LIMITATIONS", "text": "Our experimental results demonstrate that the proposed approach effectively solves challenging real-world problems. However, the expressive power of GNNs is closely related to the WL heuristic for graph isomorphism testing (Xu et al., 2019c; Morris et al., 2019), whose power and limitations are well understood (Arvind et al., 2015). Our method generally inherits these limitations. Hence, one possible limitation is that whenever two nodes are assigned the same color by WL, our approach may fail to converge to one of the possible solutions. For example, there may exist two nodes $i, j \in \mathcal{V}_t$ with equal neighborhood sets $\mathcal{N}_1(i) = \mathcal{N}_1(j)$. One can easily see that the feature matching procedure then generates equal initial correspondence distributions $S^{(0)}_{:,i} = S^{(0)}_{:,j}$, resulting in the same mapped node indicator functions $I_{|\mathcal{V}_s|}^\top S^{(0)}_{:,i} = I_{|\mathcal{V}_s|}^\top S^{(0)}_{:,j}$ from $\mathcal{G}_s$ to nodes $i$ and $j$, respectively. Since both nodes share the same neighborhood, $\Psi_{\theta_2}$ also produces the same distributed functions $\vec{o}^{(t)}_i = \vec{o}^{(t)}_j$. As a result, both column vectors $\hat{S}^{(l)}_{:,i}$ and $\hat{S}^{(l)}_{:,j}$ receive the same update, leading to non-convergence. In theory, one might resolve these ambiguities by adding a small amount of noise to $\hat{S}^{(0)}$. However, the general amount of feature noise present in real-world datasets already ensures that this scenario is unlikely to occur." }, { "heading": "6 RELATED WORK", "text": "Identifying correspondences between the nodes of two graphs has been studied in various domains, and an extensive body of literature exists. Closely related problems are summarized under the terms maximum common subgraph (Kriege et al., 2019b), network alignment (Zhang, 2016), graph edit distance (Chen et al., 2019) and graph matching (Yan et al., 2016). We refer the reader to Appendix F for a detailed discussion of the related work on these problems. Recently, graph neural networks have become a focus of research, leading to various proposed deep graph matching techniques (Wang et al., 2019b; Zhang & Lee, 2019; Xu et al., 2019d; Derr et al., 2019). In Appendix G, we present a detailed overview of the related work in this field while highlighting individual differences and similarities to our proposed graph matching consensus procedure." }, { "heading": "7 CONCLUSION", "text": "We presented a two-stage neural architecture for learning node correspondences between graphs in a supervised or semi-supervised fashion. Our approach is aimed towards reaching a neighborhood consensus between matchings, and can resolve violations of this criterion in an iterative fashion. In addition, we proposed enhancements to let our algorithm scale to large input domains. We evaluated our architecture on real-world datasets, on which it consistently improved upon the state-of-the-art." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This work has been supported by the German Research Association (DFG) within the Collaborative Research Center SFB 876 Providing Information by Resource-Constrained Analysis, projects A6 and B2." }, { "heading": "A OPTIMIZED GRAPH MATCHING CONSENSUS ALGORITHM", "text": "Our final optimized algorithm is given in Algorithm 1:\nAlgorithm 1: Optimized graph matching consensus algorithm\nInput: $\mathcal{G}_s = (\mathcal{V}_s, A_s, X_s, E_s)$, $\mathcal{G}_t = (\mathcal{V}_t, A_t, X_t, E_t)$, hidden node dimensionality $d$, sparsity parameter $k$, number of consensus iterations $L$, number of random functions $r$\nOutput: Sparse soft correspondence matrix $S^{(L)} \in [0, 1]^{|\mathcal{V}_s| \times |\mathcal{V}_t|}$ with $k \cdot |\mathcal{V}_s|$ non-zero entries\n$H_s \leftarrow \Psi_{\theta_1}(X_s, A_s, E_s)$ // compute node embeddings $H_s$\n$H_t \leftarrow \Psi_{\theta_1}(X_t, A_t, E_t)$ // compute node embeddings $H_t$\n$\hat{S}^{(0)} \leftarrow H_s H_t^\top$ // local feature matching\n$\hat{S}^{(0)}_{i,:} \leftarrow \mathrm{top}_k(\hat{S}^{(0)}_{i,:})$ // sparsify to top $k$ candidates $\forall i \in \{1, \dots, |\mathcal{V}_s|\}$\nfor $l$ in $\{1, \dots, L\}$ do // $L \in \{L^{(\mathrm{train})}, L^{(\mathrm{test})}\}$\n  $S^{(l-1)}_{i,:} \leftarrow \mathrm{softmax}(\hat{S}^{(l-1)}_{i,:})$ // normalize scores $\forall i$\n  $R^{(l)}_s \sim \mathcal{N}(0, 1)$ // sample random node functions $R^{(l)}_s \in \mathbb{R}^{|\mathcal{V}_s| \times r}$\n  $R^{(l)}_t \leftarrow S^{(l-1)\top} R^{(l)}_s$ // map random node functions from $\mathcal{G}_s$ to $\mathcal{G}_t$\n  $O_s \leftarrow \Psi_{\theta_2}(R^{(l)}_s, A_s, E_s)$ // distribute function $R^{(l)}_s$ on $\mathcal{G}_s$\n  $O_t \leftarrow \Psi_{\theta_2}(R^{(l)}_t, A_t, E_t)$ // distribute function $R^{(l)}_t$ on $\mathcal{G}_t$\n  $\vec{d}_{i,j} \leftarrow \vec{o}^{(s)}_i - \vec{o}^{(t)}_j$ // compute neighborhood consensus measure\n  $\hat{S}^{(l)}_{i,j} \leftarrow \hat{S}^{(l-1)}_{i,j} + \Phi_{\theta_3}(\vec{d}_{i,j})$ // perform trainable correspondence update\nend for\n$S^{(L)}_{i,:} \leftarrow \mathrm{softmax}(\hat{S}^{(L)}_{i,:})$ // normalize scores $\forall i$\nreturn $S^{(L)}$
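To make the control flow of Algorithm 1 concrete, the following is a simplified dense PYTORCH sketch that omits the top-k sparsification and edge features; all names are illustrative, and the toy GNN merely stands in for the paper's deeper operators.

```python
import torch

class ToyGNN(torch.nn.Module):
    """Minimal permutation-equivariant stand-in for Psi_theta1 / Psi_theta2."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # aggregate neighbor features together with the node's own features
        return torch.relu(self.lin(adj @ x + x))

def consensus_match(psi1, psi2, phi, x_s, adj_s, x_t, adj_t, L=10, r=8):
    h_s, h_t = psi1(x_s, adj_s), psi1(x_t, adj_t)      # shared node embeddings
    s_hat = h_s @ h_t.t()                              # local feature matching
    for _ in range(L):
        s = torch.softmax(s_hat, dim=-1)               # row-wise normalization
        r_s = torch.randn(x_s.size(0), r)              # random node functions on G_s
        r_t = s.t() @ r_s                              # map functions to G_t
        o_s, o_t = psi2(r_s, adj_s), psi2(r_t, adj_t)  # distribute via message passing
        d = o_s[:, None, :] - o_t[None, :, :]          # consensus differences d_ij
        s_hat = s_hat + phi(d).squeeze(-1)             # trainable score update
    return torch.softmax(s_hat, dim=-1)

n_s, n_t, f, r_dim = 6, 7, 4, 8
psi1, psi2, phi = ToyGNN(f, 16), ToyGNN(r_dim, 16), torch.nn.Linear(16, 1)
S = consensus_match(psi1, psi2, phi, torch.randn(n_s, f), torch.eye(n_s),
                    torch.randn(n_t, f), torch.eye(n_t), L=3, r=r_dim)
print(S.shape)  # torch.Size([6, 7]), rows sum to one
```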
" }, { "heading": "B PROOF FOR THEOREM 1", "text": "Proof. Since $\Psi_{\theta_2}$ is permutation equivariant, it holds for any node feature matrix $X_s \in \mathbb{R}^{|\mathcal{V}_s| \times \cdot}$ that $\Psi_{\theta_2}(S^\top X_s, S^\top A_s S) = S^\top \Psi_{\theta_2}(X_s, A_s)$. With $X_t = S^\top X_s$ and $A_t = S^\top A_s S$, it follows that\n$$O_t = \Psi_{\theta_2}(X_t, A_t) = \Psi_{\theta_2}(S^\top X_s, S^\top A_s S) = S^\top \Psi_{\theta_2}(X_s, A_s) = S^\top O_s.$$\nHence, $\vec{o}^{(s)}_i = (S^\top O_s)_{\pi(i)} = \vec{o}^{(t)}_{\pi(i)}$ for any node $i \in \mathcal{V}_s$, resulting in $\vec{d}_{i,\pi(i)} = \vec{0}$." }, { "heading": "C PROOF FOR THEOREM 2", "text": "Proof. Let $\vec{d}_{i,j} = \vec{o}^{(s)}_i - \vec{o}^{(t)}_j = \vec{0}$. Then, the $T$-layered GNN $\Psi_{\theta_2}$ maps both $T$-hop neighborhoods around the nodes $i \in \mathcal{V}_s$ and $j \in \mathcal{V}_t$ to the same vectorial representation:\n$$\vec{o}^{(s)}_i = \Psi_{\theta_2}\big(I^{|\mathcal{V}_s|}_{\mathcal{N}_T(i),:},\, A^{(s)}_{\mathcal{N}_T(i),\mathcal{N}_T(i)}\big)_i = \Psi_{\theta_2}\big((S^\top I^{|\mathcal{V}_s|})_{\mathcal{N}_T(j),:},\, A^{(t)}_{\mathcal{N}_T(j),\mathcal{N}_T(j)}\big)_j = \vec{o}^{(t)}_j \qquad (10)$$\nBecause $\Psi_{\theta_2}$ is as powerful as the WL heuristic in distinguishing graph structures (Xu et al., 2019c; Morris et al., 2019) and operates on injective node colorings $I^{|\mathcal{V}_s|}$, it has the power to distinguish any graph structure from $\mathcal{G}_s[\mathcal{N}_T(i)] = (\mathcal{N}_T(i), I^{|\mathcal{V}_s|}_{\mathcal{N}_T(i),:}, A^{(s)}_{\mathcal{N}_T(i),\mathcal{N}_T(i)})$, cf. (Murphy et al., 2019). Since $\vec{o}^{(s)}_i$ holds information about every node in $\mathcal{G}_s[\mathcal{N}_T(i)]$, it necessarily holds that $\mathcal{G}_s[\mathcal{N}_T(i)] \simeq \mathcal{G}_t[\mathcal{N}_T(j)]$ in case $\vec{o}^{(s)}_i = \vec{o}^{(t)}_j$, where $\simeq$ denotes the labeled graph isomorphism relation. Hence, there exists an isomorphism $P \in \{0, 1\}^{|\mathcal{N}_T(i)| \times |\mathcal{N}_T(j)|}$ between $\mathcal{G}_s[\mathcal{N}_T(i)]$ and $\mathcal{G}_t[\mathcal{N}_T(j)]$ such that\n$$I^{|\mathcal{V}_s|}_{\mathcal{N}_T(i),:} = P\,(S^\top I^{|\mathcal{V}_s|})_{\mathcal{N}_T(j),:} \quad \text{and} \quad A^{(s)}_{\mathcal{N}_T(i),\mathcal{N}_T(i)} = P\, A^{(t)}_{\mathcal{N}_T(j),\mathcal{N}_T(j)}\, P^\top \qquad (11)$$\nWith $I^{|\mathcal{V}_s|}$ being the identity matrix, it follows that $I^{|\mathcal{V}_s|}_{\mathcal{N}_T(i),:} = P\, S^\top_{\mathcal{N}_T(j),:}$. Furthermore, it holds that $I^{|\mathcal{V}_s|}_{\mathcal{N}_T(i),\mathcal{N}_T(i)} = P\, S^\top_{\mathcal{N}_T(j),\mathcal{N}_T(i)}$ when reducing $I^{|\mathcal{V}_s|}_{\mathcal{N}_T(i),:}$ to its column-wise non-zero entries. It follows that $S_{\mathcal{N}_T(i),\mathcal{N}_T(j)} = P$ is a permutation matrix describing an isomorphism.
To evaluate the impact of a trainable refinement procedure, we replicated the experiments of Sections 4.2 and 4.4 by implementing Ψθ2 via a non-trainable, one-layer GNN instantiation Ψθ2(X,A,E) = AX .\nThe results in Tables 5 and 6 show that using trainable neural networks Ψθ2 consistently improves upon the results of using the fixed-function message passing scheme. While it is difficult to encode meaningful similarities between node and edge features in a fixed-function pipeline, our approach is able to learn how to make use of those features to guide the refinement procedure further. In addition, it allows us to choose from a variety of task-dependent GNN operators, e.g., for learning geometric/edge conditioned patterns or for fulfilling injectivity requirements. The theoretical expressivity discussed in Section 5 could even be enhanced by making use of higher-order GNNs, which we leave for future work." }, { "heading": "E ROBUSTNESS TOWARDS NODE ADDITION OR REMOVAL", "text": "To experimentally validate the robustness of our approach towards node addition (or removal), we conducted additional synthetic experiments in a similar fashion to Xu et al. (2019b). We form graph-pairs by treating an Erdős & Rényi graph with |Vs| ∈ {50, 100} nodes and edge probability p ∈ {0.1, 0.2} as our source graph Gs. The target graph Gt is then constructed by first adding q% noisy nodes to the source graph, i.e., |Vt| = (1 + q%)|Vs|, and generating edges between these nodes and all other nodes based on the edge probability p afterwards. We use the same network architecture and training procedure as described in Section 4.1.\nFigure 3 visualizes the Hits@1 for different choices of |Vs|, p and q ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5}. As one can see, our consensus stage is extremely robust to the addition or removal of nodes while the first stage alone has major difficulties in finding the right matching. This can be explained by the fact that unmatched nodes do not have any influence on the neighborhood consensus error since those nodes do not obtain a color from the functional map given by S. Our neural architecture is able to detect and gradually decrease any false positive influence of these nodes in the refinement stage." }, { "heading": "F RELATED WORK I", "text": "Identifying correspondences between the nodes of two graphs is a problem arising in various domains and has been studied under different terms. In graph theory, the combinatorial maximum common subgraph isomorphism problem is studied, which asks for the largest graph that is contained as subgraph in two given graphs. The problem is NP-hard in general and remains so even in trees (Garey & Johnson, 1979) unless the common subgraph is required to be connected (Matula, 1978). Moreover, most variants of the problem are difficult to approximate with theoretical guarantees (Kann, 1992). We refer the reader to the survey by Kriege et al. (2019b) for a overview of the complexity results noting that exact polynomial-time algorithms are available for specific problem variants only that are most relevant in cheminformatics.\nFundamentally different techniques have been developed in bioinformatics and computer vision, where the problem is commonly referred to as network alignment or graph matching. In these areas large networks without any specific structural properties are common and the studied techniques are non-exact. 
In graph matching, for two graphs of order $n$ with adjacency matrices $A_s$ and $A_t$, respectively, typically the function\n$$\|A_s - S A_t S^\top\|_F^2 = \|A_s\|_F^2 + \|A_t\|_F^2 - 2 \sum_{i,i' \in \mathcal{V}_s,\; j,j' \in \mathcal{V}_t} A^{(s)}_{i,i'} A^{(t)}_{j,j'} S_{i,j} S_{i',j'} \qquad (12)$$\nis to be minimized, where $S \in \mathcal{P}$ with $\mathcal{P}$ the set of $n \times n$ permutation matrices, and $\|A\|_F^2 = \sum_{i,i' \in \mathcal{V}} A_{i,i'}^2$ denotes the squared Frobenius norm. Since the first two terms of the right-hand side do not depend on $S$, minimizing Equation (12) is equivalent, in terms of optimal solutions, to the problem of Equation (1). We briefly summarize important related work in graph matching and refer the reader to the recent survey by Yan et al. (2016) for a more detailed discussion. There is a long line of research trying to minimize Equation (12) for $S \in [0, 1]^{n \times n}$ by a Frank-Wolfe type algorithm (Jaggi, 2013), finally projecting the fractional solution to $\mathcal{P}$ (Gold & Rangarajan, 1996; Zaslavskiy et al., 2009; Leordeanu et al., 2009; Egozi et al., 2013; Zhou & De la Torre, 2016). However, the applicability of relaxation and projection is still poorly understood and only few theoretical results exist (Aflalo et al., 2015; Lyzinski et al., 2016). A classical result by Tinhofer (1991) states that the WL heuristic distinguishes two graphs $\mathcal{G}_s$ and $\mathcal{G}_t$ if and only if there is no fractional $S$ such that the objective function in Equation (12) takes the value 0. Kersting et al. (2014) showed how the Frank-Wolfe algorithm can be modified to obtain the WL partition. Aflalo et al. (2015) proved that the standard relaxation yields a correct solution for a particular class of asymmetric graphs, which can be characterized by the spectral properties of their adjacency matrix. Finally, Bento & Ioannidis (2018) studied various relaxations, their complexity and properties. Other approaches to graph matching exist, e.g., based on spectral relaxations (Umeyama, 1988; Leordeanu & Hebert, 2005) or random walks (Gori et al., 2005). The problem of graph matching is closely related to the notoriously hard quadratic assignment problem (QAP) (Zhou & De la Torre, 2016), which has been studied in operations research for decades. Equation (1) can be directly interpreted as Koopmans-Beckmann's QAP. The more recent literature on graph matching typically considers a weighted version, where node and edge similarities are taken into account. This leads to the formulation as Lawler's QAP, which involves an affinity matrix of size $n^2 \times n^2$ and is computationally demanding.\nZhou & De la Torre (2016) proposed to factorize the affinity matrix into smaller matrices and incorporated global geometric constraints. Zhang et al. (2019c) studied kernelized graph matching, where the node and edge similarities are kernels, which allows to express the graph matching problem again as Koopmans-Beckmann's QAP in the associated Hilbert space. Inspired by established methods for Maximum-A-Posteriori (MAP) inference in conditional random fields, Swoboda et al. (2017) studied several Lagrangean decompositions of the graph matching problem, which are solved by dual ascent algorithms, also known as message passing. Specific message passing schedules and update mechanisms leading to state-of-the-art performance in graph matching tasks have been identified experimentally. Recently, functional representations for graph matching have been proposed as a generalizing concept, with the additional goal of avoiding the construction of the affinity matrix (Wang et al., 2019a).
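For reference, the objective of Equation (12) is straightforward to evaluate for a given correspondence matrix; a small sketch (our own illustration):

```python
import torch

def matching_objective(a_s, a_t, s):
    """Graph matching objective of Equation (12) for a correspondence
    matrix s (soft or a hard permutation); lower is better."""
    return ((a_s - s @ a_t @ s.t()) ** 2).sum()

n = 5
perm = torch.eye(n)[torch.randperm(n)]         # random ground-truth permutation
a_t = (torch.rand(n, n) < 0.4).float()
a_t = torch.triu(a_t, 1); a_t = a_t + a_t.t()  # random undirected target graph
a_s = perm @ a_t @ perm.t()                    # isomorphic source graph
print(matching_objective(a_s, a_t, perm))      # tensor(0.) at the true matching
```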
Graph edit distance. A related concept studied in computer vision is the graph edit distance, which measures the minimum cost required to transform a graph into another graph by adding, deleting and substituting vertices and edges. The idea was proposed for pattern recognition tasks more than 30 years ago (Sanfeliu & Fu, 1983). However, its computation is NP-hard, since it generalizes the maximum common subgraph problem (Bunke, 1997). Moreover, it is also closely related to the quadratic assignment problem (Bougleux et al., 2017). Recently, several elaborate exact algorithms for computing the graph edit distance have been proposed (Gouda & Hassaan, 2016; Lerouge et al., 2017; Chen et al., 2019), but they are still limited to small graphs. Therefore, heuristics based on the assignment problem have been proposed (Riesen & Bunke, 2009) and are widely used in practice (Stauffer et al., 2017). The original approach requires cubic running time, which can be reduced to quadratic time using greedy strategies (Riesen et al., 2015a;b), and even to linear time for restricted cost functions (Kriege et al., 2019a).\nNetwork alignment. The problem of network alignment is typically defined analogously to Equation (1), where in addition a similarity function between pairs of nodes is given. Most algorithms follow a two-step approach: First, an $n \times n$ node-to-node similarity matrix $M$ is computed from the given similarity function and the topology of the two graphs. Then, in the second step, an alignment is computed by solving the assignment problem for $M$. Singh et al. (2008) proposed ISORANK, which is based on the adjacency matrix of the product graph $K = A_s \otimes A_t$ of $\mathcal{G}_s$ and $\mathcal{G}_t$, where $\otimes$ denotes the Kronecker product. The matrix $M$ is obtained by applying PAGERANK (Page et al., 1999), using a normalized version of $K$ as the GOOGLE matrix and the node similarities as the personalization vector. Kollias et al. (2012) proposed an efficient approximation of ISORANK via decomposition techniques to avoid generating the product graph of quadratic size. Zhang (2016) presents an extension supporting vertex and edge similarities and proposes its computation using non-exact techniques. Klau (2009) proposed to solve network alignment by linearizing the quadratic optimization problem to obtain an integer linear program, which is then approached via Lagrangian relaxation. Bayati et al. (2013) developed a message passing algorithm for sparse network alignment, where only a small number of matches between the vertices of the two graphs are allowed.\nThe techniques briefly summarized above aim to find an optimal correspondence according to a clearly defined objective function. In practical applications, it is often difficult to specify node and edge similarity functions. Recently, it has been proposed to learn such functions for a specific task, e.g., in the form of a cost model for the graph edit distance (Cortés et al., 2019). A more principled approach has been proposed by Caetano et al. (2009), where the goal is to learn correspondences." }, { "heading": "G RELATED WORK II", "text": "The method presented in this work is related to different lines of research. Deep graph matching procedures have been investigated from multiple perspectives, e.g., by utilizing local node feature matchings and cross-graph embeddings (Li et al., 2019). The idea of refining local feature matchings by enforcing neighborhood consistency has been relevant for several years for matching in images (Sattler et al., 2009).
Furthermore, the functional maps framework aims to solve a similar problem for manifolds (Halimi et al., 2019).\nDeep graph matching. Recently, the problem of graph matching has been heavily investigated in a deep fashion. For example, Zanfir & Sminchisescu (2018), Wang et al. (2019b) and Zhang & Lee (2019) develop supervised deep graph matching networks based on displacement and combinatorial objectives, respectively. Zanfir & Sminchisescu (2018) model the graph matching affinity via a differentiable, but unlearnable spectral graph matching solver (Leordeanu & Hebert, 2005). In contrast, our matching procedure is fully learnable. Wang et al. (2019b) use node-wise features in combination with dense node-to-node cross-graph affinities, distribute them in a local fashion, and adopt Sinkhorn normalization for the final task of linear assignment. Zhang & Lee (2019) propose a compositional message passing algorithm that maps point coordinates into a high-dimensional space. The final matching procedure is done by computing the pairwise inner product between point embeddings. However, neither of these approaches can naturally resolve violations of inconsistent neighborhood assignments as we do in our work.\nXu et al. (2019b) tackle the problem of graph matching by relating it to the Gromov-Wasserstein discrepancy (Peyré et al., 2016). In addition, the optimal transport objective is enhanced by simultaneously learning node embeddings that are meant to account for the noise in both graphs. In a follow-up work, Xu et al. (2019a) extend this concept to the tasks of multi-graph partitioning and matching by learning a Gromov-Wasserstein barycenter. Our approach also resembles the optimal transport between nodes, but works in a supervised fashion for sets of graphs and is therefore able to generalize to unseen graph instances.\nIn addition, the task of network alignment has recently been investigated from multiple perspectives. Derr et al. (2019) leverage CYCLEGANs (Zhu et al., 2017) to align NODE2VEC embeddings (Grover & Leskovec, 2016) and find matchings based on the nearest neighbor in the embedding space. Zhang et al. (2019a) design a deep graph model based on global and local network topology preservation as auxiliary tasks. Heimann et al. (2018) utilize a fast, but purely local and greedy matching procedure based on local node embedding similarity.\nFurthermore, Bai et al. (2019) use shared graph neural networks to approximate the graph edit distance between two graphs. Here, a (non-differentiable) histogram of correspondence scores is used to fine-tune the output of the network. In a follow-up work, Bai et al. (2018) proposed to order the correspondence matrix in a breadth-first-search fashion and to process it further with the help of traditional CNNs. Both approaches only operate on local node embeddings, and are hence prone to match correspondences inconsistently.\nIntra- and inter-graph message passing. The concept of enhancing intra-graph node embeddings by inter-graph node embeddings has already been heavily investigated in practice (Li et al., 2019; Wang et al., 2019b; Xu et al., 2019d). Li et al. (2019) and Wang et al. (2019b) enhance the GNN operator by aggregating information not only from local neighbors, but also from similar embeddings in the other graph by utilizing a cross-graph matching procedure. Xu et al. (2019d) leverage alternating GNNs to propagate local features of one graph throughout the second graph.
Wang & Solomon (2019) tackle the problem of finding an unknown rigid motion between point clouds by relating it to a point cloud matching problem followed by a differentiable SVD module. Intra-graph node embeddings are passed through a Transformer module before feature matching based on inner product similarity scores takes place. However, neither of these approaches is designed to achieve a consistent matching, since they operate only on localized node embeddings, which alone are not sufficient to resolve ambiguities in the matchings. Nonetheless, we argue that these methods can be used to strengthen the initial feature matching procedure, making our approach orthogonal to improvements in this field.\nNeighborhood consensus for image matching. Methods to obtain consistency of correspondences in local neighborhoods have a rich history in computer vision, dating back several years (Sattler et al., 2009; Sivic & Zisserman, 2003; Schmid & Mohr, 1997). They are known for heavily improving results of local feature matching procedures while being computationally efficient. Recently, a deep neural network for neighborhood consensus using 4D convolution was proposed (Rocco et al., 2018). While it is related to our method, the 4D convolution cannot be efficiently transferred to the graph domain directly, since it would lead to applying a GNN on the product graph with $\mathcal{O}(n^2)$ nodes and $\mathcal{O}(n^4)$ edges. Our algorithm also infers errors for the (sparse) product graph, but performs the necessary computations on the original graphs.\nFunctional maps. The functional maps framework was proposed to provide a way to define continuous maps between function spaces on manifolds and is commonly applied to solve the task of 3D shape correspondence (Ovsjanikov et al., 2012; Litany et al., 2017; Rodolà et al., 2017; Halimi et al., 2019). Recently, a similar approach was presented to find functional correspondences between graph function spaces (Wang et al., 2019a). The functional map is established by using a low-dimensional basis representation, e.g., the eigenbasis of the graph Laplacian as a generalized Fourier transform. Since the basis is usually truncated to the $k$ vectors with the largest eigenvalues, these approaches focus on establishing global correspondences. However, such global methods have the inherent disadvantage that they often fail to find partial matchings due to the domain-dependent eigenbasis. Furthermore, the basis computation has to be approximated in order to scale to large inputs." }, { "heading": "H DATASET STATISTICS", "text": "We give detailed descriptions of all datasets used in our experiments, cf. Tables 7, 8, 9 and 10.\nTable 7: Statistics of the WILLOW-OBJECTCLASS dataset.\nCategory | Graphs | Keypoints | Edges\nFace | 108 | 10 | 21-22\nMotorbike | 40 | 10 | 21-22\nCar | 40 | 10 | 18-21\nDuck | 50 | 10 | 19-21\nWinebottle | 66 | 10 | 19-22\nTable 9: Statistics of the PASCALVOC dataset with Berkeley annotations.\nCategory | Train graphs | Test graphs | Keypoints | Edges\nAeroplane | 468 | 136 | 1-16 | 0-41\nBicycle | 210 | 53 | 2-11 | 1-26\nBird | 613 | 117 | 1-12 | 0-30\nBoat | 411 | 88 | 1-11 | 0-25\nBottle | 466 | 120 | 1-8 | 0-17\nBus | 288 | 52 | 1-8 | 0-17\nCar | 522 | 160 | 1-13 | 0-27\nCat | 415 | 101 | 3-16 | 3-40\nChair | 298 | 63 | 1-10 | 0-23\nCow | 257 | 55 | 1-16 | 0-40\nDiningtable | 27 | 5 | 2-8 | 2-8\nDog | 608 | 147 | 1-16 | 0-41\nHorse | 217 | 45 | 2-16 | 1-38\nMotorbike | 234 | 60 | 1-10 | 0-23\nPerson | 539 | 156 | 4-19 | 5-49\nPottedplant | 429 | 99 | 1-6 | 0-11\nSheep | 338 | 73 | 1-16 | 0-39\nSofa | 73 | 8 | 2-12 | 1-27\nTrain | 166 | 43 | 1-6 | 0-10\nTV Monitor | 374 | 90 | 1-8 | 0-17"
}, { "heading": "I QUALITATIVE KEYPOINT MATCHING RESULTS", "text": "Figure 4 visualizes qualitative examples from the task of keypoint matching on the WILLOW-OBJECTCLASS dataset. Examples were selected as follows: Figure 4(a), (b) and (c) show examples where the initial feature matching procedure fails, but where our refinement procedure is able to recover all correspondences succesfully. Figure 4(d) visualizes a rare failure case. However, while the initial feature matching procedure maps most of the keypoints to the same target keypoint, our refinement strategy is still able to succesfully resolve this violation. In addition, note that the target image contains wrong labels, e.g., the eye of the duck, so that some keypoint mappings are mistakenly considered to be wrong." } ]
2020
DEEP GRAPH MATCHING CONSENSUS
SP:454c98d15b785ccd0128dbf7d8209adbda1fd2e8
[ "The paper provide an extensive review of current advances in uncertainty estimation in neural networks with the analysis of drawbacks of currently used uncertainty metrics and comparison on scale the recent method to estimate uncertainty. The paper covers a lot of uncertainty metrics and a wide range of methods. The paper focuses on in-domain uncertainty estimation complementing the recent similar review on out-of-domain uncertainty estimation.", "This paper mainly concerns the quality of in-domain uncertainty for image classification. After exploring common standards for uncertainty quantification, the authors point out pitfalls of existing metrics by investigating different ensembling techniques and introduce a novel metric called deep ensemble equivalent (DEE) that essentially measures the number of independent models in an ensemble of DNNs. Based on the DEE score, a detailed evaluation of modern DNN ensembles is performed on CIFAR-10/100 and ImageNet datasets." ]
Uncertainty estimation and ensembling methods go hand-in-hand. Uncertainty estimation is one of the main benchmarks for assessment of ensembling performance. At the same time, deep learning ensembles have provided state-of-the-art results in uncertainty estimation. In this work, we focus on in-domain uncertainty for image classification. We explore the standards for its quantification and point out pitfalls of existing metrics. Avoiding these pitfalls, we perform a broad study of different ensembling techniques. To provide more insight in this study, we introduce the deep ensemble equivalent score (DEE) and show that many sophisticated ensembling techniques are equivalent to an ensemble of only a few independently trained networks in terms of test performance. video / code / blog post
[ { "affiliations": [], "name": "DEEP LEARNING" }, { "affiliations": [], "name": "Arsenii Ashukha" }, { "affiliations": [], "name": "Alexander Lyzhov" }, { "affiliations": [], "name": "Dmitry Molchanov" }, { "affiliations": [], "name": "Dmitry Vetrov" } ]
[ { "authors": [ "Andrei Atanov", "Arsenii Ashukha", "Dmitry Molchanov", "Kirill Neklyudov", "Dmitry Vetrov" ], "title": "Uncertainty estimation via stochastic batch normalization", "venue": "In International Symposium on Neural Networks,", "year": 2019 }, { "authors": [ "Anoop Korattikara Balan", "Vivek Rathod", "Kevin P Murphy", "Max Welling" ], "title": "Bayesian dark knowledge", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Charles Blundell", "Julien Cornebise", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Weight uncertainty in neural networks", "venue": "arXiv preprint arXiv:1505.05424,", "year": 2015 }, { "authors": [ "Glenn W Brier" ], "title": "Verification of forecasts expressed in terms of probability", "venue": "Monthly weather review,", "year": 1950 }, { "authors": [ "Jacob Buckman", "Danijar Hafner", "George Tucker", "Eugene Brevdo", "Honglak Lee" ], "title": "Sampleefficient reinforcement learning with stochastic ensemble value expansion", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yufei Cui", "Wuguannan Yao", "Qiao Li", "Antoni B Chan", "Chun Jason Xue" ], "title": "Accelerating monte carlo bayesian inference via approximating predictive uncertainty over simplex", "venue": null, "year": 1905 }, { "authors": [ "Yukun Ding", "Jinglan Liu", "Jinjun Xiong", "Yiyu Shi" ], "title": "Evaluation of neural network uncertainty estimation with application to resource-constrained platforms", "venue": null, "year": 1903 }, { "authors": [ "Yarin Gal" ], "title": "Uncertainty in deep learning", "venue": "PhD thesis, PhD thesis, University of Cambridge,", "year": 2016 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In international conference on machine learning,", "year": 2016 }, { "authors": [ "Timur Garipov", "Pavel Izmailov", "Dmitrii Podoprikhin", "Dmitry P Vetrov", "Andrew G Wilson" ], "title": "Loss surfaces, mode connectivity, and fast ensembling of dnns", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Fredrik K Gustafsson", "Martin Danelljan", "Thomas B Schön" ], "title": "Evaluating scalable bayesian deep learning methods for robust computer vision", "venue": null, "year": 1906 }, { "authors": [ "Lars Kai Hansen", "Peter Salamon" ], "title": "Neural network ensembles", "venue": "IEEE Transactions on Pattern Analysis & Machine Intelligence,", "year": 1990 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Gao Huang", 
"Yixuan Li", "Geoff Pleiss", "Zhuang Liu", "John E Hopcroft", "Kilian Q Weinberger" ], "title": "Snapshot ensembles: Train 1, get m for free", "venue": "arXiv preprint arXiv:1704.00109,", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Pavel Izmailov", "Dmitrii Podoprikhin", "Timur Garipov", "Dmitry Vetrov", "Andrew Gordon Wilson" ], "title": "Averaging weights leads to wider optima and better generalization", "venue": "arXiv preprint arXiv:1803.05407,", "year": 2018 }, { "authors": [ "Durk P Kingma", "Tim Salimans", "Max Welling" ], "title": "Variational dropout and the local reparameterization trick", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Ananya Kumar", "Percy S Liang", "Tengyu Ma" ], "title": "Verified uncertainty calibration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Christos Louizos", "Max Welling" ], "title": "Multiplicative normalizing flows for variational bayesian neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Wesley Maddox", "Timur Garipov", "Pavel Izmailov", "Dmitry Vetrov", "Andrew Gordon Wilson" ], "title": "A simple baseline for bayesian uncertainty in deep learning", "venue": null, "year": 1902 }, { "authors": [ "Andrey Malinin", "Mark Gales" ], "title": "Predictive uncertainty estimation via prior networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "James Martens", "Roger Grosse" ], "title": "Optimizing neural networks with kronecker-factored approximate curvature", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Dmitry Molchanov", "Arsenii Ashukha", "Dmitry Vetrov" ], "title": "Variational dropout sparsifies deep neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Marcin Możejko", "Mateusz Susik", "Rafał Karczewski" ], "title": "Inhibited softmax for uncertainty estimation in neural networks", "venue": "arXiv preprint arXiv:1810.01861,", "year": 2018 }, { "authors": [ "Malik Sajjad Ahmed Nadeem", "Jean-Daniel Zucker", "Blaise Hanczar" ], "title": "Accuracy-rejection curves (arcs) for comparing classification methods with a reject option", "venue": "In Machine Learning in Systems Biology,", "year": 2009 }, { "authors": [ "Mahdi Pakdaman Naeini", "Gregory Cooper", "Milos Hauskrecht" ], "title": "Obtaining well calibrated probabilities using bayesian binning", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Jeremy Nixon", "Mike Dusenberry", "Linchuan Zhang", "Ghassen Jerfel", "Dustin Tran" ], "title": "Measuring calibration in deep learning", "venue": null, "year": 2019 }, { "authors": [ "Yaniv Ovadia", 
"Emily Fertig", "Jie Ren", "Zachary Nado", "D Sculley", "Sebastian Nowozin", "Joshua V Dillon", "Balaji Lakshminarayanan", "Jasper Snoek" ], "title": "Can you trust your model’s uncertainty? evaluating predictive uncertainty under dataset shift", "venue": "arXiv preprint arXiv:1906.02530,", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in PyTorch", "venue": "In NIPS Autodiff Workshop,", "year": 2017 }, { "authors": [ "Joaquin Quinonero-Candela", "Carl Edward Rasmussen", "Fabian Sinz", "Olivier Bousquet", "Bernhard Schölkopf" ], "title": "Evaluating predictive uncertainty challenge", "venue": "In Machine Learning Challenges Workshop,", "year": 2005 }, { "authors": [ "Hippolyt Ritter", "Aleksandar Botev", "David Barber" ], "title": "A scalable laplace approximation for neural networks. 2018", "venue": null, "year": 2018 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Burr Settles" ], "title": "Active learning", "venue": "Synthesis Lectures on Artificial Intelligence and Machine Learning,", "year": 2012 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Leslie N Smith", "Nicholay Topin" ], "title": "Super-convergence: Very fast training of neural networks using large learning rates", "venue": "In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications,", "year": 2019 }, { "authors": [ "Casper Kaae Sønderby", "Tapani Raiko", "Lars Maaløe", "Søren Kaae Sønderby", "Ole Winther" ], "title": "How to train deep variational autoencoders and probabilistic ladder networks", "venue": "In 33rd International Conference on Machine Learning", "year": 2016 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. 
The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Mattias Teye", "Hossein Azizpour", "Kevin Smith" ], "title": "Bayesian uncertainty estimation for batch normalized deep networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Marcin B Tomczak", "Siddharth Swaroop", "Richard E Turner" ], "title": "Neural network ensembles and variational inference revisited", "venue": "In 1st Symposium on Advances in Approximate Bayesian Inference,", "year": 2018 }, { "authors": [ "Linh Tran", "Bastiaan S Veeling", "Kevin Roth", "Jakub Swiatkowski", "Joshua V Dillon", "Jasper Snoek", "Stephan Mandt", "Tim Salimans", "Sebastian Nowozin", "Rodolphe Jenatton" ], "title": "Hydra: Preserving ensemble diversity for model distillation", "venue": "arXiv preprint arXiv:2001.04694,", "year": 2020 }, { "authors": [ "Karen Ullrich", "Edward Meeds", "Max Welling" ], "title": "Soft weight-sharing for neural network compression", "venue": "arXiv preprint arXiv:1702.04008,", "year": 2017 }, { "authors": [ "Juozas Vaicenavicius", "David Widmann", "Carl Andersson", "Fredrik Lindsten", "Jacob Roll", "Thomas B Schon" ], "title": "Evaluating model calibration in classification", "venue": null, "year": 1902 }, { "authors": [ "Kuan-Chieh Wang", "Paul Vicol", "James Lucas", "Li Gu", "Roger Grosse", "Richard Zemel" ], "title": "Adversarial distillation of bayesian neural network posteriors", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Sida Wang", "Christopher Manning" ], "title": "Fast dropout training", "venue": "In international conference on machine learning,", "year": 2013 }, { "authors": [ "Max Welling", "Yee W Teh" ], "title": "Bayesian learning via stochastic gradient langevin dynamics", "venue": "In Proceedings of the 28th international conference on machine learning", "year": 2011 }, { "authors": [ "Anqi Wu", "Sebastian Nowozin", "Edward Meeds", "Richard E Turner", "José Miguel Hernández-Lobato", "Alexander L Gaunt" ], "title": "Deterministic variational inference for robust bayesian neural networks. 2018", "venue": null, "year": 2018 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "arXiv preprint arXiv:1710.09412,", "year": 2017 }, { "authors": [ "Ruqi Zhang", "Chunyuan Li", "Jianyi Zhang", "Changyou Chen", "Andrew Gordon Wilson" ], "title": "Cyclical stochastic gradient mcmc for bayesian deep learning", "venue": null, "year": 1902 } ]
[ { "heading": null, "text": "video / code / blog post" }, { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) have become one of the most popular families of machine learning models. The predictive performance of DNNs for classification is often measured in terms of accuracy. However, DNNs have been shown to yield inaccurate and unreliable probability estimates, or predictive uncertainty (Guo et al., 2017). This has brought considerable attention to the problem of uncertainty estimation with deep neural networks.\nThere are many faces to uncertainty estimation. Different desirable uncertainty estimation properties of a model require different settings and metrics to capture them. Out-of-domain uncertainty of the model is measured on the data that does not follow the same distribution as the training dataset (out-of-domain data). Out-of-domain data can include images corrupted with rotations or blurring, adversarial attacks (Szegedy et al., 2013) or data points from a completely different dataset. The model is expected to be resistant to data corruptions and to be more uncertain on out-of-domain data than on in-domain data. On the contrary, in-domain uncertainty of the model is measured on data taken from the training data distribution, i.e. data from the same domain. In this setting the model is expected to produce reliable probability estimates, e.g. the model shouldn’t be too overconfident in its wrong predictions.\nPitfalls of metrics We show that many common metrics of in-domain uncertainty estimation (e.g. log-likelihood, Brier score, calibration metrics, etc.) are either not comparable across different models or fail to provide a reliable ranking. We address some of the stated pitfalls and point out more reasonable evaluation schemes. For instance, although temperature scaling is not a standard for ensembling techniques, it is a must for a fair evaluation. With this in mind, the\n∗Equal contribution §HSE refers to National Research University Higher School of Economics †Skoltech refers to Skolkovo Institute of Science and Technology ‡HSE refers to Samsung-HSE Laboratory, National Research University Higher School of Economics\ncalibrated log-likelihood avoids most of the stated pitfalls and generally is a reasonable metric for in-domain uncertainty estimation task.\nPitfalls of ensembles Equipped with the proposed evaluation framework, we are revisiting the evaluation of ensembles of DNNs—one of the major tools for uncertainty estimation. We introduce the deep ensemble equivalent (DEE) score that measures the number of independently trained models that, when ensembled, achieve the same performance as the ensembling technique of interest. The DEE score allows us to compare ensembling techniques across different datasets and architectures using a unified scale. Our study shows that most of the popular ensembling techniques require averaging predictions across dozens of samples (members of an ensemble), yet are essentially equivalent to an ensemble of only few independently trained models.\nMissing part of ensembling In our study, test-time data augmentation (TTA) turned out to be a surprisingly strong baseline for uncertainty estimation and a simple way to improve ensembles. Despite being a popular technique in large-scale classification, TTA seems to be overlooked in the community of uncertainty estimation and ensembling." 
}, { "heading": "2 SCOPE OF THE PAPER", "text": "We use standard benchmark problems of image classification which comprise a common setting in research on learning ensembles of neural networks. There are other relevant settings where the correctness of probability estimates can be a priority, and ensembling techniques are used to improve it. These settings include, but are not limited to, regression, language modeling (Gal, 2016), image segmentation (Gustafsson et al., 2019), active learning (Settles, 2012) and reinforcement learning (Buckman et al., 2018; Chua et al., 2018).\nWe focus on in-domain uncertainty, as opposed to out-of-domain uncertainty. Out-of-domain uncertainty includes detection of inputs that come from a completely different domain or have been corrupted by noise or adversarial attacks. This setting has been thoroughly explored by (Ovadia et al., 2019).\nWe only consider methods that are trained on clean data with simple data augmentation. Some other methods use out-of-domain data (Malinin & Gales, 2018) or more elaborate data augmentation, e.g. mixup (Zhang et al., 2017) or adversarial training (Lakshminarayanan et al., 2017) to improve accuracy, robustness and uncertainty.\nWe use conventional training procedures. We use the stochastic gradient descent (SGD) and use batch normalization (Ioffe & Szegedy, 2015), both being the de-facto standards in modern deep learning. We refrain from using more elaborate optimization techniques including works on superconvergence (Smith & Topin, 2019) and stochastic weight averaging (SWA) (Izmailov et al., 2018). These techniques can be used to drastically accelerate training and to improve the predictive performance. Thus, we do not comment on the training time of different ensembling methods since the use of these and other more efficient training techniques would render such a comparison obsolete.\nA number of related works study ways of approximating and accelerating prediction in ensembles. The distillation mechanism allows to approximate the prediction of an ensemble by a single neural network (Hinton et al., 2015; Balan et al., 2015; Tran et al., 2020), whereas fast dropout (Wang & Manning, 2013) and deterministic variational inference (Wu et al., 2018) allow to approximate the predictive distribution of specific stochastic computation graphs. We measure the raw power of ensembling techniques without these approximations.\nAll of the aforementioned alternative settings are orthogonal to the scope of this paper and are promising points of interest for further research." }, { "heading": "3 PITFALLS OF IN-DOMAIN UNCERTAINTY ESTIMATION", "text": "No single metric measures all the desirable properties of uncertainty estimates obtained by a model of interest. Because of this, the community is using many different metrics in an attempt to capture the quality of uncertainty estimation, such as the Brier score (Brier, 1950), log-likelihood (Quinonero-Candela et al., 2005), metrics of calibration (Guo et al., 2017; Nixon et al., 2019), performance of misclassification detection (Malinin & Gales, 2018), and threshold–accuracy curves\n(Lakshminarayanan et al., 2017). In the section we highlight the pitfalls of the aforementioned metrics, and demonstrate that these pitfalls can significantly affect evaluation, changing the ranking of the methods.\nNotation We consider a classification problem with a dataset that consists of N training and n testing pairs (xi, y∗i ) ∼ p(x, y), where xi is an object and y∗i ∈ {1, . . . 
A probabilistic classifier maps an object xi into a predictive distribution p̂(y |xi). The predictive distribution p̂(y |xi) of a deep neural network is typically defined by the softmax function p̂(y |x) = Softmax(z(x)/T ), where z(x) is a vector of logits and T is a scalar parameter standing for the temperature of the predictive distribution. This scalar parameter is usually set to T = 1 or is tuned on a validation set (Guo et al., 2017). The maximum probability maxc p̂(y = c |xi) is called the confidence of a classifier p̂ on an object xi. I[·] denotes the indicator function throughout the text." }, { "heading": "3.1 LOG-LIKELIHOOD AND BRIER SCORE", "text": "The average test log-likelihood $\mathrm{LL} = \frac{1}{n}\sum_{i=1}^{n} \log \hat{p}(y = y_i^* \mid x_i)$ is a popular metric for measuring the quality of in-domain uncertainty of deep learning models. It directly penalizes high probability scores assigned to incorrect labels and low probability scores assigned to the correct labels y∗i .
LL is sensitive to the softmax temperature T . The temperature that has been implicitly learned during training can be far from optimal for the test data. However, a nearly optimal temperature can be found post-hoc by maximizing the log-likelihood on validation data. This approach is called temperature scaling or calibration (Guo et al., 2017). Despite its simplicity, temperature scaling results in a notable improvement in the LL.
While ensembling techniques tend to have a better temperature than single models, the default choice of T = 1 is still suboptimal. Comparing the LL at suboptimal temperatures—which is often the case in practice—can potentially produce an arbitrary ranking of different methods.
Comparison of the log-likelihood should only be performed at the optimal temperature.
Empirically, we demonstrate that the overall ordering of methods and also the best ensembling method according to the LL can vary depending on the temperature T . While this applies to most ensembling techniques (see Figure 10), this effect is most noticeable in experiments with data augmentation on ImageNet (Figure 1).
We introduce a new metric called the calibrated log-likelihood, which is the log-likelihood at the optimal temperature.
The calibrated log-likelihood considers a model and a post-training calibration as a unified system, aiming to measure all models under the equal conditions of an optimal temperature. This avoids penalizing a model for calibration error that can be eliminated by simple temperature scaling. The metric significantly affects the results of the comparison. For example, in Figure 10 the differences between Bayesian (VI, K-FAC, SWAG, dropout) and conventional non-Bayesian networks become much less pronounced, and in most cases conventional non-Bayesian networks match the performance of Bayesian ones (VI, K-FAC, Dropout) on ResNet110, ResNet164, and WideResNet.
We show how to obtain an unbiased estimate of the calibrated log-likelihood without a held-out validation set in Section 3.5.
LL also demonstrates a high correlation with accuracy (ρ > 0.86), which in the case of the calibrated LL becomes even stronger (ρ > 0.95). This suggests that while the (calibrated) LL measures the uncertainty of the model, it still has a significant dependence on the accuracy, and vice versa. A model with higher accuracy would likely have a higher log-likelihood. See Figure 9 in Appendix C for more details.
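To make the procedure concrete, here is a minimal sketch of temperature scaling and the resulting calibrated log-likelihood (a hypothetical NumPy/SciPy snippet, not the authors' implementation; logits and labels are assumed to be held-out arrays of shape [n, C] and [n]):

import numpy as np
from scipy.optimize import minimize_scalar

def log_likelihood(logits, labels, T=1.0):
    # Average log-probability of the true labels under Softmax(z(x) / T).
    z = logits / T
    log_probs = z - np.logaddexp.reduce(z, axis=1, keepdims=True)
    return log_probs[np.arange(len(labels)), labels].mean()

def calibrated_log_likelihood(val_logits, val_labels, test_logits, test_labels):
    # Tune T on validation data (optimizing log T keeps T positive) ...
    res = minimize_scalar(lambda t: -log_likelihood(val_logits, val_labels, np.exp(t)))
    # ... and report the log-likelihood on test data at the optimal T.
    return log_likelihood(test_logits, test_labels, np.exp(res.x))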
Brier score $\mathrm{BS} = \frac{1}{n}\frac{1}{C}\sum_{i=1}^{n}\sum_{c=1}^{C}\left(\mathbb{I}[y_i^* = c] - \hat{p}(y = c \mid x_i)\right)^2$ has also long been known as a metric for verification of predicted probabilities (Brier, 1950). Similarly to the log-likelihood, the Brier score penalizes low probabilities assigned to correct predictions and high probabilities assigned to wrong ones. It is also sensitive to the temperature of the softmax distribution and behaves similarly to the log-likelihood. While these metrics are not strictly equivalent, they show a high empirical correlation for a wide range of models on the CIFAR-10, CIFAR-100 and ImageNet datasets (see Figure 8 in Appendix C)." }, { "heading": "3.2 MISCLASSIFICATION DETECTION", "text": "Detection of wrong predictions of the model, or misclassifications, is a popular downstream problem relevant to the problem of in-domain uncertainty estimation. Since misclassification detection is essentially a binary classification problem, some papers measure its quality using conventional metrics for binary classification such as AUC-ROC and AUC-PR (Malinin & Gales, 2018; Cui et al., 2019; Możejko et al., 2018). These papers use an uncertainty criterion like confidence or predictive entropy H[p̂(y |xi)] as a prediction score. While these metrics can be used to assess the misclassification detection performance of a single model, they cannot be used to directly compare misclassification performance across different models. Correct and incorrect predictions are specific to every model; therefore, every model induces its own binary classification problem. The induced problems can differ significantly, since different models produce different confidences and misclassify different objects. In other words, comparing such metrics implies a comparison of the performance of classifiers that solve different classification problems. Such metrics are therefore incomparable.
AUCs for misclassification detection cannot be directly compared between different models.
While comparing AUCs is incorrect in the setting of misclassification detection, it is correct to compare these metrics in many out-of-domain data detection problems. In that case, both objects and targets of the induced binary classification problems remain the same for all models. All out-of-domain objects have a positive label and all in-domain objects have a negative label. Note that this condition does not necessarily hold in the problem of detection of adversarial attacks. Different models generally have different inputs after an adversarial attack, so such AUC-based metrics might still be flawed." }, { "heading": "3.3 CLASSIFICATION WITH REJECTION", "text": "Accuracy-confidence curves are another way to measure the performance of misclassification detection. These curves measure the accuracy on the set of objects with confidence maxc p̂(y = c |xi) above a certain threshold τ (Lakshminarayanan et al., 2017), ignoring or rejecting the others.
The main problem with accuracy-confidence curves is that they rely too much on calibration and the actual values of confidence. Models with different temperatures have different numbers of objects at each confidence level, which does not allow for a meaningful comparison. To overcome this problem, one can switch from thresholding by the confidence level to thresholding by the number of rejected objects. The corresponding curves are then less sensitive to temperature scaling and thus allow one to compare the rejection ability in a more meaningful way. Such curves are known as accuracy-rejection curves (Nadeem et al., 2009). In order to obtain a scalar metric for easy comparisons, one can compute the area under this curve, resulting in AU-ARC (Nadeem et al., 2009)."
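A minimal sketch of the Brier score and of the count-based rejection curve just described (a hypothetical NumPy snippet; probs is an [n, C] array of predicted probabilities and labels an [n] array of true classes):

import numpy as np

def brier_score(probs, labels):
    # BS from Section 3.1: squared error between the one-hot target and
    # the predictive distribution, averaged over objects and classes.
    n, C = probs.shape
    onehot = np.eye(C)[labels]
    return np.mean(np.sum((onehot - probs) ** 2, axis=1)) / C

def accuracy_rejection_curve(probs, labels):
    # Threshold by the number of rejected objects: reject the least
    # confident objects first and track accuracy on the retained ones.
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    order = np.argsort(-conf)  # most confident first
    retained_acc = np.cumsum(correct[order]) / np.arange(1, len(labels) + 1)
    return retained_acc  # retained_acc[k-1] = accuracy when keeping the top k

AU-ARC can then be obtained as the (trapezoidal) area under this curve.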
}, { "heading": "3.4 CALIBRATION METRICS", "text": "Informally speaking, a probabilistic classifier is calibrated if any predicted class probability is equal to the true class probability according to the underlying data distribution (see Vaicenavicius et al. (2019) for formal definitions). Any deviation from perfect calibration is called miscalibration. For brevity, we will use p̂i,c to denote p̂(y = c |xi) in the current section. Expected calibration error (ECE) (Naeini et al., 2015) is a metric that estimates model miscalibration by binning the assigned probability scores and comparing them to average accuracies inside these bins. Assuming Bm denotes the m-th bin and M is the overall number of bins, the ECE is defined as follows:
$\mathrm{ECE} = \sum_{m=1}^{M} \frac{|B_m|}{n} \left| \mathrm{acc}(B_m) - \mathrm{conf}(B_m) \right|$, (1)
where $\mathrm{acc}(B) = |B|^{-1} \sum_{i \in B} \mathbb{I}[\arg\max_c \hat{p}_{i,c} = y_i^*]$ and $\mathrm{conf}(B) = |B|^{-1} \sum_{i \in B} \hat{p}_{i, y_i^*}$.
A recent line of works on measuring calibration in deep learning (Vaicenavicius et al., 2019; Kumar et al., 2019; Nixon et al., 2019) outlines several problems of the ECE score. Firstly, ECE is a biased estimate of the true calibration. Secondly, ECE-like scores cannot be optimized directly since they are minimized by a model with constant uniform predictions, making the infinite temperature T = +∞ their global optimum. Thirdly, ECE only estimates miscalibration in terms of the maximum assigned probability, whereas practical applications may require the full predicted probability vector to be calibrated. Finally, the biases of ECE on different models may not be equal, rendering the miscalibration estimates incomparable. Similar concerns are also discussed by Ding et al. (2019).
Thresholded adaptive calibration error (TACE) was proposed as a step towards solving some of these problems (Nixon et al., 2019). TACE disregards all predicted probabilities that are less than a certain threshold (hence thresholded), chooses the bin locations adaptively so that each bin has the same number of objects (hence adaptive), and estimates miscalibration of probabilities across all classes in the prediction (not just the top-1 predicted class as in ECE). Assuming that B^TA_m denotes the m-th thresholded adaptive bin and M is the overall number of bins, TACE is defined as follows:
$\mathrm{TACE} = \frac{1}{CM} \sum_{c=1}^{C} \sum_{m=1}^{M} \frac{|B^{TA}_m|}{n} \left| \mathrm{objs}(B^{TA}_m, c) - \mathrm{conf}(B^{TA}_m, c) \right|$, (2)
where $\mathrm{objs}(B^{TA}, c) = |B^{TA}|^{-1} \sum_{i \in B^{TA}} \mathbb{I}[y_i^* = c]$ and $\mathrm{conf}(B^{TA}, c) = |B^{TA}|^{-1} \sum_{i \in B^{TA}} \hat{p}_{i,c}$.
Although TACE does solve several problems of ECE and is useful for measuring the calibration of a specific model, it still cannot be used as a reliable criterion for comparing different models. Theory suggests that it is still a biased estimate of true calibration, with a different bias for each model (Vaicenavicius et al., 2019). In practice, we find that TACE is sensitive to its two parameters, the number of bins and the threshold, and does not provide a consistent ranking of different models, as shown in Figure 2."
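A minimal sketch of the equal-width binned estimator in equation 1 (a hypothetical NumPy snippet; the TACE variant of equation 2 instead uses equal-count bins over all above-threshold class probabilities):

import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            # |B_m| / n * |acc(B_m) - conf(B_m)|
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece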
}, { "heading": "3.5 CALIBRATED LOG-LIKELIHOOD AND TEST-TIME CROSS-VALIDATION", "text": "There are two common ways to perform temperature scaling using a validation set when training on datasets that only feature public training and test sets (e.g. CIFARs). The public training set might be divided into a smaller training set and a validation set, or the public test set can be split into test and validation parts (Guo et al., 2017; Nixon et al., 2019). The problem with the first method is that the resulting models cannot be directly compared with all the other models that have been trained on the full training set. The second approach, however, provides an unbiased estimate of metrics such as log-likelihood and Brier score, but introduces more variance.
In order to reduce the variance of the second approach, we perform a “test-time cross-validation”. We randomly divide the test set into two equal parts, then compute metrics for each half of the test set using the temperature optimized on the other half. We repeat this procedure five times and average the results across different random partitions to reduce the variance of the computed metrics (a small sketch of this procedure is given below)."
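A minimal sketch of the test-time cross-validation (hypothetical; cll(fit_logits, fit_labels, eval_logits, eval_labels) is assumed to tune the temperature on the first pair and return the log-likelihood on the second, as in the calibrated log-likelihood sketch of Section 3.1):

import numpy as np

def test_time_cross_validation(logits, labels, cll, n_repeats=5, seed=0):
    rng = np.random.RandomState(seed)
    scores = []
    for _ in range(n_repeats):
        idx = rng.permutation(len(labels))
        a, b = np.array_split(idx, 2)
        # Tune T on one half and evaluate on the other, in both directions.
        scores.append(cll(logits[a], labels[a], logits[b], labels[b]))
        scores.append(cll(logits[b], labels[b], logits[a], labels[a]))
    return float(np.mean(scores))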
}, { "heading": "4 A STUDY OF ENSEMBLING & DEEP ENSEMBLE EQUIVALENT", "text": "Ensembles of deep neural networks have become a de-facto standard for uncertainty estimation and for improving the quality of deep learning models (Hansen & Salamon, 1990; Krizhevsky et al., 2009; Lakshminarayanan et al., 2017). There are two main directions of training ensembles of DNNs: training stochastic computation graphs and obtaining separate snapshots of neural network parameters.
Methods based on the paradigm of stochastic computation graphs introduce some kind of random noise over the weights or activations of deep learning models. When the model is trained, each sample of the noise corresponds to a member of the ensemble. During test time, the predictions are averaged across the noise samples. These methods include (test-time) data augmentation, dropout (Srivastava et al., 2014; Gal & Ghahramani, 2016), variational inference (Blundell et al., 2015; Kingma et al., 2015; Louizos & Welling, 2017), batch normalization (Ioffe & Szegedy, 2015; Teye et al., 2018; Atanov et al., 2019), Laplace approximation (Ritter et al., 2018) and many more.
Snapshot-based methods aim to obtain sets of weights for deep learning models and then average the predictions across these weights. The weights can be trained independently (e.g. deep ensembles (Lakshminarayanan et al., 2017)), collected at different stages of a training trajectory (e.g. snapshot ensembles (Huang et al., 2017) and fast geometric ensembles (Garipov et al., 2018)), or obtained from a sampling process (e.g. MCMC-based methods (Welling & Teh, 2011; Zhang et al., 2019)). These two paradigms can be combined. Some works suggest construction of ensembles of stochastic computation graphs (Tomczak et al., 2018), while others make use of the collected snapshots to construct a stochastic computation graph (Wang et al., 2018; Maddox et al., 2019).
In this paper we consider the following ensembling techniques: deep ensembles (Lakshminarayanan et al., 2017), snapshot ensembles (SSE by Huang et al. (2017)), fast geometric ensembling (FGE by Garipov et al. (2018)), SWA-Gaussian (SWAG by Maddox et al. (2019)), cyclical SGLD (cSGLD by Zhang et al. (2019)), variational inference (VI by Blundell et al. (2015)), K-FAC Laplace approximation (Ritter et al., 2018), dropout (Srivastava et al., 2014) and test-time data augmentation (Krizhevsky et al., 2009). These techniques were chosen to cover a diverse set of approaches while keeping their predictive performance in mind.
All these techniques can be summarized as distributions qm(ω) over parameters ω of computation graphs, where m stands for the technique. During testing, one can average the predictions across parameters ω ∼ qm(ω) to approximate the predictive distribution
$\hat{p}(y_i \mid x_i) \approx \int p(y_i \mid x_i, \omega) \, q_m(\omega) \, d\omega \simeq \frac{1}{K} \sum_{k=1}^{K} p(y_i \mid x_i, \omega_k), \quad \omega_k \sim q_m(\omega)$ (3)
For example, a deep ensemble of S networks can be represented in this form as a mixture of S Dirac's deltas $q_{\mathrm{DE}}(\omega) = \frac{1}{S} \sum_{s=1}^{S} \delta(\omega - \omega_s)$, centered at independently trained snapshots ωs. Similarly, a Bayesian neural network with a fully-factorized Gaussian approximate posterior distribution over the weight matrices and convolutional kernels ω is represented as qVI(ω) = N (ω |µ, diag(σ2)), µ and σ2 being the optimal variational means and variances respectively.
If one considers data augmentation as a part of the computational graph, it can be parameterized by the coordinates of the random crop and the flag for whether to flip the image horizontally or not. Sampling from the corresponding qaug(ω) would generate different ways to augment the data during inference. However, as data augmentation is present by default during the training of all other mentioned ensembling techniques, it is suitable to study it in combination with these methods and not as a separate ensembling technique. We perform such an evaluation in Section 4.3.
Typically, the approximation (equation 3) requires K independent forward passes through a neural network, making the test-time budget directly comparable across all methods; a small numerical sketch of this averaging is given below."
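As an aside, the averaging in equation 3 is a plain mean of per-member probability vectors (a hypothetical snippet; member_probs is a [K, n, C] array of per-sample softmax outputs):

import numpy as np

def ensemble_predict(member_probs):
    # Equation 3: average the predictive distributions over the K samples
    # omega_k ~ q_m(omega). Probabilities are averaged, not logits, so the
    # result is itself a valid predictive distribution.
    return member_probs.mean(axis=0)  # [n, C]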
}, { "heading": "4.1 DEEP ENSEMBLE EQUIVALENT", "text": "Most ensembling techniques under consideration are either bound to a single mode or provide positively correlated samples. A deep ensemble, on the other hand, is a simple technique that provides independent samples from different modes of the loss landscape, which, intuitively, should result in a better ensemble. Therefore deep ensembles can be considered a strong baseline for the performance of other ensembling techniques given a fixed test-time computation budget.
Comparing the performance of ensembling techniques is, however, a challenging problem. Different models on different datasets achieve different values of metrics; their dependence on the number of samples is non-trivial and varies depending on the specific model and dataset. Values of the metrics thus lack interpretability, as the gain in performance has to be compared against a model- and dataset-specific baseline.
Aiming to introduce perspective and interpretability into our study, we introduce the deep ensemble equivalent score that employs deep ensembles to measure the performance of other ensembling techniques. Specifically, the deep ensemble equivalent score answers the following question:
What size of deep ensemble yields the same performance as a particular ensembling method?
Following the insights from the previous sections, we base the deep ensemble equivalent on the calibrated log-likelihood (CLL). Formally speaking, we define the deep ensemble equivalent (DEE) for an ensembling method m and its upper and lower bounds as follows:
$\mathrm{DEE}_m(k) = \min\left\{ l \in \mathbb{R},\, l \ge 1 \,\middle|\, \mathrm{CLL}^{\mathrm{mean}}_{\mathrm{DE}}(l) \ge \mathrm{CLL}^{\mathrm{mean}}_{m}(k) \right\}$, (4)
$\mathrm{DEE}^{\mathrm{upper/lower}}_m(k) = \min\left\{ l \in \mathbb{R},\, l \ge 1 \,\middle|\, \mathrm{CLL}^{\mathrm{mean}}_{\mathrm{DE}}(l) \mp \mathrm{CLL}^{\mathrm{std}}_{\mathrm{DE}}(l) \ge \mathrm{CLL}^{\mathrm{mean}}_{m}(k) \right\}$, (5)
where $\mathrm{CLL}^{\mathrm{mean/std}}_{m}(l)$ are the mean and the standard deviation of the calibrated log-likelihood achieved by an ensembling method m with l samples. We compute $\mathrm{CLL}^{\mathrm{mean}}_{\mathrm{DE}}(l)$ and $\mathrm{CLL}^{\mathrm{std}}_{\mathrm{DE}}(l)$ for natural numbers l ∈ N>0 and use linear interpolation to define them for real values l ≥ 1 (a sketch of this computation is given below). In the following plots we report DEEm(k) for different methods m with different numbers of samples k, and shade the area between the respective lower and upper bounds DEElowerm(k) and DEEupperm(k)." }, { "heading": "4.2 EXPERIMENTS", "text": "We compute the deep ensemble equivalent (DEE) of various ensembling techniques for four popular deep architectures: VGG16 (Simonyan & Zisserman, 2014), PreResNet110/164 (He et al., 2016), and WideResNet28x10 (Zagoruyko & Komodakis, 2016) on the CIFAR-10/100 datasets (Krizhevsky et al., 2009), and ResNet50 (He et al., 2016) on the ImageNet dataset (Russakovsky et al., 2015). We use PyTorch (Paszke et al., 2017) for the implementation of these models, building upon available public implementations. Our implementation closely matches the quality of the methods reported in the original works. Technical details on training, hyperparameters and implementations can be found in Appendix B. The source code and all computed metrics are available on GitHub1.
As one can see in Figure 3, ensembling methods clearly fall into three categories. SSE and cSGLD outperform all other techniques except deep ensembles and enjoy a near-linear scaling of DEE with the number of samples on CIFAR datasets. The investigation of weight-space trajectories of cSGLD and SSE (Huang et al., 2017; Zhang et al., 2019) suggests that these methods can efficiently explore different modes of the loss landscape. In terms of the deep ensemble equivalent, these methods do not saturate, unlike other methods that are bound to a single mode. We found SSE to still saturate on ImageNet. This is likely due to suboptimal hyperparameters of the cyclic learning rate schedule. More detailed results are presented in Figures 11–13 and in Table 5 and Table 8 in Appendix C.
In our experiments SSE typically outperforms cSGLD. This is mostly due to the fact that SSE has a much larger training budget. The cycle lengths and learning rates of SSE and cSGLD are comparable; however, SSE collects one snapshot per cycle while cSGLD collects three snapshots. This makes samples from SSE less correlated with each other while increasing the training budget threefold. Both SSE and cSGLD can be adjusted to obtain a different trade-off between the training budget and the DEE-to-samples ratio. We reused the schedules provided in the original papers (Huang et al., 2017; Zhang et al., 2019).
1Source code: https://github.com/bayesgroup/pytorch-ensembles
Being more “local” methods, FGE and SWAG perform worse than SSE and cSGLD, but still significantly outperform “single-snapshot” methods like dropout, K-FAC Laplace approximation and variational inference. We hypothesize that by covering a single mode with a set of snapshots, FGE and SWAG provide a better fit for the local geometry than models trained as stochastic computation graphs. This implies that the performance of FGE and SWAG should be achievable by single-snapshot methods. However, one might need more elaborate posterior approximations and better inference techniques in order to match the performance of FGE and SWAG by training a stochastic computation graph end-to-end (as opposed to SWAG, which constructs a stochastic computation graph post-hoc).
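As an aside, a minimal sketch of the DEE computation from measured CLL curves (hypothetical; cll_de is assumed monotonically increasing, with cll_de[l-1] the mean CLL of a deep ensemble of l members):

import numpy as np

def deep_ensemble_equivalent(cll_de, cll_method_k):
    # Smallest (linearly interpolated) deep ensemble size l >= 1 whose
    # mean CLL reaches the CLL of the method of interest with k samples.
    sizes = np.arange(1, len(cll_de) + 1)
    if cll_method_k <= cll_de[0]:
        return 1.0
    if cll_method_k >= cll_de[-1]:
        return float(sizes[-1])  # capped at the largest measured ensemble
    return float(np.interp(cll_method_k, cll_de, sizes))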
The deep ensemble equivalent curves allow us to notice the common behaviour of different methods, e.g. the relation between deep ensembles, snapshot methods, advanced local methods and single-snapshot local methods. They also allow us to notice inconsistencies that may indicate a suboptimal choice of hyperparameters. For example, we find that SSE on ImageNet quickly saturates, unlike SSE on CIFAR datasets (Figure 3). This may indicate that the hyperparameters used on ImageNet are not good enough for efficient coverage of different modes of the loss landscape. We also find that SSE on WideResNet on CIFAR-10 achieves a DEE score of 100 with approx. 70 samples (Figure 12). This may indicate that the members of the deep ensemble for this dataset-architecture pair are underfitted and may benefit from longer training or a different learning rate schedule. Such inconsistencies might be more difficult to spot using plain calibrated log-likelihood plots." }, { "heading": "4.3 TEST-TIME DATA AUGMENTATION IMPROVES ENSEMBLES FOR FREE", "text": "Data augmentation is a time-honored technique that is widely used in deep learning, and is a crucial component for training modern DNNs. Test-time data augmentation has been used for a long time to improve the performance of convolutional networks. For example, multi-crop evaluation has been a standard procedure for the ImageNet challenge (Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2016). It is, however, often overlooked in the literature on ensembling techniques in deep learning. In this section, we study the effect of test-time data augmentation on the aforementioned ensembling techniques. To keep the test-time computation budget the same, we sample one random augmentation for each member of an ensemble (a sketch of this procedure is given below). Figure 5 reports the calibrated log-likelihood for combinations of ensembles and test-time data augmentation on ImageNet. Other metrics and results on the CIFAR-10/100 datasets are reported in Appendix C. We have used the standard data augmentation: random horizontal flips and random padded crops for the CIFAR-10/100 datasets, and random horizontal flips and random resized crops for ImageNet (see more details in Appendix B).
Test-time data augmentation (Figure 4) consistently improves most ensembling methods, especially on ImageNet, where we see a clear improvement across all methods (Figure 5 and Table 7). The performance gain for powerful ensembles (deep ensembles, SSE and cSGLD) on CIFAR datasets is not as dramatic (Figures 14–15 and Table 4). This is likely due to the fact that CIFAR images are small, making data augmentation limited, whereas images from ImageNet allow for a large number of diverse samples of augmented images. On the other hand, while the performance of “single-snapshot” methods (e.g. variational inference, K-FAC Laplace and dropout) is improved significantly, they perform approximately as well as an augmented version of a single model across all datasets.
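A minimal sketch of this combination (hypothetical PyTorch-style code, not the authors' implementation; members is a list of trained models and augment a random-augmentation transform applied anew for each member):

import torch

@torch.no_grad()
def predict_with_tta(members, images, augment):
    # One freshly sampled augmentation per ensemble member keeps the
    # test-time budget identical to that of the plain ensemble.
    probs = 0.0
    for model in members:
        model.eval()
        probs = probs + torch.softmax(model(augment(images)), dim=1)
    return probs / len(members)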
Test-time data augmentation breaks the nearly optimal temperature of deep ensembles and requires temperature scaling to reveal the actual performance of the method, as discussed in Section 3.1. The experiment demonstrates that ensembles may be highly miscalibrated by default while still providing superior predictive performance after calibration.\nWe would like to note that test-time data augmentation does not always break the calibration of an ensemble, and, on the contrary, test-time data augmentation often improves the calibration of an ensemble. In our experiments, decalibration was caused by the extreme magnitude of a random crop, that is conventionally used for ImageNet augmentation. Using less extreme magnitude of the random crop fixes decalibration, that makes test-time data augmentation a more practical method that provides out-of-the-box calibration. Although, as we demonstrated earlier, there is no guarantee that any ensemble is calibrated out-of-the-box. If we are willing to apply post-hoc calibration, the final performance can be much better with more severe augmentations." }, { "heading": "5 DISCUSSION & CONCLUSION", "text": "We have explored the field of in-domain uncertainty estimation and performed an extensive evaluation of modern ensembling techniques. Our main findings can be summarized as follows:\n• Temperature scaling is a must even for ensembles. While ensembles generally have better calibration out-of-the-box, they are not calibrated perfectly and can benefit from the procedure. A comparison of log-likelihoods of different ensembling methods without temperature scaling might not provide a fair ranking, especially if some models happen to be miscalibrated.\n• Many common metrics for measuring in-domain uncertainty are either unreliable (ECE and analogues) or cannot be used to compare different methods (AUC-ROC, AUC-PR for misclassification detection; accuracy-confidence curves). In order to perform a fair comparison of different methods, one needs to be cautious of these pitfalls.\n• Many popular ensembling techniques require dozens of samples for test-time averaging, yet are essentially equivalent to a handful of independently trained models. Deep ensembles dominate other methods given a fixed test-time budget. The results indicate, in particular, that exploration of different modes in the loss landscape is crucial for good predictive performance.\n• Methods that are stuck in a single mode are unable to compete with methods that are designed to explore different modes of the loss landscape. Would more elaborate posterior approximations and better inference techniques shorten this gap?\n• Test-time data augmentation is a surprisingly strong baseline for in-domain uncertainty estimation. It can significantly improve other methods without increasing training time or model size since data augmentation is usually already present during training.\nOur takeaways are aligned with the take-home messages of Ovadia et al. (2019) that relate to indomain uncertainty estimation. We also observe a stable ordering of different methods in our experiments, and observe that deep ensembles with few members outperform methods based on stochastic computation graphs.\nA large number of unreliable metrics inhibits a fair comparison of different methods. Because of this, we urge the community to aim for more reliable benchmarks in the numerous setups of uncertainty estimation." 
}, { "heading": "ACKNOWLEDGMENTS", "text": "Dmitry Vetrov and Dmitry Molchanov were supported by the Russian Science Foundation grant no. 19-71-30020. This research was supported in part through computational resources of HPC facilities at NRU HSE." }, { "heading": "B EXPERIMENTAL DETAILS", "text": "Implementations of deep ensembles, SWAG, FGE and K-FAC Laplace are heavily based on the original PyTorch implementations of stochastic weight averaging (SWA) 2 and SWAG 3. Implementations of cyclical MCMC and snapshot ensembles are based on the original implementation of cyclical MCMC 4. We hypothesize that the optimal hyperparameters of ensembling methods may vary widely depending on the computational budget and the number of samples in the ensemble. Searching for the optimal values for each configuration is outside the scope of this paper so we stick to the originally proposed hyperparameters whenever possible.\nImplied probabilistic model Conventional neural networks for classification are usually trained using the average cross-entropy loss function with weight decay regularization hidden inside an optimizer in a deep learning framework like PyTorch. The underlying optimization problem can be written as follows:\nL(w) = − 1 N N∑ i=1 log p̂(y∗i |xi, w) + λ 2 ‖w‖2 → min w , (6)\nwhere {(xi, y∗i )}Ni=1 is the training dataset of N objects xi with corresponding labels y∗i , λ is the weight decay scale and p̂(j |xi, w) denotes the probability that a neural network with parameters w assigns to class j when evaluated on object xi.\nThe cross-entropy loss defines a likelihood function p(y∗ |x,w) and weight decay regularization, or L2 regularization corresponding to a certain Gaussian prior distribution p(w). The whole optimization objective then corresponds to the maximum a posteriori inference in the following probabilistic model:\np(y∗, w |x) = p(y∗ |x,w)p(w), (7)\nlog p(y∗ |x,w) = log N∏ i=1 p(y∗i |xi, w) = N∑ i=1 log p̂(y∗i |xi, w), (8)\nlog p(w) = −Nλ\n2 ‖w‖2 + const ⇐⇒ p(w) = N\n( w ∣∣ 0, (Nλ)−1I) (9)\nIn order to make the results comparable across all ensembling techniques, we used the same probabilistic model for all methods, choosing fixed weight decay parameters for each architecture. We used the softmax-based likelihood for all models. We also use the fully-factorized zero-mean Gaussian prior distribution with variances σ2 = (Nλ)−1 where the number of objects N and the weight decay scale λ are dictated by the particular datasets and neural architectures as defined in the following paragraph.\nConventional networks To train a single network on CIFAR-10/100, we used SGD with batch size of 128, momentum 0.9 and model-specific parameters, i.e. the initial learning rate (lrinit), the weight decay coefficient (wd), and the number of optimization epochs (epoch). Specific hyperparameters are shown in Table 1. The models were trained with a unified learning rate scheduler that is shown in equation 10. All models have been trained using data augmentation that consists of horizontal flips and a random crop of 32 pixels with a padding of 4 pixels5. The standard data normalization has also been applied. Weight decays, initial learning rates, and the learning rate scheduler were taken from (Garipov et al., 2018) paper. Compared with hyperparameters of (Garipov et al., 2018), we increased the number of optimization epochs since we found that all models were underfitted. 
Conventional networks To train a single network on CIFAR-10/100, we used SGD with a batch size of 128, momentum 0.9 and model-specific parameters, i.e. the initial learning rate (lrinit), the weight decay coefficient (wd), and the number of optimization epochs (epochs). Specific hyperparameters are shown in Table 1. The models were trained with a unified learning rate scheduler that is shown in equation 10. All models have been trained using data augmentation that consists of horizontal flips and a random crop of 32 pixels with a padding of 4 pixels5. The standard data normalization has also been applied. Weight decays, initial learning rates, and the learning rate scheduler were taken from Garipov et al. (2018). Compared with the hyperparameters of Garipov et al. (2018), we increased the number of optimization epochs since we found that all models were underfitted. While the original WideResNet28x10 network includes a number of dropout layers with p = 0.3 and is trained for 200 epochs, we find that the WideResNet28x10 underfits in this setting and requires longer training. Therefore, we used p = 0, which reduces training time while bearing no significant effect on final model performance in our experiments.
2https://github.com/timgaripov/swa 3https://github.com/wjmaddox/swa_gaussian 4https://github.com/ruqizhang/csgmcmc/tree/master/experiments 5Compose([RandomHorizontalFlip(), RandomCrop(32, padding=4)])
$\mathrm{lr}(i) = \begin{cases} \mathrm{lr}_{\mathrm{init}}, & i \in [0,\, 0.5 \cdot \mathrm{epochs}] \\ \mathrm{lr}_{\mathrm{init}} \cdot (1.0 - 0.99 \cdot (i/\mathrm{epochs} - 0.5)/0.4), & i \in (0.5 \cdot \mathrm{epochs},\, 0.9 \cdot \mathrm{epochs}] \\ \mathrm{lr}_{\mathrm{init}} \cdot 0.01, & \text{otherwise} \end{cases}$ (10)
On the ImageNet dataset we used ResNet50 with default hyperparameters taken from the PyTorch examples 6. Specifically, we used SGD with momentum 0.9, a batch size of 256, initial learning rate 0.1, and weight decay 1e−4. Training included data augmentation7 (scaling, random crops of size 224 × 224, horizontal flips), normalization and the learning rate scheduler lr = lrinit · 0.1^(epoch // 30), where // denotes integer division. We only deviated from the standard parameters by increasing the number of training epochs from 90 to 130. Our models achieve a top-1 error of 23.81 ± 0.15, which closely matches the accuracy of the ResNet50 provided by PyTorch, which is 23.85 8. Training of one model on a single NVIDIA Tesla V100 GPU takes approximately 5.5 days.
Deep ensembles Deep ensembles (Lakshminarayanan et al., 2017) average the predictions across networks trained independently starting from different initializations. To obtain a deep ensemble we repeat the described procedure of training standard networks 128 times for all architectures on the CIFAR-10 and CIFAR-100 datasets (1024 networks overall) and 50 times for the ImageNet dataset. Every member of the deep ensembles was trained with exactly the same hyperparameters as conventional models of the same architecture.
Dropout Binary dropout (or MC dropout) (Srivastava et al., 2014; Gal & Ghahramani, 2016) is one of the most widely known ensembling techniques. It involves putting multiplicative Bernoulli noise with a parameter p over the activations of either a fully connected layer or a convolutional layer, averaging the predictions of the network w.r.t. the noise at test time. Dropout layers were applied to VGG and WideResNet networks on the CIFAR-10 and CIFAR-100 datasets. Dropout for VGG was applied to fully connected layers with p = 0.5. Two dropout layers were applied: one before the first fully connected layer and one before the second one. While the original version of VGG for CIFARs (Zagoruyko, 2015) exploits more dropout layers, we observed that any additional dropout layer deteriorates the performance of the model in either deterministic or stochastic mode. Dropout for WideResNet was applied in accordance with the original paper (Zagoruyko & Komodakis, 2016) with p = 0.3. Dropout usually increases the time needed to achieve convergence. Because of this, WideResNet networks with dropout were trained for 400 epochs instead of 300 epochs for the deterministic case, and VGG networks have always been trained with dropout. All the other hyperparameters were the same as in the case of conventional models.
Variational Inference Variational Inference (VI) approximates the true posterior distribution over weights p(w |Data) with a tractable variational approximation qθ(w) by maximizing the so-called variational lower bound L (eq. 11) w.r.t. the parameters θ of the variational approximation.
We used a fully-factorized Gaussian approximation q(w) and a Gaussian prior distribution p(w):
$\mathcal{L}(\theta) = \mathbb{E}_{q} \log p(y^* \mid x, w) - \mathrm{KL}(q_\theta(w) \,\|\, p(w)) \to \max_\theta$ (11)
$q(w) = \mathcal{N}(w \mid \mu, \mathrm{diag}(\sigma^2)), \quad p(w) = \mathcal{N}(w \mid 0, \mathrm{diag}(\sigma_p^2)), \ \text{where}\ \sigma_p^2 = (N \cdot \mathrm{wd})^{-1}$ (12)
6https://github.com/pytorch/examples/tree/ee964a2/imagenet 7Compose([RandomResizedCrop(224), RandomHorizontalFlip()]) 8https://pytorch.org/docs/stable/torchvision/models.html
In the case of such a prior, the probabilistic model remains consistent with the conventional training, which corresponds to MAP inference in the same probabilistic model. We used variational inference for both convolutional and fully-connected layers, where the variances of the weights were parameterized by log σ. For fully-connected layers we applied the local reparameterization trick (LRT; Kingma et al., 2015).
While variational inference provides a theoretically grounded way to approximate the true posterior, it tends to underfit deep learning models in practice (Kingma et al., 2015). The following tricks are applied to deal with this: pre-training (Molchanov et al., 2017), or equivalently annealing of β (Sønderby et al., 2016), and scaling β down (Kingma et al., 2015; Ullrich et al., 2017).
During pre-training we initialize µ with a snapshot of the weights of a pre-trained conventional model, and initialize log σ with a model-specific constant log σinit. The KL-divergence – except for the term corresponding to the weight decay – is scaled with a model-specific parameter β. The weight decay term is implemented as a part of the optimizer. We used the fact that the KL-divergence between two Gaussian distributions can be rewritten as two terms, one of which is equivalent to the weight decay regularization (this closed form is sketched below).
On CIFAR-10 and CIFAR-100 we used β equal to 1e-4 for the VGG, ResNet110 and ResNet164 networks, and β equal to 1e-5 for WideResNet. The log-variance log σinit was initialized with −5 for all models. The parameters µ were optimized with SGD in the same manner as in the case of conventional networks, except that the initial learning rate lrinit was set to 1e-3. We used a separate Adam optimizer with a constant learning rate of 1e-3 to optimize the log-variances of the weights log σ. Pre-training was done for 300 epochs, and after that the remaining part of training was done for 100 epochs. On ImageNet we used β = 1e-3, lrinit = 0.01, log σinit = −6, and trained the model for 45 epochs after pre-training.
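The closed form mentioned above is elementary; a minimal sketch (hypothetical NumPy snippet; the mu ** 2 part is the weight-decay-like term, since sigma_p^2 = (N · wd)^(-1)):

import numpy as np

def kl_gaussian(mu, log_sigma, sigma_p):
    # KL( N(mu, sigma^2) || N(0, sigma_p^2) ), summed over the weights.
    sigma2 = np.exp(2.0 * log_sigma)
    return np.sum(np.log(sigma_p) - log_sigma
                  + (sigma2 + mu ** 2) / (2.0 * sigma_p ** 2) - 0.5)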
K-FAC Laplace The Laplace approximation uses the curvature information of the appropriately scaled loss function to construct a Gaussian approximation to the posterior distribution. Ideally, one would use the inverse Hessian of the loss function as the covariance matrix and the maximum a posteriori estimate wMAP as the mean of the Gaussian approximation:
$\log p(w \mid x, y^*) = \log p(y^* \mid x, w) + \log p(w) + \mathrm{const}$ (13)
$w_{\mathrm{MAP}} = \arg\max_w \log p(w \mid x, y^*); \quad \Sigma = \left(-\nabla\nabla \log p(w \mid x, y^*)\right)^{-1}$ (14)
$p(w \mid x, y^*) \approx \mathcal{N}(w \mid w_{\mathrm{MAP}}, \Sigma)$ (15)
In order to keep the method scalable, we use the Fisher information matrix as an approximation to the true Hessian (Martens & Grosse, 2015). For K-FAC Laplace, we use the whole dataset to construct an approximation to the empirical Fisher information matrix, and use the π correction to reduce the bias (Ritter et al., 2018; Martens & Grosse, 2015). Following Ritter et al. (2018), we find the optimal noise scale for K-FAC Laplace on a held-out validation set by averaging across five random initializations. We then reuse this scale for networks trained without a hold-out validation set. We report the optimal values of the scales in Table 2. Note that the optimal scale differs depending on whether we use test-time data augmentation or not. Since the data augmentation also introduces some amount of additional noise, the optimal noise scale for K-FAC Laplace with data augmentation is lower.
Snapshot ensembles Snapshot ensembles (SSE) (Huang et al., 2017) are a simple example of an array of methods that collect samples from the training trajectory of a network in weight space to construct an ensemble. Samples are collected in a cyclical manner: during each cycle the learning rate goes from a large value to near-zero, and a snapshot of the weights of the network is taken at the end of the cycle. SSE uses SGD with a cosine learning rate schedule defined as follows:
$\alpha(t) = \frac{\alpha_0}{2} \left( \cos\left( \frac{\pi \, \mathrm{mod}(t-1, \lceil T/M \rceil)}{\lceil T/M \rceil} \right) + 1 \right)$, (16)
where α0 is the initial learning rate, T is the total number of training iterations and M is the number of cycles.
For all datasets and models, the hyperparameters from the original SSE paper are reused. For CIFAR-10/100 the length of the cycle is 40 epochs, the maximum learning rate is 0.2, and the batch size is 64. On ResNet50 and ImageNet the length of the cycle is 45 epochs, the maximum learning rate is 0.1, and the batch size is 256.
Cyclical SGLD Cyclical Stochastic Gradient Langevin Dynamics (cSGLD) (Zhang et al., 2019) is a state-of-the-art ensembling method for deep neural networks belonging to the stochastic Markov chain Monte Carlo family of methods. It bears similarity to SSE, e.g. it employs SGD with the learning rate schedule described by equation 16, and training is cyclical in the same manner. Its main differences from SSE are the introduction of gradient noise and the capturing of several snapshots per cycle, both of which can aid in sampling from the posterior distribution over neural network weights efficiently.
Some parameters from the original paper are reused: the length of the cycle is 50 epochs, the maximum learning rate is 0.5, and the batch size is 64. The number of epochs with gradient noise per cycle is 3 epochs. This was found to yield much higher predictive performance and better uncertainty estimation compared to the original paper's choice of 10 epochs for CIFAR-10 and 3 epochs for CIFAR-100.
Finally, the results of cyclical Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) (Zhang et al., 2019), which reportedly has marginally better performance compared with cyclical SGLD, could not be reproduced with any value of the SGD momentum term. Because of this, we only include cyclical SGLD in our benchmark.
FGE Fast Geometric Ensembling (FGE) is an ensembling method that is similar to SSE in that it collects weight samples from a training trajectory to construct an ensemble. Its main differences from SSE are pretraining, a short cycle length and a piecewise-linear learning rate schedule:
$\alpha(i) = \begin{cases} (1 - 2t(i))\,\alpha_1 + 2t(i)\,\alpha_2, & 0 < t(i) \le \frac{1}{2} \\ (2 - 2t(i))\,\alpha_2 + (2t(i) - 1)\,\alpha_1, & \frac{1}{2} < t(i) \le 1 \end{cases}$ (17)
Hyperparameters of the original implementation of FGE are reused. Model pretraining is done with SGD for 160 epochs according to the standard learning rate schedule described in equation 10 with maximum learning rates from Table 1. After that, a desired number of FGE cycles is done, with one snapshot per cycle collected. For VGG the learning rate is changed with parameters α1 = 1e−2, α2 = 5e−4 and a cycle length of 2 epochs. For other networks the learning rate is changed with parameters α1 = 5e−2, α2 = 5e−4 and a cycle length of 4 epochs; a transcription of both schedules is sketched below.
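A minimal transcription of the schedules in equations 16 and 17 (hypothetical Python; t is the SGD iteration and t_frac the position within the current FGE cycle):

import math

def sse_lr(t, alpha0, T, M):
    # Equation 16: cosine schedule restarting every ceil(T / M) iterations.
    cycle_len = math.ceil(T / M)
    return alpha0 / 2.0 * (math.cos(math.pi * ((t - 1) % cycle_len) / cycle_len) + 1.0)

def fge_lr(t_frac, alpha1, alpha2):
    # Equation 17: linearly from alpha1 down to alpha2 and back within a cycle.
    if t_frac <= 0.5:
        return (1 - 2 * t_frac) * alpha1 + 2 * t_frac * alpha2
    return (2 - 2 * t_frac) * alpha2 + (2 * t_frac - 1) * alpha1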
Batch size is 128.\nSWAG SWA-Gaussian (SWAG) (Maddox et al., 2019) is an ensembling method based on fitting a Gaussian distribution to model weights on the SGD training trajectory and sampling from this distribution to construct an ensemble.\nLike FGE, SWAG has a pretraining stage which is done according to the standard learning rate schedule described in equation 10 with maximum learning rates from Table 1. After that, training continues with a constant learning rate of 1e-2 for all models except for PreResNet110 and PreResNet164 on CIFAR-100 where it continues with a constant learning rate of 5e-2 in accordance with the original paper. Rank of the empirical covariance matrix which is used for estimation of Gaussian distribution parameters is set to be 20." }, { "heading": "C ADDITIONAL EXPERIMENTAL RESULTS", "text": "Error (%) Negative calibrated log-likelihood\nModel Method 1 5 10 100 1 5 10 100\nDropout 5.86±0.09 5.81±0.08 5.82±0.06 5.79±0.07 0.232±0.005 0.225±0.004 0.224±0.004 0.223±0.003 SWA-Gaussian 7.03±0.50 5.66±0.08 5.49±0.12 5.25±0.13 0.230±0.014 0.182±0.003 0.171±0.002 0.160±0.002 Cyclic SGLD 7.37±0.16 6.56±0.09 5.71±0.06 4.84±0.04 0.234±0.004 0.196±0.004 0.176±0.003 0.147±0.003 Fast Geometric Ens. 6.52±0.16 5.95±0.16 5.69±0.16 5.10±0.13 0.213±0.005 0.187±0.003 0.178±0.003 0.155±0.004\nVGG16 Deep Ensembles 5.95±0.14 4.79±0.11 4.57±0.07 4.39±NA 0.226±0.001 0.158±0.002 0.148±0.001 0.134±NA CIFAR-10 Single model 5.83±0.11 5.83±0.11 5.83±0.11 5.83±0.11 0.223±0.002 0.223±0.002 0.223±0.002 0.223±0.002\nVariational Inf. (FFG) 6.57±0.09 5.63±0.13 5.50±0.10 5.46±0.03 0.239±0.002 0.192±0.002 0.184±0.002 0.175±0.001 KFAC-Laplace 6.00±0.13 5.82±0.12 5.82±0.19 5.80±0.19 0.210±0.005 0.203±0.007 0.201±0.007 0.200±0.008 Snapshot Ensembles 7.76±0.22 5.52±0.13 5.00±0.10 4.54±0.05 0.247±0.005 0.176±0.001 0.160±0.001 0.137±0.001\nSWA-Gaussian 5.77±0.45 4.56±0.17 4.46±0.12 4.34±0.13 0.178±0.009 0.143±0.004 0.139±0.003 0.131±0.003 Cyclic SGLD 6.18±0.20 5.32±0.15 4.55±0.13 3.83±0.02 0.185±0.006 0.156±0.005 0.138±0.002 0.115±0.001 Fast Geometric Ens. 5.52±0.09 4.83±0.08 4.73±0.10 4.28±0.05 0.163±0.002 0.141±0.003 0.137±0.003 0.126±0.002\nResNet110 Deep Ensembles 4.66±0.11 3.77±0.11 3.63±0.07 3.53±NA 0.148±0.004 0.117±0.002 0.112±0.002 0.106±NA CIFAR-10 Single model 4.69±0.11 4.69±0.11 4.69±0.11 4.69±0.11 0.150±0.002 0.150±0.002 0.150±0.003 0.150±0.002\nVariational Inf. (FFG) 5.57±0.26 4.91±0.15 4.72±0.13 4.60±0.03 0.178±0.003 0.149±0.001 0.144±0.001 0.140±0.000 KFAC-Laplace 5.81±0.39 5.14±0.15 4.90±0.14 4.78±0.08 0.187±0.014 0.160±0.007 0.153±0.005 0.147±0.003 Snapshot Ensembles 8.41±0.27 4.85±0.11 4.16±0.16 3.52±0.10 0.252±0.006 0.153±0.002 0.132±0.002 0.107±0.001\nSWA-Gaussian 5.41±0.71 4.21±0.19 4.21±0.23 4.02±0.14 0.171±0.028 0.130±0.004 0.128±0.004 0.121±0.002 Cyclic SGLD 5.80±0.21 4.97±0.12 4.30±0.08 3.66±0.06 0.178±0.004 0.149±0.004 0.131±0.003 0.110±0.001 Fast Geometric Ens. 5.22±0.07 4.49±0.06 4.36±0.07 4.09±0.12 0.157±0.003 0.134±0.002 0.130±0.001 0.119±0.002\nResNet164 Deep Ensembles 4.53±0.11 3.51±0.09 3.50±0.06 3.34±NA 0.147±0.002 0.113±0.001 0.107±0.001 0.100±NA CIFAR-10 Single model 4.52±0.11 4.52±0.11 4.52±0.11 4.52±0.11 0.144±0.002 0.144±0.003 0.144±0.002 0.144±0.003\nVariational Inf. 
(FFG) 5.62±0.14 4.78±0.05 4.66±0.05 4.55±0.08 0.183±0.004 0.151±0.001 0.146±0.001 0.141±0.001 KFAC-Laplace 5.23±0.29 4.77±0.23 4.65±0.17 4.60±0.09 0.168±0.008 0.151±0.007 0.146±0.005 0.142±0.004 Snapshot Ensembles 8.06±0.10 4.50±0.04 3.89±0.09 3.50±0.05 0.241±0.004 0.144±0.003 0.124±0.002 0.104±0.001\nDropout 3.88±0.12 3.70±0.18 3.63±0.19 3.64±0.17 0.130±0.002 0.120±0.002 0.119±0.001 0.117±0.002 SWA-Gaussian 4.98±1.17 3.53±0.09 3.34±0.14 3.28±0.10 0.157±0.036 0.111±0.004 0.105±0.003 0.101±0.002 Cyclic SGLD 4.78±0.16 4.09±0.11 3.63±0.13 3.19±0.04 0.155±0.003 0.128±0.002 0.114±0.001 0.099±0.002 Fast Geometric Ens. 4.86±0.17 3.95±0.07 3.77±0.10 3.34±0.06 0.148±0.003 0.120±0.002 0.113±0.002 0.102±0.001\nWideResNet Deep Ensembles 3.65±0.02 3.11±0.10 3.01±0.06 2.83±NA 0.123±0.002 0.097±0.001 0.095±0.001 0.090±NA CIFAR-10 Single model 3.70±0.15 3.70±0.15 3.70±0.15 3.70±0.15 0.124±0.005 0.124±0.005 0.125±0.005 0.124±0.005\nVariational Inf. (FFG) 5.61±0.04 4.15±0.15 3.94±0.10 3.64±0.07 0.189±0.002 0.134±0.002 0.127±0.002 0.117±0.001 KFAC-Laplace 4.03±0.19 3.90±0.15 3.88±0.22 3.83±0.16 0.134±0.004 0.124±0.004 0.122±0.005 0.120±0.003 Snapshot Ensembles 5.56±0.15 3.68±0.09 3.33±0.10 2.89±0.07 0.179±0.005 0.119±0.001 0.105±0.001 0.090±0.001\nDropout 26.10±0.20 25.68±0.18 25.66±0.14 25.60±0.17 1.176±0.008 1.111±0.008 1.098±0.009 1.084±0.009 SWA-Gaussian 27.74±1.87 24.53±0.09 23.64±0.28 22.97±0.20 1.109±0.073 0.931±0.007 0.879±0.007 0.826±0.005 Cyclic SGLD 29.75±0.17 26.79±0.19 24.14±0.11 21.15±0.11 1.114±0.003 0.976±0.004 0.881±0.006 0.749±0.004 Fast Geometric Ens. 27.07±0.24 25.35±0.29 24.68±0.40 22.78±0.22 1.057±0.010 0.965±0.003 0.930±0.003 0.827±0.004\nVGG16 Deep Ensembles 25.72±0.17 21.60±0.13 20.79±0.16 19.88±NA 1.092±0.004 0.840±0.005 0.794±0.002 0.723±NA CIFAR-100 Single model 25.44±0.29 25.44±0.29 25.44±0.29 25.44±0.29 1.087±0.006 1.087±0.006 1.087±0.006 1.087±0.006\nVariational Inf. (FFG) 27.24±0.09 25.24±0.11 24.85±0.05 24.56±0.07 1.154±0.004 1.001±0.002 0.973±0.002 0.939±0.001 KFAC-Laplace 27.11±0.59 25.98±0.21 25.84±0.38 25.70±0.38 1.174±0.037 1.089±0.007 1.069±0.005 1.050±0.008 Snapshot Ensembles 31.19±0.33 23.87±0.18 22.31±0.31 21.03±0.10 1.170±0.012 0.899±0.004 0.834±0.005 0.751±0.003\nSWA-Gaussian 27.75±0.76 22.31±0.22 21.52±0.30 20.69±0.19 0.960±0.033 0.781±0.011 0.745±0.010 0.701±0.008 Cyclic SGLD 25.73±0.14 23.30±0.19 21.20±0.21 18.07±0.16 0.914±0.006 0.818±0.004 0.753±0.002 0.630±0.002 Fast Geometric Ens. 22.84±0.16 21.22±0.20 20.79±0.23 19.64±0.15 0.798±0.006 0.729±0.003 0.713±0.002 0.679±0.002\nResNet110 Deep Ensembles 22.55±0.28 18.30±0.22 17.59±0.21 16.97±NA 0.847±0.007 0.675±0.001 0.638±0.001 0.594±NA CIFAR-100 Single model 22.66±0.31 22.66±0.31 22.66±0.31 22.66±0.31 0.848±0.014 0.848±0.015 0.848±0.014 0.848±0.015\nVariational Inf. (FFG) 24.27±0.26 22.41±0.13 22.14±0.12 21.86±0.07 0.924±0.007 0.829±0.003 0.813±0.001 0.795±0.001 KFAC-Laplace 24.88±0.97 22.87±0.44 22.41±0.26 22.14±0.29 0.948±0.036 0.858±0.014 0.836±0.010 0.812±0.010 Snapshot Ensembles 30.30±0.40 22.83±0.23 21.13±0.14 18.48±0.25 1.069±0.006 0.820±0.003 0.761±0.002 0.662±0.002\nSWA-Gaussian 24.38±0.93 20.62±0.18 20.08±0.19 19.48±0.19 0.844±0.042 0.719±0.006 0.700±0.006 0.667±0.004 Cyclic SGLD 24.87±0.39 22.37±0.27 20.23±0.22 17.13±0.18 0.888±0.008 0.790±0.009 0.722±0.009 0.606±0.005 Fast Geometric Ens. 
21.92±0.15 20.10±0.22 19.87±0.25 18.73±0.25 0.765±0.003 0.699±0.004 0.686±0.004 0.650±0.003\nResNet164 Deep Ensembles 21.41±0.25 17.53±0.17 16.90±0.15 16.50±NA 0.819±0.008 0.647±0.003 0.615±0.002 0.574±NA CIFAR-100 Single model 21.39±0.40 21.39±0.40 21.39±0.40 21.39±0.40 0.817±0.014 0.817±0.014 0.817±0.014 0.817±0.014\nVariational Inf. (FFG) 23.47±0.26 21.35±0.11 21.10±0.16 20.82±0.04 0.910±0.001 0.801±0.002 0.782±0.002 0.762±0.000 KFAC-Laplace 23.44±0.45 21.77±0.20 21.29±0.23 21.03±0.38 0.902±0.019 0.813±0.006 0.792±0.005 0.772±0.007 Snapshot Ensembles 29.48±0.19 21.92±0.18 20.27±0.23 17.68±0.07 1.045±0.005 0.789±0.005 0.729±0.004 0.634±0.003\nDropout 20.19±0.11 19.41±0.17 19.36±0.12 19.22±0.15 0.823±0.008 0.768±0.005 0.760±0.006 0.751±0.005 SWA-Gaussian 20.45±0.73 17.57±0.17 17.21±0.22 17.08±0.19 0.794±0.025 0.653±0.004 0.634±0.005 0.614±0.005 Cyclic SGLD 21.42±0.32 19.42±0.28 17.88±0.16 16.29±0.10 0.813±0.010 0.713±0.009 0.654±0.005 0.583±0.004 Fast Geometric Ens. 21.48±0.31 18.54±0.16 18.00±0.19 17.12±0.16 0.770±0.007 0.652±0.006 0.630±0.006 0.596±0.003\nWideResNet Deep Ensembles 19.38±0.20 16.55±0.08 16.17±0.15 15.77±NA 0.797±0.007 0.623±0.003 0.595±0.003 0.571±NA CIFAR-100 Single model 19.31±0.24 19.31±0.24 19.31±0.24 19.31±0.24 0.797±0.010 0.797±0.010 0.797±0.010 0.797±0.010\nError (%) Negative calibrated log-likelihood\nModel Method 1 5 10 50 1 5 10 50\nFast Geometric Ens. 23.71±0.00 23.61±0.00 23.56±0.00 23.28±0.00 0.929±0.000 0.921±0.000 0.916±0.000 0.904±0.000 Deep Ensembles 23.79±0.14 21.19±0.14 20.90±0.08 20.63±NA 0.935±0.007 0.823±0.002 0.805±0.000 0.788±NA\nResNet50 Single model 23.86±0.20 23.86±0.20 23.86±0.20 23.86±0.20 0.938±0.006 0.938±0.006 0.938±0.006 0.938±0.006 Variational Inf. (FFG) 24.50±0.06 23.82±0.03 23.77±0.04 23.67±0.00 0.957±0.001 0.927±0.000 0.923±0.001 0.920±0.000 KFAC-Laplace 25.01±0.49 24.19±0.29 23.93±0.20 23.86±0.16 0.988±0.022 0.948±0.013 0.939±0.011 0.934±0.008 Snapshot Ensembles 24.92±NA 22.21±NA 21.75±NA 21.48±NA 0.983±NA 0.865±NA 0.843±NA 0.830±NA" } ]
2021
null
SP:f68087fb27b761dc1b71889ab84723526b621a6c
[ "This work proposes a framework for solving de-mixing problems. The hard constraints from human inputs about a specific problem are relaxed into continuous constraints (the \"slow\" reasoning part), and a reconstruction loss measures the fitness of the inferred labels with the observations (the \"fast\" pattern recognition part). Due to the relaxation inference becomes an optimization problem, and on a Sudoku task and a crystal-structure-phase-mapping recovery task (both de-mixing tasks), the proposed method gets very good performance (100% for all Sudoku tasks including one in the appendix).", "This paper proposes a new encoder-decoder framework that combines prior knowledge-based regularization and constrained reconstruction for unsupervised and weakly-supervised classification in structure rich scenarios. This framework injects prior knowledge in the form of relaxed constraints that act as regularization during the training of the encoder network. Some of the constraints concern sets of training examples. In this case, the paper proposes corresponding sampling schemes. Three experiments demonstrate the efficacy of the model. The first is a synthetically created 4x4 Sudoku made of overlaid MNIST digits. The other two are based on predicting crystal structures from x-ray diffraction measurements. Here, the first experiment is on simulated data for the Al-Li-Fe oxide system, while the other is performed on real measurements for the Bi-Cu-V oxide system." ]
We introduce Deep Reasoning Networks (DRNets), an end-to-end framework that combines deep learning with reasoning for solving pattern de-mixing problems, typically in an unsupervised or weakly-supervised setting. DRNets exploit problem structure and prior knowledge by tightly combining logic and constraint reasoning with stochastic-gradient-based neural network optimization. We illustrate the power of DRNets on de-mixing overlapping hand-written Sudokus (Multi-MNIST-Sudoku) and on a substantially more complex task in scientific discovery that concerns inferring crystal structures of materials from X-ray diffraction data (Crystal-Structure-Phase-Mapping). DRNets significantly outperform the state of the art and experts' capabilities on Crystal-Structure-Phase-Mapping, recovering more precise and physically meaningful crystal structures. On Multi-MNIST-Sudoku, DRNets perfectly recovered the mixed Sudokus' digits, with 100% digit accuracy, outperforming the supervised state-of-the-art MNIST de-mixing models.
[]
[ { "authors": [ "Saeed Amizadeh", "Sergiy Matusevych", "Markus Weimer" ], "title": "Pdp: A general neural framework for learning constraint satisfaction solvers", "venue": "arXiv preprint arXiv:1903.01969,", "year": 2019 }, { "authors": [ "Junwen Bai", "Johan Bjorck", "Yexiang Xue", "Santosh K Suram", "John Gregoire", "Carla Gomes" ], "title": "Relaxation methods for constrained matrix factorization problems: solving the phase mapping problem in materials discovery", "venue": "In International Conference on AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems,", "year": 2017 }, { "authors": [ "Junwen Bai", "Sebastian Ament", "Guillaume Perez", "John Gregoire", "Carla Gomes" ], "title": "An efficient relaxed projection method for constrained non-negative matrix factorization with application to the phase-mapping problem in materials science", "venue": "In International Conference on the Integration of Constraint Programming,", "year": 2018 }, { "authors": [ "Stefano Ermon", "Ronan Le Bras", "Santosh K Suram", "John M Gregoire", "Carla P Gomes", "Bart Selman", "Robert B Van Dover" ], "title": "Pattern decomposition with complex combinatorial constraints: Application to materials discovery", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Kuzman Ganchev", "Jennifer Gillenwater", "Ben Taskar" ], "title": "Posterior regularization for structured latent variable models", "venue": "Journal of Machine Learning Research,", "year": 2001 }, { "authors": [ "Artur d’Avila Garcez", "Marco Gori", "Luis C Lamb", "Luciano Serafini", "Michael Spranger", "Son N Tran" ], "title": "Neural-symbolic computing: An effective methodology for principled integration of machine learning and reasoning", "venue": null, "year": 1905 }, { "authors": [ "Carla P Gomes", "Bart Selman", "Henry Kautz" ], "title": "Boosting combinatorial search through randomization", "venue": "AAAI/IAAI, 98:431–437,", "year": 1998 }, { "authors": [ "Gordon Royle" ], "title": "Deep residual learning for image recognition", "venue": "Minimum sudoku,", "year": 2014 }, { "authors": [ "Geoffrey E Hinton", "Zoubin Ghahramani", "Yee Whye Teh" ], "title": "Learning to parse images", "venue": "In Advances in neural information processing systems,", "year": 2000 }, { "authors": [ "Zhiting Hu", "Xuezhe Ma", "Zhengzhong Liu", "Eduard Hovy", "Eric Xing" ], "title": "Harnessing deep neural networks with logic rules", "venue": "arXiv preprint arXiv:1603.06318,", "year": 2016 }, { "authors": [ "Zhiting Hu", "Zichao Yang", "Ruslan Salakhutdinov", "Eric Xing" ], "title": "Deep neural networks with massive learned knowledge", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Longlong Jing", "Yingli Tian" ], "title": "Self-supervised visual feature learning with deep neural networks: A survey", "venue": "arXiv preprint arXiv:1902.06162,", "year": 2019 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Ronan Le Bras", "Richard Bernstein", "John M Gregoire", "Santosh K Suram", "Carla P Gomes", "Bart Selman", "R Bruce Van Dover" ], 
"title": "Challenges in materials discovery–synthetic generator and real datasets", "venue": "In Twenty-Eighth AAAI Conference on Artificial Intelligence,", "year": 2014 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "CJ Long", "D Bunker", "X Li", "VL Karen", "I Takeuchi" ], "title": "Rapid identification of structural phases in combinatorial thin-film libraries using x-ray diffraction and non-negative matrix factorization", "venue": "Review of Scientific Instruments,", "year": 2009 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "David Mitchell", "Bart Selman", "Hector Levesque" ], "title": "Hard and easy distributions of sat problems", "venue": "In AAAI,", "year": 1992 }, { "authors": [ "Rasmus Berg Palm", "Ulrich Paquet", "Ole Winther" ], "title": "Recurrent relational networks for complex relational reasoning", "venue": "arXiv preprint arXiv:1711.08028,", "year": 2017 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Trevor Darrell" ], "title": "Constrained convolutional neural networks for weakly supervised segmentation", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Herbert Robbins", "Sutton Monro" ], "title": "A stochastic approximation method", "venue": "In Herbert Robbins Selected Papers,", "year": 1985 }, { "authors": [ "Sara Sabour", "Nicholas Frosst", "Geoffrey E Hinton" ], "title": "Dynamic routing between capsules", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Daniel Selsam", "Matthew Lamm", "Benedikt Bünz", "Percy Liang", "Leonardo de Moura", "David L Dill" ], "title": "Learning a sat solver from single-bit supervision", "venue": "arXiv preprint arXiv:1802.03685,", "year": 2018 }, { "authors": [ "Radhika Shivhare", "Ch Aswani Kumar" ], "title": "On the cognitive process of abstraction", "venue": "Procedia Computer Science,", "year": 2016 }, { "authors": [ "Valentin Stanev", "Velimir V Vesselinov", "A Gilad Kusne", "Graham Antoszewski", "Ichiro Takeuchi", "Boian S Alexandrov" ], "title": "Unsupervised phase mapping of x-ray diffraction data by nonnegative matrix factorization integrated with custom clustering", "venue": "npj Computational Materials,", "year": 2018 }, { "authors": [ "Po-Wei Wang", "Priya L Donti", "Bryan Wilder", "Zico Kolter" ], "title": "Satnet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver", "venue": null, "year": 1905 }, { "authors": [ "Bing Xu", "Naiyan Wang", "Tianqi Chen", "Mu Li" ], "title": "Empirical evaluation of rectified activations in convolutional network", "venue": "arXiv preprint arXiv:1505.00853,", "year": 2015 }, { "authors": [ "Jingyi Xu", "Zilu Zhang", "Tal Friedman", "Yitao Liang", "Guy Van den Broeck" ], "title": "A semantic loss function for deep learning with symbolic knowledge", "venue": "arXiv preprint arXiv:1711.11157,", "year": 2017 }, { "authors": [ "Yexiang Xue", "Junwen Bai", "Ronan Le Bras", "Brendan Rappazzo", "Richard Bernstein", "Johan Bjorck", "Liane Longpre", "Santosh K Suram", "Robert B van Dover", "John Gregoire" ], "title": "Phase-mapper: an ai platform to accelerate high throughput materials discovery", "venue": "In Twenty-Ninth IAAI Conference,", 
"year": 2017 }, { "authors": [ "Jiaxuan You", "Rex Ying", "Xiang Ren", "William L Hamilton", "Jure Leskovec" ], "title": "Graphrnn: Generating realistic graphs with deep auto-regressive models", "venue": "arXiv preprint arXiv:1802.08773,", "year": 2018 }, { "authors": [ "Ning Zhang", "Junchi Yan", "Yuchen Zhou" ], "title": "Weakly supervised audio source separation via spectrum", "venue": null, "year": 2020 }, { "authors": [ "Xingyi Zhou", "Qixing Huang", "Xiao Sun", "Xiangyang Xue", "Yichen Wei" ], "title": "Weaklysupervised transfer", "venue": "European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Sabour" ], "title": "MULTI-MNIST-SUDOKU For Multi-MNIST-Sudoku, we compared DRNets with CapsuleNet (Sabour et al., 2017) and ResNet (He et al., 2016)", "venue": null, "year": 2017 }, { "authors": [ "Le Bras" ], "title": "ICDD stick patterns and the physical model", "venue": null, "year": 2014 }, { "authors": [ "Royle" ], "title": "Sudoku instance has 24 to 32 (uniformly distributed) known cells", "venue": null, "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning has achieved tremendous success in areas such as vision, speech recognition, language translation, and autonomous driving. Nevertheless, certain limitations of deep learning are generally recognized, in particular, limitations due to the fact that deep learning approaches heavily depend on the availability of large amounts of labeled data. In certain domains, such as scientific discovery, it is often the case that scientists don’t have large amounts of labeled data and instead have to rely on prior knowledge to make sense of the data. One grand challenge in scientific discovery is to perform high-throughput unsupervised interpretation of scientific data, given its exponential growth in generation rates, dramatically outpacing humans’ ability to analyze them. Herein we consider pattern de-mixing problems, which involve decomposing a mixed signal into the collection of source patterns, such as separating mixtures of X-ray diffraction (XRD) signals into the source XRD signals of the corresponding crystal structures, a key challenge in materials discovery. More generally, pattern de-mixing problems are pervasive in scientific areas as diverse as biology, astronomy, and materials science, as well as in commercial applications for e.g., healthcare and music.\nWe propose Deep Reasoning Networks (DRNets), an end-to-end framework that combines deep learning with logical and constraint reasoning for solving unsupervised or very-weakly-supervised pattern de-mixing tasks. We illustrate the power of DRNets for disentangling two overlapping handwritten Sudokus (Multi-MNIST-Sudoku) (see Fig.1) and for solving a substantially more complex de-mixing task in scientific discovery that concerns inferring crystal structures of materials from X-ray diffraction data, which we refer to as Crystal-Structure-Phase-Mapping. Both de-mixing tasks require probabilistic reasoning to interpret noisy and uncertain data, while satisfying a set of rules: Sudoku rules and thermodynamic rules, respectively. For example, de-mixing hand written digits is challenging, but it becomes more feasible when we reason about the prior knowledge concerning the two overlapping Sudokus. Crystal structure phase mapping is yet substantially more complex. In fact, crystal structure phase mapping easily becomes too complex for experts to solve and is a major bottleneck in high-throughput materials discovery. DRNets are inspired and motivated by problems from scientific discovery, such as crystal structure phase mapping.\nOur contributions: (1) We introduce Deep Reasoning Networks (DRNets), an end-to-end framework that combines deep learning with logical and constraint reasoning for unsupervised or veryweakly-supervised de-mixing tasks. Specifically, DRNets perform end-to-end deep reasoning by\nencoding a latent space of the input data that captures the structure and prior knowledge constraints within and among data points (Fig.2). The latent space is used by a generative decoder to generate the targeted output, which should be consistent with the input data and prior knowledge. Subsequently, DRNets optimize an objective function capturing the overall problem objective as well as prior knowledge in the form of weighted constraints. 
(2) To instantiate the logical constraints in DRNets, we introduce a group of entropy-based continuous relaxations that use probabilistic modeling to encode general discrete constraints, including sparsity, cardinality, and so-called All-Different constraints. To optimize those constraints, we introduce a variant of the standard SGD method (Robbins & Monro, 1985) called constraint-aware stochastic gradient descent, which batches data points involved in the same constraint component together and dynamically adjusts the constraints' weights as a function of their satisfiability. In the following sections, we show how to encode Multi-MNIST-Sudoku and Crystal-Structure-Phase-Mapping as DRNets, by properly defining the structure of the latent space, additional reasoning modules to model the problem constraints (prior knowledge), and the components of the objective function. De facto, these examples illustrate how to develop "gadgets" to encode a variety of constraints and prior knowledge in DRNets. (3) We demonstrate the potential of DRNets on two de-mixing tasks with detailed experimental results. We show how (3.1) DRNets significantly outperformed the state of the art and human experts on Crystal-Structure-Phase-Mapping instances, recovering more precise, interpretable, and physically meaningful crystal structure pattern decompositions. In this task, DRNets solve a previously unsolved chemical system, which subsequently led to the discovery of a new material that is important for solar fuels technology. (3.2) On Multi-MNIST-Sudoku instances, without direct supervision, DRNets perfectly recovered the digits in the mixed Sudokus with 100% digit accuracy, outperforming the supervised state-of-the-art MNIST de-mixing models, including CapsuleNet (Sabour et al., 2017) and ResNet (He et al., 2016)." }, { "heading": "2 RELATED WORK", "text": "DRNets have been motivated by scientific tasks, such as crystal phase mapping, that involve identifying or de-mixing patterns in data that satisfy prior scientific knowledge. In general, for such tasks there are no labeled datasets, so our work focuses on unsupervised or weakly-supervised learning using prior knowledge.

Most closely related work: Unsupervised or weakly supervised de-mixing approaches. Pattern de-mixing approaches have been developed under the name of source separation in the signal processing community. The unsupervised methods in this area mostly try to solve the de-mixing problem, which is in general ill-posed, using different regularizations. Among existing methods, recent work on weakly supervised audio source separation (Zhang et al., 2017) is most related to DRNets since they also employed a generative adversarial network (GAN) in their model. However, their model mainly employs the discriminator of the GAN to judge the realism of the separated sources, while DRNets only utilize the generator of the GAN as the generative model of possible sources. Moreover, the weakly supervised setting in their paper is actually too strong: they need the true labels of the mixed sources, which is almost the goal of our tasks; therefore, their approach is not applicable to our settings. We now consider the state-of-the-art models for the tasks considered in this paper. For Crystal-Structure-Phase-Mapping, due to the lack of labeled datasets, existing models (Ermon et al., 2015; Xue et al., 2017; Bai et al., 2017; 2018; Stanev et al., 2018) are mainly based on non-negative matrix factorization (NMF), which is in general unsupervised.
Stanev et al. (2018) proposed the NMF-k algorithm, which applies a customized clustering process over the results of thousands of runs of the pure NMF algorithm (Long et al., 2009) to cluster the common phase patterns. However, NMF-k does not enforce prior knowledge (namely, thermodynamic rules) and therefore the solutions produced are often not completely physically meaningful. To address this limitation, several approaches have been developed that use external mixed-integer programming modules to interact with the NMF de-mixing module to enforce prior knowledge (Ermon et al., 2015; Bai et al., 2017; 2018). However, the coordination barrier between the NMF de-mixing module and the reasoning module often results in inferior performance, where the solution satisfies the constraints at the cost of a huge reconstruction loss. In contrast to existing models, DRNets seamlessly integrate the pattern de-mixing module and the reasoning module, recovering an almost exact ground-truth decomposition. In our experiments we thoroughly compare DRNets' performance against the state of the art (IAFD and NMF-k) for crystal-structure pattern de-mixing. MNIST de-mixing was first studied by Hinton et al. in 2000, where the aim is to identify or de-mix overlapping digits coming from the MNIST dataset (LeCun et al., 1998). More recently, it has been tackled with state-of-the-art neural network models such as CapsuleNet (Sabour et al., 2017) and ResNet (He et al., 2016). Existing works concerning this task are mainly in supervised settings, where labels of the digits are available for each overlapping image. However, in this paper, we aim to tackle this task in a weakly supervised setting, where we only have access to the prototypes of single digits and the extra Sudoku rules. Due to the lack of existing models with the same setting, we compared DRNets' performance against the state-of-the-art supervised models (CapsuleNet and ResNet). By utilizing the supervision from prior knowledge and reasoning, we show that DRNets outperformed all supervised models with 100% digit accuracy.

Enhancing deep learning with symbolic prior knowledge. Exploiting problem structure and reasoning about prior knowledge has been of increasing interest to facilitate deep learning (Garcez et al., 2019). In computer vision, symmetry constraints, bone-length constraints, and linear constraints were introduced for human pose estimation (Zhou et al., 2017; 2016) and image segmentation (Pathak et al., 2015) to regularize the output and enhance generalization. In natural language processing, Hu et al. (2016a;b) introduced the posterior regularization framework (Ganchev et al., 2010) into deep learning to incorporate rule-based grammatical knowledge using first-order logic. Xu et al. (2017) proposed a semantic loss function to enforce propositional logic constraints on the output of neural networks for semi-supervised multi-class classification tasks. Wang et al. (2019) proposed SATNet, which approximately encodes a MAXSAT solver into a neural network layer called the SATNet layer, to explicitly learn logical structures (e.g., the parity function and Sudoku) from labeled training data.

Previous works in this area primarily focus on supervised or semi-supervised settings for data-rich domains, where direct supervision from labels reduces the importance of explicitly reasoning about prior knowledge.
In contrast, with an unsupervised setting, the supervision of DRNets comes from reasoning about prior knowledge and self-reconstruction, which is strongly desired for problems in scientific discovery due to the lack of labeled datasets, and strongly motivated by extensive prior knowledge from sources ranging from fundamental principles to the intuitive experience of scientists.

Among existing works, SATNet is most closely related to DRNets in the sense of bridging logical reasoning with deep learning. However, SATNet is essentially designed for learning logical structures (prior knowledge) from labeled training examples, while DRNets aim to facilitate unsupervised learning with known logical constraints. In terms of the encoding of the reasoning module, the semantic loss (Xu et al., 2017) is most closely related to ours. However, the semantic loss encodes constraints by propositional logic, which requires enumerating all possible Boolean assignments that satisfy the constraints. Consequently, the semantic loss has to enumerate a large number of assignments to encode constraints such as k-sparsity constraints and All-Different constraints, which makes it inapplicable to the tasks considered in this paper." }, { "heading": "3 DEEP REASONING NETWORKS", "text": "DRNets (see Fig.2) are inspired by human thinking (Shivhare & Kumar, 2016): we abstract patterns to higher-level descriptions and combine them with prior knowledge to fill in the gaps. Consider the Multi-MNIST-Sudoku example (Fig.1): we first guess the digits in each cell based on the patterns; we then re-adjust our initial beliefs and re-imagine the overlapping patterns by reasoning about Sudoku rules and comparing them to the original ones, potentially over several iterations. Analogously, in a reasoning system, an inference procedure derives what follows from an initial set of axioms and rules. For example, in a standard 9x9 Sudoku, an inference procedure identifies the missing cell values of the input Sudoku. A constraint solver is a particular type of reasoning system in which axioms and rules are expressed as constraints and the inference procedure is a search method. Formally, DRNets formulate unsupervised pattern de-mixing as a data-driven constrained optimization, incorporating abstractions and reasoning about structure and prior knowledge:

min_θ (1/N) ∑_{i=1}^{N} L(G(φ_θ(x_i)), x_i)   s.t.   φ_θ(x_i) ∈ Ω_local and (φ_θ(x_1), ..., φ_θ(x_N)) ∈ Ω_global    (1)

In this formulation, x_i ∈ R^n is the i-th n-dimensional input data point, φ_θ(·) is the function of the encoder in DRNets parameterized by θ, G(·) denotes the generative decoder, L(·,·) is the loss function (e.g., evaluating the reconstruction of patterns), and Ω_local and Ω_global are the constrained spaces w.r.t. a single input data point and several input data points, respectively. G(·) is in general a fixed pre-trained or parametric model. For example, in Multi-MNIST-Sudoku, G(·) is a conditional GAN (Mirza & Osindero, 2014) pre-trained on handwritten digits, and for Crystal-Structure-Phase-Mapping, G(·) is a Gaussian mixture model. Note that constraints can involve several (potentially all) data points: e.g., in Sudoku, all digits should form a valid Sudoku, and in crystal-structure phase mapping, all data points in a composition graph should form a valid phase diagram.
Thus, we specify local and global constraints in DRNets – local constraints only involve a single input data point, whereas global constraints involve several input data points, and they are optimized using different strategies.

Solving the constrained optimization problem (1) directly is extremely challenging, since the objective function in general involves deep neural networks, which are highly non-linear and non-convex, and the prior knowledge often even involves combinatorial constraints (Fig.3). Therefore, we use Lagrangian relaxation to approximate equation (1) with an unconstrained optimization problem, i.e.,

min_θ (1/N) ∑_{i=1}^{N} [ L(G(φ_θ(x_i)), x_i) + λ^l ψ^l(φ_θ(x_i)) ] + ∑_{j=1}^{N_g} λ^g_j ψ^g_j({φ_θ(x_k) | k ∈ S_j})    (2)

N is the number of input data points, N_g denotes the number of global constraints, S_j denotes the set of indices of the data points involved in the j-th global constraint, and ψ^l, ψ^g_j denote the penalty functions for the local and global constraints, respectively, along with their corresponding penalty weights λ^l and λ^g_j. In the following, we propose two mechanisms to tackle the above unconstrained optimization task (Fig.3).

Continuous Relaxation: Prior knowledge often involves combinatorial constraints with discrete variables that are difficult to optimize in an end-to-end manner using gradient-based methods. Therefore, we need to design proper continuous relaxations for discrete constraints to make the overall objective function differentiable. Existing works (Hu et al., 2016a; Xu et al., 2017) proposed several relaxations for injecting first-order logic and propositional logic into deep learning. However, limited by the expressive power of those logic formulas, a large number of logical terms is needed to express constraints such as k-sparsity constraints or All-Different constraints. Therefore, to instantiate DRNets for our tasks, we propose a group of entropy-based continuous relaxations to encode general discrete constraints such as sparsity, cardinality, and All-Different constraints (see Fig.4). We construct the continuous relaxations based on probabilistic modelling of the discrete variables, where we model a probability distribution over all possible values of each discrete variable. For example, in Multi-MNIST-Sudoku, a way of encoding the two possible digits in the cell indicated by data point x_i (one from {1...4} and the other from {5...8}) is to use 8 binary variables e_{i,j} ∈ {0, 1}, while requiring ∑_{j=1}^{4} e_{i,j} = 1 and ∑_{j=5}^{8} e_{i,j} = 1. In DRNets, we model probability distributions P_i and Q_i over digits 1 to 4 and 5 to 8, respectively: P_{i,j} and Q_{i,j} (j = 1...4) denote the probability of digit j and the probability of digit j+4, respectively. We approximate the cardinality constraint on e_{i,j} by minimizing the entropy of P_i and Q_i, which encourages P_i and Q_i to collapse to one value. Another combinatorial constraint in Multi-MNIST-Sudoku is the All-Different constraint, where all the cells in a constrained set S, i.e., each row, column, and any of the four 2x2 boxes involving the corner cells, must be filled with non-repeating digits. For a probabilistic relaxation of the All-Different constraint, we analogously define the entropy of the averaged digit distribution over all cells in a constrained set S, i.e., H(P̄_S):

H(P̄_S) = −∑_{j=1}^{4} P̄_{S,j} log P̄_{S,j} = −∑_{j=1}^{4} ((1/|S|) ∑_{i∈S} P_{i,j}) log((1/|S|) ∑_{i∈S} P_{i,j})    (3)

In this equation, a larger value implies that the digits in the cells of S are distributed more uniformly. A minimal sketch of these entropy-based penalties follows.
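To make the relaxations concrete, here is a minimal PyTorch sketch (our illustration, not the authors' released code) of the cardinality and All-Different penalty terms for the digits-1-to-4 distributions P of one 4x4 Sudoku; the Q distributions and the four 2x2 corner boxes are handled analogously, and the weights 0.01 and 1.0 are the initial values reported in Appendix A.4.1.

import torch

def entropy(p, dim=-1, eps=1e-12):
    # Shannon entropy of (a batch of) categorical distributions.
    return -(p * (p + eps).log()).sum(dim=dim)

# 16 cells x 4 digits: each row is a cell's distribution P_i over digits 1-4.
logits = torch.randn(16, 4, requires_grad=True)
P = torch.softmax(logits, dim=-1)

# Cardinality relaxation: minimize H(P_i) so each cell collapses to one digit.
cardinality_loss = entropy(P).mean()

# All-Different relaxation: maximize H of the averaged distribution over each
# constrained set S (here the rows and columns of the 4x4 grid; boxes omitted).
grid = P.view(4, 4, 4)
sets = [grid[r] for r in range(4)] + [grid[:, c] for c in range(4)]
all_different_loss = -sum(entropy(s.mean(dim=0)) for s in sets) / len(sets)

loss = 0.01 * cardinality_loss + 1.0 * all_different_loss
loss.backward()  # gradients flow back to the encoder that produced the logits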
Thus, we can analogously approximate All-Different constraints by maximizing H(P̄_S) and H(Q̄_S). One can see that, by minimizing all H(P_i) and H(Q_i) to 0 as well as maximizing all H(P̄_S) and H(Q̄_S) to log|S|, we find a valid solution for the two 4x4 Sudoku puzzles, where all P_{i,j} are either 0 or 1. We also relax k-sparsity constraints, which, for example, in Crystal-Structure-Phase-Mapping state the maximum number k of pure phases in an XRD pattern, by minimizing the entropy of the phase distribution P_M below a threshold c < log k. We choose the threshold c < log k because the entropy of a discrete distribution P_M concentrated on at most k values cannot exceed log k. Note that other relaxations can be adapted in DRNets, for these and other tasks. See also additional relaxations (e.g., for SAT constraints), detailed relaxation derivations, and implementation details in the supplementary material.

Constraint-Aware Stochastic Gradient Descent: We introduce a variant of the standard SGD method called constraint-aware SGD, which is conceptually similar to the optimization process in GraphRNN (You et al., 2018), to tackle the optimization of the global penalty functions ψ^g_j({φ_θ(x_k) | k ∈ S_j}), which involve several (potentially all) data points. We define a constraint graph, an undirected graph in which each data point forms a vertex and two data points are linked if they are in the same global constraint.

Algorithm 1 Constraint-aware stochastic gradient descent optimization of deep reasoning networks.
Input: (i) Data points {x_i}_{i=1}^{N}. (ii) Constraint graph. (iii) Penalty functions ψ^l(·) and ψ^g_j(·) for the local and the global constraints. (iv) Pre-trained or parametric generative decoder G(·).
1: Initialize the penalty weights λ^l, λ^g_j and thresholds for all constraints.
2: for number of optimization iterations do
3: Batch data points {x_1, ..., x_m} from the sampled (maximal) connected components.
4: Collect the global penalty functions {ψ^g_j(·)}_{j=1}^{M} concerning those data points.
5: Compute the latent space {φ_θ(x_1), ..., φ_θ(x_m)} from the encoder.
6: Adjust the penalty weights λ^l, λ^g_j and thresholds accordingly.
7: Minimize (1/m) ∑_{i=1}^{m} [ L(G(φ_θ(x_i)), x_i) + λ^l ψ^l(φ_θ(x_i)) ] + ∑_{j=1}^{M} λ^g_j ψ^g_j({φ_θ(x_k) | k ∈ S_j}) using any standard gradient-based optimization method and update the parameters θ.
8: end for

Constraint-aware SGD batches data points from randomly sampled (maximal) connected components in the constraint graph, and optimizes the objective function w.r.t. the subset of global constraints concerning those data points and the associated local constraints. For example, in Multi-MNIST-Sudoku, each overlapping Sudoku forms a maximal connected component, so we batch the data points from several randomly sampled overlapping Sudokus and optimize the All-Different constraints (global) as well as the cardinality constraints (local) within them. However, in Crystal-Structure-Phase-Mapping, the maximal connected component becomes too large to batch together, due to the constraints (phase field connectivity and the Gibbs-alloying rule) concerning all data points in the composition graph. Thus, we instead only batch a subset (still a connected component) of the maximal connected component – e.g., a path in the composition graph – and optimize the objective function that only concerns constraints within the subset (along the path). By iteratively solving sampled local structures of the "large" maximal component, we cost-efficiently approximate the entire global constraint. A schematic of the component batching and weight adjustment follows.
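The sketch below is our illustration of the batching step, not the authors' code; it assumes integer data-point ids and uses networkx for the connected components. Batching whole components guarantees that every global constraint touching the batch is fully contained in it, and the periodic (possibly non-differentiable) check re-weights violated constraints.

import random
import networkx as nx

def sample_batch(constraint_graph, num_components):
    # Batch the data points of a few randomly sampled connected components,
    # so all global constraints touching the batch are complete within it.
    comps = list(nx.connected_components(constraint_graph))
    chosen = random.sample(comps, min(num_components, len(comps)))
    return sorted(set().union(*chosen))

def adjust_weights(weights, violated, factor=2.0):
    # Periodic satisfiability check: increase the penalty weight of every
    # constraint that is still violated; leave satisfied ones unchanged.
    return {c: w * factor if violated[c] else w for c, w in weights.items()}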
Moreover, for optimizing the overall objective, constraint-aware SGD dynamically adjusts the thresholds and the weights of constraints according to their satisfiability, which can involve non-differentiable functions (see details in the appendix). For efficiency and a potential capability of generalization, DRNets solve all instances together using constraint-aware SGD (see Algorithm 2)." }, { "heading": "4 EXPERIMENTS", "text": "We illustrate the power of DRNets mainly on two pattern de-mixing tasks – disentangling two overlapping handwritten Sudokus (Multi-MNIST-Sudoku) and inferring crystal structures of materials from X-ray diffraction data (Crystal-Structure-Phase-Mapping). Due to space limitations, we put the details of the experiments and the experimental results of DRNets on other tasks in the supplementary material. Note that, since DRNets are an unsupervised framework, we can apply the restart mechanism (Gomes et al., 1998), i.e., we can re-run DRNets for unsolved instances.

Multi-MNIST-Sudoku: We generated 160,000 input data points for each of the training, validation, and test sets, where each data point corresponds to a 32x32 image of overlapping digits coming from MNIST (LeCun et al., 1998), and every 16 data points form a pair of overlapping 4-by-4 Sudokus. For Multi-MNIST-Sudoku, DRNets batch every 16 data points together to enforce the All-Different constraints among the cells of each Sudoku. The encoder of DRNets is composed of two ResNet-18 networks (He et al., 2016) and we use a conditional GAN (Mirza & Osindero, 2014) as our generative decoder (denoted as G(·)), which is trained using the digits in the training set of MNIST. For each cell x_i, the encoder encodes a latent space that consists of two parts: the first part includes two distributions P_i and Q_i (see Fig.5) concerning the possible digits in the cell, and the second part is the latent encodings z_{i,1}, ..., z_{i,8} of each possible digit conditioned on the overlapping digits, which are used by the generative decoder to generate the corresponding digits G(z_{i,j}). We estimate the two digits in the cell by computing the expected digits over P_i and Q_i, i.e., ∑_{j=1}^{4} P_{i,j} G(z_{i,j}) and ∑_{j=1}^{4} Q_{i,j} G(z_{i,j+4}), and reconstruct the original input mixture (see Fig.5). As described above, we impose the continuous relaxation of the cardinality and All-Different constraints to reason about the Sudoku structure among the cells of the overlapping Sudokus. To demonstrate the power of reasoning, we compared our unsupervised DRNets with the supervised state-of-the-art MNIST de-mixing models – CapsuleNet (Sabour et al., 2017) and ResNet (He et al., 2016) – and a variant of DRNets that removes the reasoning modules ("DRNets w/o Reasoning"). To saturate the performance of the baseline models, we also applied a post-processing local search for them to incorporate the Sudoku rules. Specifically, we did a local search over the top-2 (top-3 would take too long to search) most likely choices of digits for each of the two overlapping Sudokus and tried to satisfy the Sudoku rules with minimal modification compared with the original prediction. We evaluate both the percentage of digits that are correctly de-mixed (digit accuracy) and the percentage of overlapping Sudokus that have all digits correctly de-mixed (Sudoku accuracy). Empowered by reasoning, DRNets significantly outperformed CapsuleNet, ResNet, and DRNets without reasoning, perfectly recovered all digits with the restart mechanism (see Fig.5), and additionally reconstructed the mixture with high quality (see Fig.1). A minimal sketch of this per-cell computation follows.
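This is our simplification of the per-cell de-mixing; the call signature of the conditional generator G is an assumption, and the pixel-wise max overlap matches the mixture generation described in Appendix A.4.1.

import torch

def demix_cell(P, Q, z, G):
    # P, Q: (4,) distributions over digits 1-4 and 5-8 for one cell.
    # z: (8, d) latent codes, one per candidate digit; G(code, digit) -> (32, 32) image.
    imgs = torch.stack([G(z[j], digit=j + 1) for j in range(8)])
    digit_low = (P.view(4, 1, 1) * imgs[:4]).sum(dim=0)   # expected digit in 1-4
    digit_high = (Q.view(4, 1, 1) * imgs[4:]).sum(dim=0)  # expected digit in 5-8
    recon = torch.maximum(digit_low, digit_high)          # pixel-wise overlap
    return digit_low, digit_high, recon

The L1 distance between recon and the input mixture gives the reconstruction term of the objective.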
Moreover, because DRNets solve all instances together (see Algorithm 2), not only can DRNets solve instances directly on the test set from random initialization, but DRNets can also generalize from the training set to the test set, given enough training examples. DRNets learn to generalize their de-mixing performance to the test set by solving the training set instances in a self-supervised manner (Jing & Tian, 2019), supervised by the Sudoku rules instead of labels, and even outperform CapsuleNet and ResNet (Fig.5). Note that, for unseen instances in the test set, we further optimize the instances for 25 steps to achieve the reported performance (additional details in the supplementary material).

Crystal-Structure-Phase-Mapping concerns inferring crystal structures from a set of X-ray diffraction measurements (XRDs) of a given chemical system, satisfying thermodynamic constraints. Crystal structure phase mapping is a very challenging task and a major bottleneck in high-throughput materials discovery: each X-ray measurement may involve several mixed crystal structures; each chemical system includes hundreds of possible crystal structures; for each crystal structure pattern, we only have a theoretical (idealized) model of the pure crystal phases; the thermodynamic rules are also complex; and the crystal patterns are difficult for human experts to interpret. Herein, we illustrate DRNets for crystal structure phase mapping on two chemical systems: (1) a ternary Al-Li-Fe oxide system (Le Bras et al., 2014), which is theoretically based, synthetically generated, with ground-truth solutions, and (2) a ternary Bi-Cu-V oxide system, which is a more challenging real experiment-based system that is more noisy and uncertain. For each system, each input data point is the XRD of a mixture of crystal structures. Additionally, the input includes the composition graph specifying elemental compositions and the constraint graph of the data points. We also collected a library of possible crystal structures from the International Centre for Diffraction Data (ICDD) database. Each crystal structure (also named phase) is given as a list of diffraction peak location-amplitude pairs (referred to as a stick pattern), representing the ideal phase pattern measured under perfect conditions (see Fig.6). To model more realistic conditions, DRNets simulate the real phase patterns from the stick patterns using Gaussian mixture models, where the relative peak locations and mixture coefficients are given by the stick locations and amplitudes. Moreover, the peak width, peak location shift, and peak amplitude variance are parameterized by the latent encoding z_{i,j} and used by the generative decoder to generate the corresponding possible phase patterns in the reconstructed XRD measurement.

We compared DRNets with IAFD (Bai et al., 2017) and NMF-k (Stanev et al., 2018), which are both state-of-the-art non-negative matrix factorization (NMF) based unsupervised de-mixing models. NMF-k improves the pure NMF algorithm (Long et al., 2009) by clustering common phase patterns from thousands of runs. However, NMF-k does not directly enforce thermodynamic rules and therefore the solutions produced are often not completely physically meaningful. IAFD uses external mixed-integer programming modules to enforce thermodynamic rules during the de-mixing. However, due to the gap between the external optimizer and the NMF module, the solution of IAFD is still far from the ground truth. A minimal sketch of the Gaussian-mixture rendering described above follows.
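The following is our illustration of the stick-pattern rendering; the shift and width values are placeholders for quantities that DRNets read off the latent encoding z_{i,j}, and the Q grid matches the Al-Li-Fe data range stated in Appendix A.4.2.

import numpy as np

def simulate_phase(stick_q, stick_amp, shift=1.0, width=0.3, q_grid=None):
    # Render an ICDD stick pattern as a smooth XRD curve: a Gaussian mixture
    # whose means are the (multiplicatively shifted) stick locations and whose
    # mixture coefficients are the stick amplitudes.
    if q_grid is None:
        q_grid = np.linspace(15.0, 80.0, 650)  # 650 Q values in [15, 80]
    pattern = np.zeros_like(q_grid)
    for q, a in zip(stick_q, stick_amp):
        pattern += a * np.exp(-0.5 * ((q_grid - shift * q) / width) ** 2)
    return pattern / (pattern.max() + 1e-12)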
Our evaluation criteria include reconstruction losses, the phase fidelity loss, and the satisfaction of the thermodynamic rules. Note that the phase fidelity loss measures the JS distance between the de-mixed phases and the closest ideal phases, obtained by fitting the de-mixed phases with the ICDD stick patterns using the physical model of Le Bras et al. (2014). As shown in Fig.7, for the Al-Li-Fe oxide system, the phase concentration (the distribution of de-mixed pure phases over all data points of that chemical system) of either IAFD or NMF-k is far from the ground truth. In contrast, DRNets almost exactly recovered the ground-truth solution by seamlessly integrating pattern recognition, reasoning, and prior knowledge. Moreover, by explicitly incorporating the ICDD stick pattern information into DRNets, the phases de-mixed by DRNets are much more realistic than those from IAFD and NMF-k (see the phase fidelity loss). For the Bi-Cu-V oxide system, DRNets solved this previously unsolved real system, producing valid crystal structures and significantly outperforming IAFD and NMF-k w.r.t. reconstruction errors and the phase fidelity loss. In addition, materials science experts thoroughly checked DRNets' solution of the Bi-Cu-V oxide system, approved it, and subsequently discovered a new material that is important for solar fuels technology." }, { "heading": "5 CONCLUSIONS AND FUTURE WORK", "text": "We propose DRNets, a powerful end-to-end framework that combines deep learning with logical and constraint reasoning for solving unsupervised pattern de-mixing tasks. DRNets outperform the state of the art for de-mixing MNIST Sudokus and crystal-structure phase mapping, solving previously unsolved chemical systems substantially beyond the reach of other methods and materials science experts' capabilities. While we illustrate the potential of DRNets in unsupervised settings, it is straightforward to incorporate supervision into DRNets. Future research includes exploring DRNets for incorporating other types of constraints, prior knowledge, and objective functions, for other applications." }, { "heading": "A SUPPLEMENTARY MATERIALS", "text": "Herein, we provide additional details about DRNets and our experimental settings for a better understanding of DRNets and reproducibility of our results. Code and datasets to reproduce the experiments will be provided with the final version of the paper." }, { "heading": "A.1 CONTINUOUS RELAXATION", "text": "In this section, we provide more relaxations for other constraints, such as SAT constraints, and provide an intuitive, high-level informal proof that each relaxation converges to a valid solution of the discrete version when it achieves its minimal value. Fig.8 summarizes the relaxations.

For cardinality constraints, when the entropy of the distributions P_i and Q_i reaches 0, all the probability mass collapses to only one variable. Therefore, all P_{i,j} and Q_{i,j} are either 0 or 1, which is a valid solution of the original discrete constraints.

For All-Different constraints, we maximize the entropy of the averaged digit distribution over all cells in a constrained set S, i.e., H(P̄_S). Note that the All-Different constraints are imposed together with the cardinality constraints. Therefore, when the entropy of the digit distribution in each cell is zero, the digit distribution of each cell converges to one digit. Hence, if H(P̄_S) reaches its maximum, i.e., log|S|, we have (1/|S|) ∑_{i∈S} P_{i,j} = 1/|S| for all digits j.
Combined with the fact that the P_{i,j} are either 0 or 1 when the cardinality constraints are satisfied, we know that, for each digit j, exactly one P_{i,j} over the cells i in the set S is equal to 1 and the others are 0, which directly expresses the All-Different constraint.

We derive the k-sparsity constraints in a similar way as the cardinality constraints, except that we now want to force the distribution to concentrate on at most k digits. By normalizing the values of the discrete variables e_{i,j} (j = 1...M) to a discrete distribution P_M, we can minimize the entropy of the distribution P_M to at most log k, which is the maximal entropy when the distribution concentrates on only k values. Though H(P_M) < log k is not a sufficient condition for k-sparsity, we can initialize the threshold c of the k-sparsity constraints to log k and dynamically adjust the value of c based on the satisfaction of the k-sparsity constraints. In practice, this works well with the supervision from other modules, such as the self-reconstruction.

For the SAT constraint relaxations, the key idea is to minimize the entropy of the Bernoulli distribution over each literal to force it to converge to either 1 or 0. Then, we maximize the sum of the values of the literals in each clause (or their negations) to encourage one of the literals to be 1. However, maximizing the sum of the values of the literals does not necessarily give a valid assignment, because there could exist an assignment in which the sum of the literals in some clauses is 0 while the sum of the literals in other clauses is very large. Therefore, we use the leaky ReLU function (Xu et al., 2015) to discount the loss when the sum is larger than 1. As shown in Fig.8, we formulate the relaxation loss function in a form to be minimized. For k-SAT problems with N_c clauses, we can set the leaky ratio to 1/(N_c·k), so that no invalid assignment can have a loss that is less than or equal to 0. On the other hand, for any valid assignment, the sum of the literals in each clause is at least 1. Thus, we can obtain a valid assignment of the k-SAT constraints by minimizing the loss function to 0.

We describe other task-specific constraints (e.g., phase field connectivity constraints) in the following experimental sections." }, { "heading": "A.2 CONSTRAINT-AWARE STOCHASTIC GRADIENT DESCENT:", "text": "Algorithm 2 Constraint-aware stochastic gradient descent optimization of deep reasoning networks.
Input: (i) Data points {x_i}_{i=1}^{N}. (ii) Constraint graph. (iii) Penalty functions ψ^l(·) and ψ^g_j(·) for the local and the global constraints. (iv) Pre-trained or parametric generative decoder G(·).
1: Initialize the penalty weights λ^l, λ^g_j and thresholds for all constraints.
2: for number of optimization iterations do
3: Batch data points {x_1, ..., x_m} from the sampled (maximal) connected components.
4: Collect the global penalty functions {ψ^g_j(·)}_{j=1}^{M} concerning those data points.
5: Compute the latent space {φ_θ(x_1), ..., φ_θ(x_m)} from the encoder.
6: Adjust the penalty weights λ^l, λ^g_j and thresholds accordingly.
7: Minimize (1/m) ∑_{i=1}^{m} [ L(G(φ_θ(x_i)), x_i) + λ^l ψ^l(φ_θ(x_i)) ] + ∑_{j=1}^{M} λ^g_j ψ^g_j({φ_θ(x_k) | k ∈ S_j}) using any standard gradient-based optimization method and update the parameters θ.
8: end for

We introduce a variant of the standard SGD method called constraint-aware SGD, which is conceptually similar to the optimization process in GraphRNN (You et al., 2018), to tackle the optimization of the global penalty functions ψ^g_j({φ_θ(x_k) | k ∈ S_j}), which involve several (potentially all) data points.
We define a constraint graph, an undirected graph in which each data point forms a vertex and two data points are linked if they are in the same global constraint. Constraint-aware SGD batches data points from randomly sampled (maximal) connected components in the constraint graph, and optimizes the objective function w.r.t. the subset of global constraints concerning those data points and the associated local constraints. For example, in Multi-MNIST-Sudoku, each overlapping Sudoku forms a maximal connected component, so we batch the data points from several randomly sampled overlapping Sudokus and optimize the All-Different constraints (global) as well as the cardinality constraints (local) within them. However, in Crystal-Structure-Phase-Mapping, the maximal connected component becomes too large to batch together, due to the constraints (phase field connectivity and the Gibbs-alloying rule) concerning all data points in the composition graph. Thus, we instead only batch a subset (still a connected component) of the maximal connected component – e.g., a path in the composition graph – and optimize the objective function that only concerns constraints within the subset (along the path). By iteratively solving sampled local structures of the "large" maximal component, we cost-efficiently approximate the entire global constraint.

Moreover, for optimizing the overall objective, constraint-aware SGD dynamically adjusts the thresholds and the weights of constraints according to their satisfiability, which can involve non-differentiable functions. Specifically, we initialize the penalty weights of the constraints and the thresholds of the penalty functions using hyper-parameters. During training, we check the satisfiability of the constraints (this step can involve non-differentiable functions) after several epochs and increase the penalty for violated constraints. For example, the threshold c of k-sparsity is initialized as log k, which is the entropy of the case where the probability mass is evenly distributed among k entities. Thus, it could be the case that there are more than k entities but their probability mass is not evenly distributed. Hence, we check the satisfiability of the k-sparsity constraint: if the entropy is already below the current threshold (log k) and there are still more than k entities with probability mass above 0.01, we decrease the threshold c to keep enforcing the model to minimize the entropy until it reaches k-sparsity.

Finally, to better exploit parallelization, DRNets solve all instances together using constraint-aware SGD (see Algorithm 2)." }, { "heading": "A.3 RESTART MECHANISM FOR DRNETS:", "text": "Note that, since DRNets are an unsupervised framework, we can apply the restart mechanism (Gomes et al., 1998), i.e., we can re-run DRNets for unsolved instances. Specifically, since DRNets directly incorporate the logical constraints, we can check whether those constraints are satisfied at the end of a run. If not, we re-run the algorithm on the instances with violated constraints. We only applied the restart mechanism to Multi-MNIST-Sudoku and the other NP-complete problems (in the appendix), such as 3-SAT problems and standard Sudoku problems. For crystal-structure phase mapping, the results generated from one run of DRNets are already good enough. A minimal sketch of this restart loop follows.
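In the sketch below (our illustration), run_drnets and check_constraints stand for one optimization run and the constraint-satisfaction check; the assumed interface is that run_drnets maps each instance to a candidate solution.

def solve_with_restarts(instances, run_drnets, check_constraints, max_restarts=5):
    # Re-run DRNets only on the instances whose constraints are still violated.
    solutions = {}
    for _ in range(max_restarts):
        unsolved = [inst for inst in instances if inst not in solutions]
        if not unsolved:
            break
        for inst, sol in run_drnets(unsolved).items():
            if check_constraints(sol):
                solutions[inst] = sol
    return solutions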
" }, { "heading": "A.4 EXPERIMENTAL CONFIGURATION", "text": "All the experiments are performed on one NVIDIA Tesla V100 GPU with 16GB memory. For the training process of our DRNets, we select a learning rate from {0.0001, 0.0005, 0.001} with the Adam optimizer (Kingma & Ba, 2014) for all the experiments.

For the baseline models, we followed their original configurations and further fine-tuned their hyper-parameters to saturate their performance on our tasks." }, { "heading": "A.4.1 MULTI-MNIST-SUDOKU", "text": "For Multi-MNIST-Sudoku, we compared DRNets with CapsuleNet (Sabour et al., 2017) and ResNet (He et al., 2016). Because Sabour et al. (2017) did not provide source code for CapsuleNet, we adopted the implementation of Laodar (2017), with minor modifications. For ResNet, we adopted an 18-layer ResNet architecture (Khanrc, 2017) to saturate its performance.

In Multi-MNIST-Sudoku, a data point corresponds to a 32×32 image of overlapping digits. For the optimization mode of DRNets, we generated 160,000 input data points that all come from the test set of MNIST (LeCun et al., 1998), where every 16 data points form a pair of overlapping 4-by-4 Sudokus. Thus, these 160,000 data points form 10,000 Sudokus. These 10,000 Sudokus are used as the test set and shared across DRNets and the baselines. For the generalization mode of DRNets, we split the training set of MNIST into three parts: 160,000 data points for DRNets learning, 25,000 original MNIST images for training the conditional GAN, and another 160,000 data points for validation. Note that these three datasets are disjoint. The baselines share the same training set as the generalization mode of DRNets. Using constraint-aware SGD, DRNets batch every 16 data points together, which form an overlapping Sudoku as well as a maximal connected component in the constraint graph, to enforce the All-Different constraints among the cells of each Sudoku.

DRNets for Multi-MNIST-Sudoku: the encoder is made of two ResNet-18 models adapted from the PyTorch source code. The output layer of the first network has 8 dimensions, which models the two distributions P_i and Q_i for the two overlapping digits. The other network outputs eight 100-dimensional latent encodings z_{i,j} (800 dimensions in total) to encode the shapes of the eight possible digits conditioned on the input mixture; these are used by the generative decoder to generate the reconstructed digits. We use a conditional GAN (Mirza & Osindero, 2014) as our generative decoder, which is pre-trained using the digits in the partial training set (see the paragraph above) of MNIST. Note that this is the only supervision we have in this task, which is even weaker than the general concept of the weakly-supervised setting (Zhang et al., 2017). We adopted the implementation of Linder-Noren (2019) for our conditional GAN. On the other hand, the 10,000 overlapping Sudokus in the test set were all generated using the digits in the test set of MNIST, which had never been seen, even by the conditional GAN. Moreover, we overlap the images of two digits pixel-wise, maximizing the whiteness of the two images. For robustness, we used the L1 loss as the reconstruction loss between the reconstructed mixture and the original input. For the initial weights, we set 0.01 for the cardinality constraints, 1.0 for the All-Different constraints, and 0.001 for the L1 loss. A minimal sketch of the mixture generation follows.
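In this sketch (our illustration), the digit images are assumed to be float arrays with white strokes on a black background, so taking the element-wise maximum keeps both digits visible in the mixture.

import numpy as np

def overlap_digits(img_a, img_b):
    # Pixel-wise overlap maximizing whiteness: keep the brighter pixel.
    return np.maximum(img_a, img_b)

def make_overlapping_sudoku(cells_a, cells_b):
    # cells_a, cells_b: 16 digit images each (a 4x4 Sudoku over digits 1-4
    # and one over digits 5-8); returns the 16 mixed input data points.
    return [overlap_digits(a, b) for a, b in zip(cells_a, cells_b)]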
Finally, we trained DRNets for 100 epochs with a batch size of 100, and it took 50 minutes to finish the optimization and achieve the reported performance on the 10,000 overlapping Sudokus.

For the generalization mode of DRNets, we first "train" DRNets on the training set and validate the generalization performance on the validation set to apply the early-stopping mechanism. Finally, we start from the "trained" DRNets and further optimize them for 25 steps on the test set to achieve the reported performance. Note that, to generalize well on the test set, we "trained" DRNets for a longer time than in the optimization mode. Essentially, the procedure of the generalization mode of DRNets is similar to a standard supervised learning process, except that we do not need labels to supervise DRNets. In contrast, DRNets are really "self-supervised" (Jing & Tian, 2019) by the Sudoku rules and the self-reconstruction, instead of the standard supervision by labeled data. Note that, during testing, instead of predicting the overlapping digits directly as the other networks do, we further optimize DRNets on the test set for 25 epochs to achieve a better result." }, { "heading": "A.4.2 CRYSTAL-STRUCTURE-PHASE-MAPPING", "text": "We illustrate DRNets for crystal structure phase mapping on two chemical systems: (1) a ternary Al-Li-Fe oxide system (Le Bras et al., 2014), which is theoretically based, synthetically generated, with ground-truth solutions, and (2) a ternary Bi-Cu-V oxide system, which is a more challenging real system obtained from chemical experiments and is more noisy and uncertain. For each system, the input data points are mixtures of XRDs, associated with a composition graph identifying elemental compositions and the constraint graph of the data points. Specifically, each XRD data point is associated with a 3-dimensional composition vector, which gives the proportions of the three different metal elements at that data point (e.g., [80% Al, 5% Fe, 15% Li]) and can help identify possible phases. We can then locate each data point in a triangular composition system. Note that, since the vector is a probability distribution, there are only 2 degrees of freedom, and we can plot it in a 2-D triangle (see Fig.11). After locating each data point in the 2-D triangle as a vertex, we performed a Delaunay triangulation over those points to build edges among the vertices. Therefore, we can use breadth-first search on this graph to sample paths in the composition graph and infer the thermodynamic rules accordingly.

The XRD pattern of each data point is a D-dimensional vector representing the intensity of the mixture of XRDs at different diffraction angles (referred to as Q values). For the Al-Li-Fe oxide system, we have 231 data points (mixtures of XRDs) in the composition graph, 159 stick patterns for the possible phases, and each data point has 650 different Q values Q_i ∈ [15°, 80°] with the corresponding intensities I_i ∈ [0, 1]. For the Bi-Cu-V oxide system, we have 353 data points in the composition graph, 100 stick patterns for the possible phases, and each data point has 4096 different Q values Q_i ∈ [5°, 45°] with the corresponding intensities I_i ∈ [0, 1]. To better utilize the memory, we down-sampled the raw data of the Bi-Cu-V oxide system to 512 different Q values. Note that, though we have hundreds of possible pure phases for each system, only a few phases actually appear. For example, in the Al-Fe-Li oxide system, only 6 of them appear, and there are 15 different mixtures of those 6 pure phases in this system. For the Bi-Cu-V-O system, there are 13 pure phases and 19 different mixtures. A minimal sketch of the composition-graph construction and path sampling follows.
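In our sketch below, the 2-D projection simply drops one coordinate of the composition simplex rather than using the equilateral ternary layout, and a random walk stands in for the paper's breadth-first path sampling; both simplifications are ours.

import random
import networkx as nx
from scipy.spatial import Delaunay

def composition_graph(compositions):
    # compositions: (N, 3) array of metal fractions summing to 1; with two
    # degrees of freedom, project to 2-D and connect points by Delaunay
    # triangulation.
    pts = compositions[:, :2]
    graph = nx.Graph()
    for tri in Delaunay(pts).simplices:
        for i in range(3):
            graph.add_edge(int(tri[i]), int(tri[(i + 1) % 3]))
    return graph

def sample_path(graph, start, length):
    # Sample a simple path; thermodynamic rules are then enforced only
    # between consecutive data points along the path.
    path = [start]
    while len(path) < length:
        frontier = [n for n in graph.neighbors(path[-1]) if n not in path]
        if not frontier:
            break
        path.append(random.choice(frontier))
    return path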
Note that each XRD data point is like a cell in Multi-MNIST-Sudoku (with mixed pure phases) and each pure phase is like a digit. For Multi-MNIST-Sudoku, we know a priori that there are exactly 2 digits in each cell, but the number of mixed pure phases in each XRD is undetermined (1 to 3). Moreover, the number of possible candidate phases is far larger than the number of possible digits (e.g., 159 vs 8), which is the reason why this task is so challenging.

We also collected a library of possible crystal structures from the International Centre for Diffraction Data (ICDD) database. Each crystal structure (also named phase) is given as a list of diffraction peak location-amplitude pairs (referred to as a stick pattern), representing the ideal phase pattern measured under perfect conditions (see Fig.12). To model more realistic conditions, DRNets simulate the real phase patterns from the stick patterns using Gaussian mixture models, where the relative peak locations and mixture coefficients are given by the stick locations and amplitudes. Moreover, the peak width, multiplicative location shift, and possible amplitude variance are parameterized by the latent encoding z_{i,j} and used by the generative decoder to generate the corresponding possible phase patterns in the reconstructed XRD measurement.

Imposing the thermodynamic rules is challenging, especially when constraints, such as phase field connectivity and the Gibbs-alloying rule, potentially concern all data points in the composition graph.

In Multi-MNIST-Sudoku, where each overlapping Sudoku naturally forms a maximal connected component in the constraint graph, we can easily batch every 16 data points together to reason about the All-Different constraints among them. However, in Crystal-Structure-Phase-Mapping, since the maximal connected component involves all data points in the composition graph, neither batching all data points into memory nor reasoning about the whole graph is tractable. Therefore, we devised a strategy of sampling the large connected component through many local structures (still connected components) and iteratively solving each of them. Specifically, for each oxide system, we sampled 100,000 paths in the composition graph via breadth-first search to construct a path pool. Then, at every iteration, DRNets randomly sample a path from the pool and batch the data points along that path (see Fig.10). Finally, we only reason about the thermodynamic rules along the path. By iteratively solving sampled local structures (paths) of the "large" maximal component, we can cost-efficiently approximate all global constraints.

We summarize the thermodynamic rules we imposed in DRNets:

Gibbs Phase Rule: This rule states the maximum number of co-existing phases, which is imposed via our relaxation of the k-sparsity constraints.

Gibbs-Alloying Rule: This rule states that if "alloying" happens, then the maximum number of possible co-existing phases should decrease by one. "Alloying" is a phenomenon in which the stick locations of a phase (crystal structure) shift across adjacent data points. DRNets explicitly model the shifting ratio in the generative decoder and penalize the difference between adjacent data points along our sampled path.
The reasoning module keeps track of the difference in shifting ratio between adjacent data points, and when it is larger than a threshold (0.001), we confirm the existence of "alloying" and reduce the maximum number of possible co-existing phases by one via adjusting the threshold c in the k-sparsity constraints.

Phase Field Connectivity: This states that the distribution (also referred to as the activation) of a phase field should form a connected component in the composition graph, and that the variation of the activation of each phase should be smooth (see Fig.13). (Herein, the phase field refers to the co-existence of a combination of phases, including the existence of a pure phase.) We impose this rule by penalizing the difference of the phase distribution P_i between adjacent data points along our sampled path.

Multiplicative Shifting: This states how a cubic crystal structure shifts when "alloying" happens, and it can also be used to approximate the shifting of other crystal structures. We explicitly modeled the multiplicative shifting in our generative decoder.

Noise Threshold: To remove negligible activations that are mainly caused by noise, we applied a simple post-processing step that cuts off all activations lower than 1.0%.

Here, we visualize DRNets' solution of the Bi-Cu-V oxide system (see Fig.13; for the comparison among different methods, see Fig.14).

In our comparison, we evaluated the percentage of data points or phase fields that satisfy each thermodynamic rule. Though IAFD enforced the thermodynamic rules using an external mixed-integer programming module, it may compromise some rules to achieve a better reconstruction error, which explains IAFD's result on the Bi-Cu-V oxide system. The phase fidelity loss we mentioned in our comparison is the JS distance between the de-mixed pure phase and the closest ideal phase generated using the ICDD stick patterns and the physical model proposed in Le Bras et al. (2014). A minimal sketch of this fidelity computation follows.
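In its simplest form (our sketch, which omits the stick-pattern fitting step), the fidelity of a de-mixed phase against an ideal pattern can be computed as:

import numpy as np
from scipy.spatial.distance import jensenshannon

def phase_fidelity(demixed, ideal):
    # Normalize both XRD patterns to probability distributions over Q, so
    # the JS distance mainly reflects mismatched peak locations.
    p = demixed / demixed.sum()
    q = ideal / ideal.sum()
    return jensenshannon(p, q)  # square root of the JS divergence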
For those two tasks, we use a 3-layer-fully-connected network as our encoder and the reasoning modules.\nFor 9-by-9 Sudoku puzzles, we generated 10,000 instances using the dataset gathered by Gordon Royle (2014), where each Sudoku instance has 24 to 32 (uniformly distributed) known cells and is guaranteed to have one unique solution (e.g., see Fig.15). Because a standard 9x9 Sudoku puzzle requires reasoning about the unknown structure based on given clues, we need to treat each entire Sudoku as a single input data point. Therefore, in this task, even the All-Different constraints are conceptually the local constraints since each of them only concerns a single data point. We used a one-hot encoding for digits 1 to 9 and the empty cell (denoted as 0), and the entire Sudoku is an 810-dimensional input data. We used a 3-layer-fully-connected network with batch normalization (Ioffe & Szegedy, 2015) as the encoder, where every hidden layer has 2048 units and the output is an 81-by-9 matrix, which represents the digit distributions (1 to 9) for 81 cells. Moreover, we enforced the distribution of every known cell to collapse to the digit in that cell. For the initial weights, we set 0.0001 for the cardinality constraints and 1.0 for the All-Different constraints. Finally, we trained DRNets for 800,000 iterations with a batch size of 500, and it took 1 hour to solve the 10,000 9x9 Sudokus with the accuracy reported in this paper.\nIn our experiments, DRNets achieved the same level of performance as the Recurrent Relational Networks (RRNets) (Palm et al., 2017), which is the state-of-the-art supervised deep learning 9x9 Sudoku solver (see Table 1).\nFor SAT problems, we generated 10,000 satisfiable random 3-SAT instances of different difficulties based on the number of literals n and the number of clauses m, and our goal is to find a valid assignment for each literal. We challenged our DRNet with the hardest random 3-SAT instances, where #clauses/#literals=4.3 (Mitchell et al., 1992), i.e., n = 30,m = 129, n = 50,m = 215 and n = 100,m = 430. For easier instances (e.g. #clauses/#literals = 3.0), DRNets can almost solve all instances (see Table 1).\nWe use a 3-layer-fully-connected network as the encoder, where the number of hidden units in the network is 2048, 2048, 2048. We used the standard CNF representation of 3-SAT as the input data, so that each data point is an m-by-3 matrix and the three values in the j-th row represent the three literals in the j-th clauses. For the initial weights, we select a value from {0.05, 0.03, 0.025, 0.02, 0.01} to be the weight of the entropy loss as we described in the Fig.4 of the main paper. For the three settings of different difficulty, we consistently trained DRNets with a batch size of 100 and the running time for solving 10,000 instances varies from several minutes to a couple of hours.\nWe compared DRNets with NeuroSAT Selsam et al. (2018) and PDP (Amizadeh et al., 2019). Both NeuroSAT and PDP are the state-of-the-art deep learning SAT solvers with one-bit supervision. In addition, PDP needs extra optimizing process to solve SAT instances during the test phase, where it also applied the restart mechanism in their framework. For fair comparison, we saturated the performance of all our baseline models. For all instances, DRNets took less than 2 hours to achieve the reported performance with the restart mechanism. 
Without supervision, DRNets outperformed both supervised baseline models.\nInterestingly, though DRNets are best suited for problems that combine deep learning and reasoning, such as de-mixing Multi-MNIST-Sudokus or crystal structure phase mapping, it still achieved such a promising result in pure combinatorial problems. These results further demonstrate that DRNets can encode a broad range of combinatorial constraints and prior knowledge and effectively combine deep learning with reasoning." } ]
2019
null
SP:278819106a3ae8c15442e56994fb175a0cad70dd
[ "This work explored the effect of LayerDrop training in efficient pruning at inference time. The authors showed that it is possible to have comparable performance from sub-networks of smaller depth selected from one large network without additional finetuning. More encouraging is that the sub-networks are able to perform better than the same network trained from scratch or learned based on distillation.", "The paper proposes a method, LayerDrop, for pruning layers in Transformer based models. The goal is to explore the stochastic depth of transformer models during training in order to do efficient layer pruning at inference time. The key idea is simple and easy to understand: randomly dropping transformer layers during training to make the model robust to subsequent pruning. The authors perform empirical studies on several sequence modeling task to conclude that the proposed approach allows efficient pruning of deeper models into shallow ones without fine-tuning on downstream tasks. There are also empirical experiments done to demonstrate that the proposed approach outperforms recent model pruning techniques such as DistillBERT under comparable configurations. " ]
Overparameterized transformer networks have obtained state-of-the-art results in various natural language processing tasks, such as machine translation, language modeling, and question answering. These models contain hundreds of millions of parameters, necessitating a large amount of computation and making them prone to overfitting. In this work, we explore LayerDrop, a form of structured dropout, which has a regularization effect during training and allows for efficient pruning at inference time. In particular, we show that it is possible to select sub-networks of any depth from one large network without having to finetune them and with limited impact on performance. We demonstrate the effectiveness of our approach by improving the state of the art on machine translation, language modeling, summarization, question answering, and language understanding benchmarks. Moreover, we show that our approach leads to small BERT-like models of higher quality compared to training from scratch or using distillation.
[ { "affiliations": [], "name": "STRUCTURED DROPOUT" }, { "affiliations": [], "name": "Angela Fan" }, { "affiliations": [], "name": "Edouard Grave" } ]
[ { "authors": [ "Alexei Baevski", "Michael Auli" ], "title": "Adaptive input representations for neural language modeling", "venue": "arXiv preprint arXiv:1809.10853,", "year": 2018 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "Gonçalo M Correia", "Vlad Niculae", "André FT Martins" ], "title": "Adaptively sparse transformers", "venue": "arXiv preprint arXiv:1909.00015,", "year": 2019 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "William W Cohen", "Jaime Carbonell", "Quoc V Le", "Ruslan Salakhutdinov" ], "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "venue": null, "year": 1901 }, { "authors": [ "Yann N. Dauphin", "Angela Fan", "Michael Auli", "David Grangier" ], "title": "Language modeling with gated convolutional networks", "venue": "In Proc. of ICML,", "year": 2017 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "William B Dolan", "Chris Brockett" ], "title": "Automatically constructing a corpus of sentential paraphrases", "venue": "In Proceedings of the International Workshop on Paraphrasing,", "year": 2005 }, { "authors": [ "Sergey Edunov", "Alexei Baevski", "Michael Auli" ], "title": "Pre-trained language model representations for language generation", "venue": null, "year": 2019 }, { "authors": [ "Angela Fan", "Yacine Jernite", "Ethan Perez", "David Grangier", "Jason Weston", "Michael Auli" ], "title": "Eli5: Long form question answering", "venue": null, "year": 1907 }, { "authors": [ "Edouard Grave", "Armand Joulin", "Moustapha Cisse", "David Grangier", "Herve Jegou" ], "title": "Efficient softmax approximation for", "venue": "gpus. arXiv,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep Residual Learning for Image Recognition", "venue": "In Proc. of CVPR,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Yihui He", "Xiangyu Zhang", "Jian Sun" ], "title": "Channel pruning for accelerating very deep neural networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Karl Moritz Hermann", "Tomáš Kočiský", "Edward Grefenstette", "Lasse Espeholt", "Will Kay", "Mustafa Suleyman", "Phil Blunsom" ], "title": "Teaching machines to read and comprehend", "venue": "In Proc. 
of NIPS,", "year": 2015 }, { "authors": [ "Geoffrey E Hinton", "Nitish Srivastava", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan R Salakhutdinov" ], "title": "Improving neural networks by preventing co-adaptation of feature detectors", "venue": "arXiv preprint arXiv:1207.0580,", "year": 2012 }, { "authors": [ "Gao Huang", "Yu Sun", "Zhuang Liu", "Daniel Sedra", "Kilian Q Weinberger" ], "title": "Deep networks with stochastic depth", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Gao Huang", "Shichen Liu", "Laurens Van der Maaten", "Kilian Q Weinberger" ], "title": "Condensenet: An efficient densenet using learned group convolutions", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Zehao Huang", "Naiyan Wang" ], "title": "Data-driven sparse structure selection for deep neural networks", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Yacine Jernite", "Edouard Grave", "Armand Joulin", "Tomas Mikolov" ], "title": "Variable computation in recurrent neural networks", "venue": "arXiv preprint arXiv:1611.06188,", "year": 2016 }, { "authors": [ "Armand Joulin", "Edouard Grave", "Piotr Bojanowski", "Matthijs Douze", "Hérve Jégou", "Tomas Mikolov" ], "title": "Fasttext. zip: Compressing text classification models", "venue": "arXiv preprint arXiv:1612.03651,", "year": 2016 }, { "authors": [ "Marcin Junczys-Dowmunt" ], "title": "Microsoft translator at wmt 2019: Towards large-scale document-level neural machine translation", "venue": "arXiv preprint arXiv:1907.06170,", "year": 2019 }, { "authors": [ "Guillaume Lample", "Alexis Conneau" ], "title": "Cross-lingual language model pretraining", "venue": "arXiv preprint arXiv:1901.07291,", "year": 2019 }, { "authors": [ "Yann LeCun", "John S Denker", "Sara A Solla" ], "title": "Optimal brain damage", "venue": "In Advances in neural information processing systems,", "year": 1990 }, { "authors": [ "Hao Li", "Asim Kadav", "Igor Durdanovic", "Hanan Samet", "Hans Peter Graf" ], "title": "Pruning filters for efficient convnets", "venue": "arXiv preprint arXiv:1608.08710,", "year": 2016 }, { "authors": [ "Chin-Yew Lin" ], "title": "Rouge: A package for automatic evaluation of summaries", "venue": "In Workshop on Text Summarization Branches Out,", "year": 2004 }, { "authors": [ "Liyuan Liu", "Xiang Ren", "Jingbo Shang", "Jian Peng", "Jiawei Han" ], "title": "Efficient contextualized representation: Language model pruning for sequence labeling", "venue": "arXiv preprint arXiv:1804.07827,", "year": 2018 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Zhuang Liu", "Mingjie Sun", "Tinghui Zhou", "Gao Huang", "Trevor Darrell" ], "title": "Rethinking the value of network pruning", "venue": "arXiv preprint arXiv:1810.05270,", "year": 2018 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "arXiv preprint arXiv:1608.03983,", "year": 2016 }, { "authors": [ "Paul Michel", "Omer Levy", "Graham Neubig" ], "title": "Are sixteen heads really better than one", "venue": "arXiv preprint arXiv:1905.10650,", "year": 2019 }, { "authors": [ "Deepak Mittal", "Shweta Bhardwaj", "Mitesh M Khapra", 
"Balaraman Ravindran" ], "title": "Recovering from random pruning: On the plasticity of deep convolutional neural networks", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2018 }, { "authors": [ "Kenton Murray", "David Chiang" ], "title": "Auto-sizing neural networks: With applications to n-gram language models", "venue": "arXiv preprint arXiv:1508.05051,", "year": 2015 }, { "authors": [ "Myle Ott", "Sergey Edunov", "David Grangier", "Michael Auli" ], "title": "Scaling neural machine translation", "venue": "In Proc. of WMT,", "year": 2018 }, { "authors": [ "Myle Ott", "Sergey Edunov", "Alexei Baevski", "Angela Fan", "Sam Gross", "Nathan Ng", "David Grangier", "Michael Auli" ], "title": "fairseq: A fast, extensible toolkit for sequence modeling", "venue": "In Proceedings of NAACL-HLT 2019: Demonstrations,", "year": 2019 }, { "authors": [ "Razvan Pascanu", "Caglar Gulcehre", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "How to construct deep recurrent neural networks", "venue": "In Proceedings of the Second International Conference on Learning Representations (ICLR", "year": 2014 }, { "authors": [ "Ngoc-Quan Pham", "Thai-Son Nguyen", "Jan Niehues", "Markus Muller", "Alex Waibel" ], "title": "Very deep selfattention networks for end-to-end speech recognition", "venue": null, "year": 1904 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": null, "year": 2019 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "SQuAD: 100,000+ questions for machine comprehension of text", "venue": "In Proceedings of EMNLP,", "year": 2016 }, { "authors": [ "Victor Sanh" ], "title": "Smaller, faster, cheaper, lighter: Introducing distilbert, a distilled version of bert", "venue": null, "year": 2019 }, { "authors": [ "Abigail See", "Minh-Thang Luong", "Christopher D Manning" ], "title": "Compression of neural machine translation models via pruning", "venue": "arXiv preprint arXiv:1606.09274,", "year": 2016 }, { "authors": [ "Abigail See", "Peter J Liu", "Christopher D Manning" ], "title": "Get to the point: Summarization with pointergenerator networks", "venue": "In Proc. of ACL,", "year": 2017 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units", "venue": "arXiv preprint arXiv:1508.07909,", "year": 2015 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units", "venue": "In Proc. 
of ACL,", "year": 2016 }, { "authors": [ "Dima Shulga" ], "title": "Distilling bert how to achieve bert performance using logistic regression", "venue": "towardsdatascience.com,", "year": 2019 }, { "authors": [ "Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D Manning", "Andrew Ng", "Christopher Potts" ], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "In Proceedings of EMNLP,", "year": 2013 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": null, "year": 1929 }, { "authors": [ "Sainbayar Sukhbaatar", "Edouard Grave", "Piotr Bojanowski", "Armand Joulin" ], "title": "Adaptive attention span in transformers", "venue": "arXiv preprint arXiv:1905.07799,", "year": 2019 }, { "authors": [ "Ilya Sutskever", "James Martens", "George Dahl", "Geoffrey Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Raphael Tang", "Yao Lu", "Linqing Liu", "Lili Mou", "Olga Vechtomova", "Jimmy Lin" ], "title": "Distilling taskspecific knowledge from bert into simple neural networks", "venue": null, "year": 1903 }, { "authors": [ "Iulia Turc", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "Well-read students learn better: The impact of student initialization on knowledge distillation", "venue": null, "year": 1908 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Elena Voita", "David Talbot", "Fedor Moiseev", "Rico Sennrich", "Ivan Titov" ], "title": "Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned", "venue": null, "year": 1905 }, { "authors": [ "Li Wan", "Matthew Zeiler", "Sixin Zhang", "Yann Le Cun", "Rob Fergus" ], "title": "Regularization of neural networks using dropconnect", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R. Bowman" ], "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "venue": "2019a. In the Proceedings of ICLR", "year": 2019 }, { "authors": [ "Qiang Wang", "Bei Li", "Tong Xiao", "Jingbo Zhu", "Changliang Li", "Derek F Wong", "Lidia S Chao" ], "title": "Learning deep transformer models for machine translation", "venue": "arXiv preprint arXiv:1906.01787,", "year": 2019 }, { "authors": [ "Xin Wang", "Fisher Yu", "Zi-Yi Dou", "Trevor Darrell", "Joseph E Gonzalez" ], "title": "Skipnet: Learning dynamic routing in convolutional networks", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Wei Wen", "Chunpeng Wu", "Yandan Wang", "Yiran Chen", "Hai Li" ], "title": "Learning structured sparsity in deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Adina Williams", "Nikita Nangia", "Samuel R. 
Bowman" ], "title": "A broad-coverage challenge corpus for sentence understanding through inference", "venue": "In Proceedings of NAACL-HLT,", "year": 2018 }, { "authors": [ "Felix Wu", "Angela Fan", "Alexei Baevski", "Yann Dauphin", "Michael Auli" ], "title": "Pay less attention with lightweight and dynamic convolutions", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Lijun Wu", "Yiren Wang", "Yingce Xia", "Fei Tian", "Fei Gao", "Tao Qin", "Jianhuang Lai", "Tie-Yan Liu" ], "title": "Depth growing for neural machine translation", "venue": "arXiv preprint arXiv:1907.01968,", "year": 2019 }, { "authors": [ "Zuxuan Wu", "Tushar Nagarajan", "Abhishek Kumar", "Steven Rennie", "Larry S Davis", "Kristen Grauman", "Rogerio Feris" ], "title": "Blockdrop: Dynamic inference paths in residual networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Ruslan Salakhutdinov", "Quoc V Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": null, "year": 1906 }, { "authors": [ "Hongyi Zhang", "Yann N Dauphin", "Tengyu Ma" ], "title": "Fixup initialization: Residual learning without normalization", "venue": "arXiv preprint arXiv:1901.09321,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Transformer architectures (Vaswani et al., 2017) have become the dominant architecture in natural language processing, with state-of-the-art performance across a variety of tasks, including machine translation (Vaswani et al., 2017; Ott et al., 2018), language modeling (Dai et al., 2019; Baevski & Auli, 2018) and sentence representation (Devlin et al., 2018; Yang et al., 2019). Each of its layers contains millions of parameters accessed during the forward pass, making it computationally demanding in terms of memory and latency during both training and inference. In an ideal situation, we would be able to extract sub-networks — automatically and without finetuning — from this over-parameterized network, for any given memory or latency constraint, while maintaining good performance. In contrast, standard pruning or distillation methods follow a strategy that often includes a finetuning or retraining step, and the process must be repeated for each desired depth.\nIn this work, we propose a novel approach to extract any sub-network without a post-hoc pruning process from over-parameterized networks. The core of our method is to sample small sub-networks from the larger model during training by randomly dropping model weights as in Dropout (Hinton et al., 2012) or DropConnect (Wan et al., 2013). This has the advantage of making the network robust to subsequent pruning. If well-chosen groups of weights are dropped simultaneously, the resulting small sub-networks can be very efficient. In particular, we drop entire layers to extract shallow models at inference time. Previous work (Huang et al., 2016) has shown that dropping layers during training can regularize and reduce the training time of very deep convolutional networks. In contrast, we focus on pruning. As illustrated in Figure 1, an advantage of our layer dropping technique, or LayerDrop, is that from one single deep model, we can extract shallow sub-networks of any desired depth on demand at inference time.\nWe validate our findings on a variety of competitive benchmarks, namely WMT14 EnglishGerman for machine translation, WikiText-103 (Merity et al., 2016) for language modeling, CNNDailymail (Hermann et al., 2015) for abstractive summarization, ELI5 (Fan et al., 2017) for long form question answering, and several natural language understanding tasks (Wang et al., 2019a) for sentence representation. Our approach achieves state of the art on most of these benchmarks as a result of the regularization effect, which stabilizes the training of larger and deeper networks. We also show that we can prune Transformer architectures to much smaller models while maintaining com-\npetitive performance, outperforming specific model reduction strategies dedicated to BERT (Devlin et al., 2018; Sanh, 2019) as well as training smaller models from scratch. Overall, applying LayerDrop to Transformer networks provides the following key advantages:\n• LayerDrop regularizes very deep Transformers and stabilizes their training, leading to stateof-the-art performance across a variety of benchmarks.\n• Small and efficient models of any depth can be extracted automatically at test time from a single large pre-trained model, without the need for finetuning.\n• LayerDrop is as simple to implement as dropout." }, { "heading": "2 RELATED WORK", "text": "Our approach is a form of Dropout (Srivastava et al., 2014) applied to model weights instead of activations, as in DropConnect (Wan et al., 2013). 
Different from DropConnect, we drop groups of weights rather than individual ones, inducing redundancy at the group level and producing models suited for pruning into shallow, efficient models at inference time. Gomez et al. (2018) propose targeted variants of Dropout and DropConnect, where they learn the drop rate of the weights to match a targeted pruning scheme. Instead, we adapt the masks to the structures that we are interested in pruning. Closer to our work, the Stochastic Depth approach of Huang et al. (2016) drops layers randomly during training. As opposed to our work, they are interested in accelerating the training of very deep ResNets (He et al., 2016), so their dropping schedule is adapted to this goal. Concurrently with this work, Pham et al. (2019) applied Stochastic Depth to train very deep Transformers for speech and showed the benefits of its regularization effect.\nMore generally, our method is a form of structured pruning (Liu et al., 2018b). As opposed to weight pruning (LeCun et al., 1990), structured pruning removes coherent groups of weights to preserve the original structure of the network. Structured pruning has been used in some NLP applications, such as machine translation (See et al., 2016), text classification (Joulin et al., 2016) and language modeling (Murray & Chiang, 2015). However, it has been more widely adopted in computer vision and applied to convolutional networks to remove filters (Li et al., 2016; Wen et al., 2016), channels (He et al., 2017), or residual blocks (Huang et al., 2018; Huang & Wang, 2018). Similar to Mittal et al. (2018), we take advantage of the plasticity of neural networks to learn models that are resilient to random pruning or skipped connections (Wang et al., 2018; Wu et al., 2018; Liu et al., 2018a), rather than learning the pruning itself. We refer the reader to Liu et al. (2018b) for an exhaustive study of these approaches and their evaluation in the context of convolutional networks.\nReducing the memory footprint of Transformer architectures, and of BERT in particular, is an active subject of research. Several works have compressed BERT as a post-processing step using different forms of distillation (Turc et al., 2019; Tang et al., 2019; Shulga, 2019; Sanh, 2019). Similarly, various papers have shown evidence that Transformers are over-parameterized, in particular that most self-attention heads can be dropped at test time (Michel et al., 2019; Voita et al., 2019). Different from these, our models are trained to be resilient to pruning, which significantly reduces the performance drop induced by test-time pruning. Others have proposed trainable adaptive mechanisms to control the memory footprint (Jernite et al., 2016; Sukhbaatar et al., 2019; Correia et al., 2019). These approaches are complementary to ours and should benefit from each other." }, { "heading": "3 METHOD", "text": "In this section, we briefly introduce the Transformer, then describe our Structured Dropout technique and its application to layers. We also discuss several inference-time pruning strategies." }, { "heading": "3.1 THE TRANSFORMER ARCHITECTURE", "text": "We succinctly review the Transformer architecture and refer the reader to Vaswani et al. (2017) for additional details. A Transformer is a stack of layers composed of two sub-layers: multi-head self-attention followed by a feedforward sub-layer. The multi-head self-attention sub-layer consists of multiple attention heads applied in parallel. 
Each attention head takes a matrix X, where each row represents an element of the input sequence, and updates the representations by gathering information from their context using an Attention mechanism (Bahdanau et al., 2014):\nY = Softmax(X⊤K(QX + P))VX,\nwhere K, V, Q and P are matrices of parameters. The outputs of the heads are then concatenated along the time steps into a sequence of vectors.\nThe second sub-layer then applies a fully connected feedforward network to each element of this sequence independently, FFN(x) = U ReLU(Vx), where V and U are matrices of parameters. Each sub-layer is followed by an AddNorm operation, i.e., a residual connection (He et al., 2016) followed by layer normalization (Ba et al., 2016)." }, { "heading": "3.2 TRAINING TRANSFORMERS WITH RANDOM STRUCTURED PRUNING", "text": "We present a regularization approach that makes Transformers robust to subsequent structured pruning at inference time. We focus in particular on the case where the targeted structure is a layer." }, { "heading": "3.2.1 RANDOMLY DROPPING STRUCTURES AT TRAINING TIME", "text": "Regularizing networks to be robust to pruning can be achieved by randomly removing weights during training, as in DropConnect (Wan et al., 2013). In this approach, each weight is dropped independently following a Bernoulli distribution associated with a parameter p > 0 that controls the drop rate. This is equivalent to a pointwise multiplication of the weight matrix W with a randomly sampled {0, 1} mask matrix M:\nWd = M ⊙ W.\nDropConnect is a form of random unstructured pruning that leads to smaller, but not necessarily more efficient, models. We propose to add structure to this mechanism to target model efficiency.\nRandom Structured Dropout. The weights of a Transformer network belong to multiple overlapping structures, such as heads, FFN matrices, or layers. Dropping weights using groups that follow some of these inherent structures potentially leads to a significant reduction of the inference time. This is equivalent to constraining the mask M to be constant over some predefined groups of weights. More precisely, given a set G of predefined groups of weights, the {0, 1} mask matrix M is randomly sampled over groups instead of weights:\n∀i, M[i] ∈ {0, 1}, and ∀G ∈ G, ∀(i, j) ∈ G, M[i] = M[j].\nThis structured dropout formulation is general and can be applied to any overlapping groups of weights, whether heads, FFN matrices, or layers. Nonetheless, not all of the structures in a Transformer lead to the same benefits when dropped. For example, dropping attention heads does not reduce runtime, as they are usually computed in parallel. For simplicity, we focus on dropping layers, and we name this structured pruning LayerDrop. This is inspired by the Stochastic Depth approach of Huang et al. (2016) used to train very deep ResNets (He et al., 2015)." }, { "heading": "3.2.2 PRUNING AT INFERENCE TIME", "text": "Selecting Layers to Prune. Training with LayerDrop makes the network more robust to predicting with missing layers. However, LayerDrop does not explicitly provide a way to select which groups to prune. We consider several different pruning strategies, described below:\n• Every Other: A straightforward strategy is to simply drop every other layer. Pruning with a rate p means dropping the layers at depths d such that d ≡ 0 (mod ⌊1/p⌋). 
This strategy is intuitive and leads to balanced networks.\n• Search on Valid: Another possibility is to compute various combinations of layers to form shallower networks using the validation set, then select the best-performing one for test. This is straightforward but computationally intensive and can lead to overfitting on the validation set.\n• Data Driven Pruning: Finally, we propose data driven pruning, where we learn the drop rate of each layer. Given a target drop rate p, we learn an individual drop rate p_d for the layer at depth d such that the average rate over layers is equal to p. More precisely, we parameterize p_d as a non-linear function of the activation of its layer and apply a softmax. At inference time, we forward only the fixed top-k highest-scoring layers based on the softmax output (i.e., the chosen layers do not depend on the input features).\nIn practice, we observe that the Every Other strategy works surprisingly well across many tasks and configurations. Search on Valid and Data Driven Pruning only offer marginal gains. Note that we do not further finetune any of the pruned networks (see Appendix for an analysis of finetuning).\nSetting the drop rate for optimal pruning. There is a straightforward relationship between the drop rate of groups and the average pruning level that the network should be resilient to. Assuming N groups and a fixed drop ratio p, the average number of groups used by the network during training is N(1 − p). As a consequence, to target a pruning size of r groups, the optimal drop rate is:\np* = 1 − r/N.\nFor example, to prune a 16-layer network down to r = 8 layers, the optimal drop rate is p* = 0.5. In practice, we observe that networks are more robust to pruning than this expected ratio suggests, but higher pruning rates lead to better performance for smaller models. We use a LayerDrop rate of p = 0.2 for all our experiments, but we recommend p = 0.5 to target very small inference-time models." }, { "heading": "4 EXPERIMENTAL SETUP", "text": "We apply our method to a variety of sequence modeling tasks: neural machine translation, language modeling, summarization, long form question answering, and various natural language understanding tasks. Our models are implemented in PyTorch using fairseq-py (Ott et al., 2019)1. Additional implementation and training details with hyperparameter settings are in the Appendix.\nNeural Machine Translation. We experiment on the WMT English-German machine translation benchmark using the Transformer Big architecture. We use the dataset of 4.5M en-de sentence pairs from WMT16 (Vaswani et al., 2017) for training, newstest2013 for validation, and newstest2014 for test. We optimize the dropout value within the range {0.1, 0.2, 0.5} on the validation set and set the LayerDrop rate p to 0.2. For generation, we average the last 10 checkpoints, set the length penalty to 0.6, and beam size to 8, following the settings suggested in Wu et al. (2019a), and measure case-sensitive tokenized BLEU. We apply compound splitting, as used in Vaswani et al. (2017).\nLanguage Modeling. We experiment on the Wikitext-103 language modeling benchmark (Merity et al., 2016), which contains 100M tokens and a large vocabulary of 260K types. We adopt the 16 layer Transformer used in Baevski & Auli (2018). We set the LayerDrop rate p to 0.2 and tune the standard dropout parameter in {0.1, 0.2, 0.3} on the validation set. We report test set perplexity (PPL).\n1https://github.com/pytorch/fairseq/tree/master/examples/layerdrop\nSummarization. We adopt the Transformer base architecture and training schedule from Edunov et al. 
(2019) and experiment on the CNN-Dailymail multi-sentence summarization benchmark. The training data contains over 280K full-text news articles paired with multi-sentence summaries (Hermann et al., 2015; See et al., 2017). We tune the generation length in the range {40, 50, 60} and use 3-gram blocking. We set the LayerDrop rate p to 0.2. We evaluate using ROUGE (Lin, 2004).\nLong Form Question Answering. We consider the long form question answering dataset ELI5 of Fan et al. (2019), which consists of 272K question-answer pairs from the subreddit Explain Like I'm Five along with extracted supporting documents from web search. We follow the Transformer Big architecture and training procedure of Fan et al. (2019). We generate long answers using beam search with beam size 5 and apply 3-gram blocking (Fan et al., 2017). We evaluate with ROUGE.\nSentence Representation Pre-training. We train base and large BERT (Devlin et al., 2018) models following the open-source implementation of Liu et al. (2019). We use two datasets: Bookscorpus + Wiki from Liu et al. (2019) and the larger combination of Bookscorpus + OpenWebText + CC-News + Stories (Liu et al., 2019). We evaluate the pretrained models on various natural language understanding tasks. Specifically, we evaluate accuracy on MRPC (Dolan & Brockett, 2005), QNLI (Rajpurkar et al., 2016), MNLI (Williams et al., 2018), and SST2 (Socher et al., 2013)." }, { "heading": "5 RESULTS", "text": "" }, { "heading": "5.1 LAYERDROP AS A REGULARIZER", "text": "Language Modeling. In Table 2, we show the impact of LayerDrop on the performance of a Transformer network trained in the setting of Adaptive Inputs (Baevski & Auli, 2018). Adding LayerDrop to a 16 layer Transformer improves the performance by 0.4 perplexity, matching the state-of-the-art results of Transformer-XL. Our 40 layer Transformer with LayerDrop further improves the state of the art by 0.6 points. Very deep Transformers are typically hard to train because of instability and memory usage, and they are prone to overfitting on a small dataset like Wikitext-103. LayerDrop regularizes the network, reduces the memory usage, and increases training stability as fewer layers are active at each forward pass. These results confirm that this type of approach can be used to efficiently train very deep networks, as shown in Huang et al. (2016) for convolutional networks.\nSequence to sequence modeling. Similarly, as shown in Table 1 and Table 3, applying LayerDrop to Transformers on text generation tasks such as neural machine translation, summarization, and long form question answering also boosts performance for all tasks. In these experiments, we take state-of-the-art Transformer architectures and train them with LayerDrop. In neural machine translation on newstest2014, our 12 encoder layer Transformer model with LayerDrop further improves the state of the art, reaching 30.2 BLEU. In comparison, a standard Transformer trained without LayerDrop diverges with 12 encoder layers. This is a known problem, and techniques such as improved initialization could be used to maintain stability (Junczys-Dowmunt, 2019; Zhang et al., 2019; Wang et al., 2019b; Wu et al., 2019b), but they are outside the scope of this work. Similar results are seen in summarization.\nBi-Directional Pre-training. In a second set of experiments, we look at the impact of LayerDrop on pre-training for sentence representation models and subsequent finetuning on multiple natural language understanding tasks. 
We compare our models to a variant of BERT for sentence representations, called RoBERTa (Liu et al., 2019), and analyze the results of finetuning for data adaptation on MNLI, MRPC, QNLI, and SST2. We apply LayerDrop during both pre-training and finetuning.\nWe compare the performance of the large architecture on the BooksCorpus+Wiki dataset used in BERT. We analyze the performance of training on the additional data used in RoBERTa as well as pre-training for even longer. For fixed model size and training data, LayerDrop can improve the performance of RoBERTa on several tasks. LayerDrop can further be used to both enable and stabilize the training (Huang et al., 2016) of models of double the size for even stronger performance." }, { "heading": "5.2 PRUNING TRANSFORMER LAYERS TO ON-DEMAND DEPTH WITH LAYERDROP", "text": "Pruning Generation Tasks. In Figure 2, we investigate the impact of the number of pruned decoder layers on the performance of a Transformer for language modeling, neural machine translation, and summarization. We compare three different settings: standard Transformer models trained without LayerDrop but subsequently pruned, standard Transformer models trained from scratch to each desired depth, and lastly our approach: pruning layers of a Transformer trained with LayerDrop. Our model is trained once with the maximum number of layers and then pruned to the desired depth, without any finetuning in the shallower configuration. Our approach outperforms small models trained from scratch, showing that LayerDrop leads to more accurate small models across a whole range of depths. Further, training with LayerDrop does not incur the computational cost of retraining a new model for each desired depth. For completeness, dropping layers of a deep Transformer trained without LayerDrop performs poorly, as it was not trained to be robust to missing layers.\nPruning BERT-like Models. In Table 7 (left), we compare pruning Transformers trained with LayerDrop to different approaches used to create smaller, shallower models. We compare to BERT base and RoBERTa base trained from scratch with 6 and 3 layers as well as recent work on distillation, called DistilBERT (Sanh, 2019). We analyze both BERT and RoBERTa models, as the vocabulary is not the same due to differences in subword tokenization, which affects performance.\nDistilBERT occasionally performs worse than BERT of the same size trained from scratch, which confirms the findings of Liu et al. (2018b) about the performance of pruned models compared to training small models from scratch. Our approach, however, obtains results better than BERT and RoBERTa trained from scratch. Further, our method does not need any post-processing: we simply prune every other layer of our RoBERTa model that has been pre-trained with LayerDrop and finetune the small models on each of the downstream tasks, following standard procedure. When training with additional data, shown in Table 7 (right), even stronger performance can be achieved." }, { "heading": "6 ABLATION STUDIES", "text": "Comparison of Structured Dropout. Figure 4 (left) contrasts various forms of structured dropout: dropping attention heads, FFN matrices, and entire Transformer layers. Dropping heads alone is worse than dropping entire sub-layers or layers. It also offers no advantage in terms of running time, as attention heads are computed in parallel for computational efficiency. 
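The group-level masks formalized in Section 3.2.1 can be sketched as follows. This is an illustrative helper, not the fairseq implementation: `groups` is assumed to be a list of index tensors into a flattened weight vector, where a head corresponds to its slices of the attention projections and a layer corresponds to all of its parameter groups taken together.

```python
import torch

def structured_mask(n_weights, groups, p):
    """Sample a {0, 1} mask over a flattened weight vector that is constant
    within each predefined group of indices (Section 3.2.1). Dropping a head
    zeroes one group; dropping a layer zeroes every group in that layer."""
    mask = torch.ones(n_weights)
    for group in groups:                 # group: LongTensor of flat weight indices
        if torch.rand(1).item() < p:
            mask[group] = 0.0            # the whole group is dropped together
    return mask
```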
We observe no large differences between dropping sub-layers and layers, possibly because we are working with relatively shallow networks. In theory, dropping sub-layers should perform better, and we expect this to be the case with very deep Transformers. We experiment with overlapping structured groups, such as heads + layers and heads + sub-layers, and find that their beneficial effects can be advantageously combined. We focus on layers for simplicity, as dropping more structures introduces more parameters to tune.\nComparison of Various Pruning Strategies. Figure 4 (right) contrasts various approaches to sub-selecting model layers at inference time.\nThe predominant method used in this paper, the straightforward strategy of selecting every other layer, is tough to beat. We find that only marginal improvement can be gained by searching over the validation set for the best set of 8 layers to use and by learning which layers to drop. In contrast, dropping chunks of consecutive layers is harmful. Namely, removing the first half or last half of a model is particularly harmful, as the model loses the ability to process the input or to project to the full vocabulary to predict the subsequent word.\nChoosing which Layers to Prune. Not all layers are equally important. In an experiment on Wikitext-103, we pruned selections of 8 layers at random. Figure 5 displays the perplexity when each layer is removed, averaging results from 20 pruned models per layer. The input and output layers of a network are the most important, as they process the input and project to the output vocabulary.\nRelationship between LayerDrop at Training Time and Pruning at Inference Time. Figure 6 displays the relationship between the training-time LayerDrop rate and the performance of a pruned network at test time. If significant depth reduction is desired, training with a larger LayerDrop rate is beneficial, as this equalizes the train and test time settings. An analysis for BERT is in the Appendix." }, { "heading": "7 CONCLUSION", "text": "Structured dropout regularizes neural networks to be more robust to applying structured pruning at inference time. We focus on the setting where structures are layers, enabling pruning of shallow and efficient models of any desired depth. In a variety of text generation and pre-training tasks, we show that LayerDrop enables and stabilizes the training of substantially deeper networks and simultaneously allows for the extraction of models of various depths with strong performance." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 ADDITIONAL IMPLEMENTATION DETAILS", "text": "" }, { "heading": "A.1.1 NEURAL MACHINE TRANSLATION", "text": "WMT en-de: We model a 32K joint byte-pair encoding vocabulary (Sennrich et al., 2015). We train using the cosine learning rate schedule (Loshchilov & Hutter, 2016) from Wu et al. (2019a) with label smoothing 0.1. We train on 8 GPUs for a total training time of 66k seconds.\nIWSLT de-en: The dataset consists of 160K training pairs, fully lowercased. We model a 10K joint BPE vocabulary and generate with beam size 4. We do not average checkpoints. Following Wu et al. (2019a), we use the Transformer base architecture with 6 encoder layers and 6 decoder layers. As the dataset is small, we decrease the overall model size and instead use the following parameters: FFN size 1024, hidden dimension 512, and 4 attention heads. We train on 1 GPU.\nPruning: We apply the Every Other Layer strategy to the decoder and do not finetune."
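The Every Other Layer strategy referenced in these pruning notes can be written in a few lines. This sketch is ours, not from the released code, and it assumes 1-indexed depths, dropping the layers at depths d with d mod ⌊1/p⌋ = 0 as in Section 3.2.2.

```python
import math

def every_other_prune(layers, p):
    """Every Other strategy: with pruning rate p, drop the layers at depths d
    satisfying d % floor(1/p) == 0 and keep the rest in their original order."""
    step = math.floor(1 / p)
    return [layer for d, layer in enumerate(layers, start=1) if d % step != 0]

# Pruning a 12-layer decoder at p = 0.5 keeps the layers at depths 1, 3, 5, 7, 9, 11.
kept = every_other_prune(list(range(1, 13)), p=0.5)
assert kept == [1, 3, 5, 7, 9, 11]
```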
}, { "heading": "A.1.2 LANGUAGE MODELING", "text": "Training: To handle the large vocabulary of Wikitext-103, we follow Dauphin et al. (2017) and Baevski & Auli (2018) in using adaptive softmax (Grave et al., 2016) and adaptive input for computational efficiency. For both input and output embeddings, we use dimension size 1024 and three adaptive bands: 20K, 40K, and 200K. We use a cosine learning rate schedule (Baevski & Auli, 2018; Loshchilov & Hutter, 2016) and train with Nesterov’s accelerated gradient (Sutskever et al., 2013). We set the momentum to 0.99 and renormalize gradients if the norm exceeds 0.1 (Pascanu et al., 2014). During training, we partition the data into blocks of contiguous tokens that ignore document boundaries. At test time, we respect sentence boundaries. We train on 8 GPU for total training time of 216k seconds.\nPruning: We apply the Every Other Layer strategy and do not finetune." }, { "heading": "A.1.3 SUMMARIZATION", "text": "Data: We use the full text (non-anonymized) version of CNN-Dailymail introduced by See et al. (2017). Following Fan et al. (2017), we truncate articles to 400 tokens and model a joint byte-pair vocabulary of 32K types (Sennrich et al., 2016).\nTraining: We train using Adam with a cosine learning rate schedule, warming up for 10K steps. We optimize dropout in the range {0.2, 0.3} on the validation set and set LayerDrop to 0.2. We train on 1 GPU.\nPruning: We apply the Every Other Layer strategy to the decoder and do not finetune." }, { "heading": "A.1.4 LONG FORM QUESTION ANSWERING", "text": "Training: We compare to the full multi-task setting of Fan et al. (2019), where data augmentation and multi-tasking is done at training time to increase the data available. We train on 8 GPU.\nGeneration: We set the minimum length to 150 tokens and the maximum length to 200." }, { "heading": "A.1.5 BI-DIRECTIONAL PRE-TRAINING", "text": "Training: The base architecture is a 12 layer model with embedding size 768 and FFN size 3072. The large architecture consists of 24 layers with embedding size 1024 and FFN size 4096. For both settings, we follow Liu et al. (2019) in using the subword tokenization scheme from Radford et al. (2019), which uses bytes as subword units. This eliminates unknown tokens. Note this produces a different vocabulary size than BERT (Devlin et al., 2018), meaning models of the same depth do not have the same number of parameters. We train with large batches of size 8192 and maintain this batch size using gradient accumulation. We do not use next sentence prediction (Lample & Conneau, 2019). We optimize with Adam with a polynomial decay learning rate schedule. For" }, { "heading": "Hyperparameter Base Large", "text": "BERT-Base, we use 32 GPU (total training time 171k seconds) and for BERT-Large, we use 128 GPU. For the RoBERTa data setting with more data, we use 512 GPU to train BERT-Large.\nFinetuning: During finetuning, we hyperparameter search over three learning rate options (1e-5, 2e-5, 3e-5) and batchsize (16 or 32 sentences). The other parameters are set following Liu et al. (2019). We do single task finetuning, meaning we only tune on the data provided for the given natural language understanding task. We do not perform ensembling. When finetuning models trained with LayerDrop, we apply LayerDrop during finetuning time as well.\nTraining smaller models: We train the 6 and 3 layer RoBERTa models following the same settings, but using the smaller number of layers and without LayerDrop. 
We finetune with the same sweep parameters. The 6 and 3 layer BERT model results are taken from Devlin et al. (2018).\nTraining larger models: We train the 48 layer RoBERTa model with LayerDrop 0.5, so that only 24 layers on average are active during a forward pass.\nPruning: When pruning RoBERTa models, we use the Every Other Layer strategy and finetune the smaller models without LayerDrop." }, { "heading": "A.2 ADDITIONAL RESULTS", "text": "IWSLT. Table 6 displays results on the IWSLT de-en dataset. We see a small improvement, likely because the network is small and already heavily regularized with dropout, attention dropout, and weight decay. The Transformer is not the state-of-the-art architecture, and there remains a large gap between the Transformer and the DynamicConv model proposed by Wu et al. (2019a).\nPruning BERT Models. The numerical values corresponding to the pruned 6 and 3 layer RoBERTa + LayerDrop models are shown in Table 7." }, { "heading": "A.3 ADDITIONAL ANALYSIS", "text": "Impact of LayerDrop on training time. Figure 7 shows the increase in training speed when training with increasingly large quantities of LayerDrop. The words per second were computed on 8 V100 GPUs with 32GB of memory, without 16-bit floating point, for a 16 layer model trained on Wikitext-103. Assuming fixed layer size, LayerDrop removes layers at training time randomly, which increases training speed by almost 2x when dropping half of the layers.\nModel  Valid PPL\nPruned w/ LayerDrop  20.78\n+ Finetune  20.56" }, { "heading": "BERT: Relationship between LayerDrop at Training Time and Pruning at Inference Time", "text": "Similar to the analysis on language modeling, we find that training with larger quantities of LayerDrop allows for more aggressive pruning at inference time on various natural language generation tasks. However, as these tasks involve a finetuning step on the downstream tasks after pre-training, the effect is less straightforward. Results are shown in Figure 8.\nImpact of Finetuning. LayerDrop allows models to be pruned to the desired depth at test time. Apart from finetuning for data adaptation on the GLUE tasks, we do not finetune our smaller models on any of the other tasks we consider in this work. As shown in Table 8, we found that finetuning the pruned models only results in marginal improvement. Further, the finetuning parameters depended on the depth of the model at test time and were difficult to optimize.\nLayerDrop  Dropout  Valid PPL\n0.5  0.1  19.03\n0.5  0.2  19.22\n0.5  0.3  19.31\n0.5  0.4  19.62\n0.5  0.5  19.95\nTable 9: Performance varying Dropout with fixed LayerDrop on a 16 layer language model trained on Wikitext-103 (Valid).\nModel  Valid PPL\nAdaptive Input*  18.4\nRandom LayerDrop 0.2  18.2\nLinear LayerDrop to 0.3  18.6\nLinear LayerDrop to 0.5  18.5\nLinear LayerDrop to 0.8  18.9\nTable 10: Random vs. Linear Decay LayerDrop on a 16 layer language model trained on Wikitext-103 (Valid). * result is from Baevski & Auli (2018).\nStructured Dropout  Valid PPL\nHalf FFN  29.6\nBaseline  28.3\nHead  28.1\nSublayer  19.9\nHead + Sublayer  19.8\nLayer  19.7\nHead + Layer  19.7\nTable 11: Performance varying structured dropout and pruning to an 8 layer language model trained on Wikitext-103 (Valid). Pruning is done by removing every other layer to halve the model size.\nEffect of Varying Standard Dropout. LayerDrop adds a strong regularization effect to neural network training. We examine the importance of tuning the standard dropout parameter when training with LayerDrop. 
In Table 9, we show the performance when LayerDrop is fixed and standard Dropout is varied. We see that when training with LayerDrop, the quantity of standard Dropout can be reduced.\nLayerDrop Schedule: Random or Linear. We investigate the random structured dropping of layers compared to the linear decay schedule proposed in Huang et al. (2016) in Table 10. We find that the linear decay schedule does not provide a performance improvement compared to random dropping, which is more straightforward to implement.\nImpact of Types of Structured Dropout when Pruning. Figure 4 (left) contrasts the performance of various forms of structured dropout, such as dropping attention heads, sub-layers of Transformers such as attention or FFN, portions of FFN matrices, and entire Transformer layers. It examines these results in the setting of evaluating the full-depth model on language modeling and shows that, in general, different types of structured dropout can improve performance.\nIn Table 11, we examine the effect of varying training-time structured dropout on performance when pruning. We show that the trend shown in Figure 4 is consistent with inference-time pruning performance, in particular that Half FFN dropout performs slightly worse, while other forms of structured dropout are beneficial." } ]
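To summarize the mechanism end to end, the following is a minimal sketch of a LayerDrop forward pass. It is a schematic reading of Section 3.2, not the fairseq implementation; it assumes each layer already contains its own residual connection and AddNorm, and applies no rescaling of the surviving layers.

```python
import torch

def layerdrop_forward(x, layers, p=0.2, training=True):
    """During training, skip each layer independently with probability p; at
    inference, run all layers (or a pruned subset, e.g. every other one)."""
    for layer in layers:
        if training and torch.rand(1).item() < p:
            continue  # layer dropped: the residual path carries x through unchanged
        x = layer(x)
    return x
```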
2,020
null
SP:4677ba60a6346626bfba72170b2d0c68cf9ed6be
[ "This paper studies the problem of designing effective exploration strategies in multi-agent domains. The key idea is to define one agent's exploration in terms of its interactions with other agents. This leads to two auxiliary exploration objectives, which measure how one agent's actions affect the dynamics and value of another agent's actions. The paper does an admirable job comparing the proposed method against a number of baselines, where the proposed method performs significantly better. Visualizations and ablation experiments nicely illustrate the contributions of various components of the method.", "This paper proposes methods for incentivizing exploration in multi-agent RL. There are two approaches that are proposed, both framed as influence maximization (of either the state transitions or the decisions of the other agents). The scaling to multiple agents is done via decomposing to pairwise interactions. This influence objective is the appended to the standard intrinsic motivation objective for single agent RL." ]
Intrinsically motivated reinforcement learning aims to address the exploration challenge for sparse-reward tasks. However, the study of exploration methods in transition-dependent multi-agent settings is largely absent from the literature. We aim to take a step towards solving this problem. We present two exploration methods: exploration via information-theoretic influence (EITI) and exploration via decision-theoretic influence (EDTI), by exploiting the role of interaction in coordinated behaviors of agents. EITI uses mutual information to capture the interdependence between the transition dynamics of agents. EDTI uses a novel intrinsic reward, called Value of Interaction (VoI), to characterize and quantify the influence of one agent's behavior on the expected returns of other agents. By optimizing the EITI or EDTI objective as a regularizer, agents are encouraged to coordinate their exploration and learn policies that optimize team performance. We show how to optimize these regularizers so that they can be easily integrated with policy gradient reinforcement learning. The resulting update rule draws a connection between coordinated exploration and intrinsic reward distribution. Finally, we empirically demonstrate the significant strength of our methods in a variety of multi-agent scenarios.
[ { "affiliations": [], "name": "Tonghan Wang" }, { "affiliations": [], "name": "Jianhao Wang" }, { "affiliations": [], "name": "Yi Wu" } ]
[ { "authors": [ "Mohammad Gheshlaghi Azar", "Ian Osband", "Rémi Munos" ], "title": "Minimax regret bounds for reinforce", "venue": "Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Eugenio Bargiacchi", "Timothy Verstraeten", "Diederik Roijers", "Ann Nowé", "Hado Hasselt" ], "title": "Learning to coordinate with coordination graphs in repeated single-stage multi-agent decision problems", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Trevor Barron", "Oliver Obst", "Heni Ben Amor" ], "title": "Information maximizing exploration with a latent dynamics model", "venue": "arXiv preprint arXiv:1804.01238,", "year": 2018 }, { "authors": [ "Andrew G Barto" ], "title": "Intrinsic motivation and reinforcement learning", "venue": null, "year": 2013 }, { "authors": [ "Marc Bellemare", "Sriram Srinivasan", "Georg Ostrovski", "Tom Schaul", "David Saxton", "Remi Munos" ], "title": "Unifying count-based exploration and intrinsic motivation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": "arXiv preprint arXiv:1810.12894,", "year": 2018 }, { "authors": [ "Yuri Burda", "Harri Edwards", "Deepak Pathak", "Amos Storkey", "Trevor Darrell", "Alexei A Efros" ], "title": "Large-scale study of curiosity-driven learning", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yongcan Cao", "Wenwu Yu", "Wei Ren", "Guanrong Chen" ], "title": "An overview of recent progress in the study of distributed multi-agent coordination", "venue": "IEEE Transactions on Industrial informatics,", "year": 2012 }, { "authors": [ "Georgios Chalkiadakis", "Craig Boutilier" ], "title": "Coordination in multiagent reinforcement learning: A bayesian approach", "venue": "In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems,", "year": 2003 }, { "authors": [ "Nuttapong Chentanez", "Andrew G Barto", "Satinder P Singh" ], "title": "Intrinsically motivated reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2005 }, { "authors": [ "Miguel Suau de Castro", "Elena Congeduti", "Rolf AN Starre", "Aleksander Czechowski", "Frans A Oliehoek" ], "title": "Influence-based abstraction in deep reinforcement learning. 
In Adaptive, learning agents workshop", "venue": null, "year": 2019 }, { "authors": [ "Maria Dimakopoulou", "Benjamin Van Roy" ], "title": "Coordinated exploration in concurrent reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Maria Dimakopoulou", "Ian Osband", "Benjamin Van Roy" ], "title": "Scalable coordinated exploration in concurrent reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Vladimir Feinberg", "Alvin Wan", "Ion Stoica", "Michael I Jordan", "Joseph E Gonzalez", "Sergey Levine" ], "title": "Model-based value estimation for efficient model-free reinforcement learning", "venue": "arXiv preprint arXiv:1803.00101,", "year": 2018 }, { "authors": [ "Carlos Florensa", "Yan Duan", "Pieter Abbeel" ], "title": "Stochastic neural networks for hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1704.03012,", "year": 2017 }, { "authors": [ "Jakob Foerster", "Ioannis Alexandros Assael", "Nando de Freitas", "Shimon Whiteson" ], "title": "Learning to communicate with deep multi-agent reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Jakob N Foerster", "Gregory Farquhar", "Triantafyllos Afouras", "Nantas Nardelli", "Shimon Whiteson" ], "title": "Counterfactual multi-agent policy gradients", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Charles W Fox", "Stephen J Roberts" ], "title": "A tutorial on variational bayesian inference", "venue": "Artificial intelligence review,", "year": 2012 }, { "authors": [ "Abhishek Gupta", "Russell Mendonca", "YuXuan Liu", "Pieter Abbeel", "Sergey Levine" ], "title": "Metareinforcement learning of structured exploration strategies", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Rein Houthooft", "Xi Chen", "Yan Duan", "John Schulman", "Filip De Turck", "Pieter Abbeel" ], "title": "Vime: Variational information maximizing exploration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Edward Hughes", "Joel Z Leibo", "Matthew Phillips", "Karl Tuyls", "Edgar Dueñez-Guzman", "Antonio Garcı́a Castañeda", "Iain Dunning", "Tina Zhu", "Kevin McKee", "Raphael Koster" ], "title": "Inequity aversion improves cooperation in intertemporal social dilemmas", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jaekyeom Kim" ], "title": "Emi: Exploration with mutual information", "venue": "In Proceedings of the 36th International Conference on Machine Learning. JMLR. 
org,", "year": 2019 }, { "authors": [ "Shariq Iqbal", "Fei Sha" ], "title": "Actor-attention-critic for multi-agent reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Shariq Iqbal", "Fei Sha" ], "title": "Coordinated exploration via intrinsic rewards for multi-agent reinforcement learning", "venue": "arXiv preprint arXiv:1905.12127,", "year": 2019 }, { "authors": [ "Thomas Jaksch", "Ronald Ortner", "Peter Auer" ], "title": "Near-optimal regret bounds for reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Natasha Jaques", "Angeliki Lazaridou", "Edward Hughes", "Caglar Gulcehre", "Pedro A Ortega", "DJ Strouse", "Joel Z Leibo", "Nando de Freitas" ], "title": "Intrinsic social motivation via causal influence in multi-agent rl", "venue": "arXiv preprint arXiv:1810.08647,", "year": 2018 }, { "authors": [ "Chi Jin", "Zeyuan Allen-Zhu", "Sebastien Bubeck", "Michael I Jordan" ], "title": "Is q-learning provably efficient", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "Computer Science,", "year": 2014 }, { "authors": [ "Ryan Lowe", "Yi Wu", "Aviv Tamar", "Jean Harb", "OpenAI Pieter Abbeel", "Igor Mordatch" ], "title": "Multi-agent actor-critic for mixed cooperative-competitive environments", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Piotr Mirowski", "Razvan Pascanu", "Fabio Viola", "Hubert Soyer", "Andrew J Ballard", "Andrea Banino", "Misha Denil", "Ross Goroshin", "Laurent Sifre", "Koray Kavukcuoglu" ], "title": "Learning to navigate in complex environments", "venue": "arXiv preprint arXiv:1611.03673,", "year": 2016 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Shakir Mohamed", "Danilo Jimenez Rezende" ], "title": "Variational information maximisation for intrinsically motivated reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Ann Nowé", "Peter Vrancx", "Yann-Michaël De Hauwere" ], "title": "Game theory and multi-agent reinforcement learning", "venue": "In Reinforcement Learning,", "year": 2012 }, { "authors": [ "Frans Adriaan Oliehoek", "Stefan J Witwicki", "Leslie Pack Kaelbling" ], "title": "Influence-based abstraction for multiagent systems", "venue": "In Twenty-Sixth AAAI Conference on Artificial Intelligence,", "year": 2012 }, { "authors": [ "Ian Osband", "Benjamin Van Roy" ], "title": "On lower bounds for regret in reinforcement learning", "venue": "arXiv preprint arXiv:1608.02732,", "year": 2016 }, { "authors": [ "Ian Osband", "Daniel Russo", "Benjamin Van Roy" ], "title": "more) efficient reinforcement learning via posterior sampling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Georg Ostrovski", "Marc G Bellemare", "Aäron van den Oord", "Rémi Munos" ], "title": "Count-based exploration with neural density models", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Pierre-Yves Oudeyer", "Frederic 
Kaplan" ], "title": "What is intrinsic motivation? a typology of computational approaches", "venue": "Frontiers in neurorobotics,", "year": 2009 }, { "authors": [ "Pierre-Yves Oudeyer", "Frdric Kaplan", "Verena V Hafner" ], "title": "Intrinsic motivation systems for autonomous mental development", "venue": "IEEE transactions on evolutionary computation,", "year": 2007 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Deepak Pathak", "Parsa Mahmoudieh", "Guanghao Luo", "Pulkit Agrawal", "Dian Chen", "Yide Shentu", "Evan Shelhamer", "Jitendra Malik", "Alexei A Efros", "Trevor Darrell" ], "title": "Zero-shot visual imitation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2018 }, { "authors": [ "Alexander Peysakhovich", "Adam Lerer" ], "title": "Prosocial learning agents solve generalized stag hunts better than selfish ones", "venue": "In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pp. 2043–2044. International Foundation for Autonomous Agents and Multiagent Systems,", "year": 2018 }, { "authors": [ "Tabish Rashid", "Mikayel Samvelyan", "Christian Schroeder Witt", "Gregory Farquhar", "Jakob Foerster", "Shimon Whiteson" ], "title": "Qmix: Monotonic value function factorisation for deep multi-agent reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Martin Riedmiller", "Roland Hafner", "Thomas Lampe", "Michael Neunert", "Jonas Degrave", "Tom Van de Wiele", "Volodymyr Mnih", "Nicolas Heess", "Jost Tobias Springenberg" ], "title": "Learning by playingsolving sparse reward tasks from scratch", "venue": "arXiv preprint arXiv:1802.10567,", "year": 2018 }, { "authors": [ "Jonathan Rubin", "Ohad Shamir", "Naftali Tishby" ], "title": "Trading value and information in mdps. In Decision Making with Imperfect Decision Makers", "venue": null, "year": 2012 }, { "authors": [ "Christoph Salge", "Cornelius Glackin", "Daniel Polani" ], "title": "Changing the environment based on empowerment as intrinsic motivation", "venue": null, "year": 2014 }, { "authors": [ "Tom Schaul", "John Quan", "Ioannis Antonoglou", "David Silver" ], "title": "Prioritized experience replay", "venue": "arXiv preprint arXiv:1511.05952,", "year": 2015 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "A possibility for implementing curiosity and boredom in model-building neural controllers", "venue": "In Proc. 
of the international conference on simulation of adaptive behavior: From animals to animats,", "year": 1991 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Kyunghwan Son", "Daewoo Kim", "Wan Ju Kang", "David Earl Hostallero", "Yung Yi" ], "title": "Qtran: Learning to factorize with transformation for cooperative multi-agent reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Susanne Still", "Doina Precup" ], "title": "An information-theoretic approach to curiosity-driven reinforcement learning", "venue": "Theory in Biosciences,", "year": 2012 }, { "authors": [ "Malcolm Strens" ], "title": "A bayesian framework for reinforcement learning", "venue": "In ICML,", "year": 2000 }, { "authors": [ "DJ Strouse", "Max Kleiman-Weiner", "Josh Tenenbaum", "Matt Botvinick", "David J Schwab" ], "title": "Learning to share and hide intentions using information regularization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Peter Sunehag", "Guy Lever", "Audrunas Gruslys", "Wojciech Marian Czarnecki", "Vinicius Zambaldi", "Max Jaderberg", "Marc Lanctot", "Nicolas Sonnerat", "Joel Z Leibo", "Karl Tuyls" ], "title": "Value-decomposition networks for cooperative multi-agent learning based on team reward", "venue": "In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems,", "year": 2087 }, { "authors": [ "Richard S Sutton", "David A McAllester", "Satinder P Singh", "Yishay Mansour" ], "title": "Policy gradient methods for reinforcement learning with function approximation", "venue": "In Advances in neural information processing systems,", "year": 2000 }, { "authors": [ "Haoran Tang", "Rein Houthooft", "Davis Foote", "Adam Stooke", "OpenAI Xi Chen", "Yan Duan", "John Schulman", "Filip DeTurck", "Pieter Abbeel" ], "title": "exploration: A study of count-based exploration for deep reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ying Wen", "Yaodong Yang", "Rui Luo", "Jun Wang", "Wei Pan" ], "title": "Probabilistic recursive reasoning for multi-agent reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yi Wu", "Yuxin Wu", "Georgia Gkioxari", "Yuandong Tian" ], "title": "Building generalizable agents with a realistic and rich 3d environment", "venue": "arXiv preprint arXiv:1801.02209,", "year": 2018 }, { "authors": [ "Yuxin Wu", "Yuandong Tian" ], "title": "Training agent for first-person shooter game with actor-critic curriculum learning", "venue": null, "year": 2016 }, { "authors": [ "Chongjie Zhang", "Victor Lesser" ], "title": "Coordinated multi-agent reinforcement learning in networked distributed pomdps", "venue": "In Twenty-Fifth AAAI Conference on Artificial Intelligence,", "year": 2011 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning algorithms aim to learn a policy that maximizes the accumulative reward from an environment. Many advances of deep reinforcement learning rely on a dense shaped reward function, such as distance to the goal (Mirowski et al., 2016; Wu et al., 2018), scores in games (Mnih et al., 2015) or expert-designed rewards (Wu & Tian, 2016; OpenAI, 2018), but they tend to struggle in many real-world scenarios with sparse rewards (Burda et al., 2019). Therefore, many recent works propose to introduce additional intrinsic incentives to boost exploration, including pseudocounts (Bellemare et al., 2016; Tang et al., 2017; Ostrovski et al., 2017), model-learning improvements (Burda et al., 2019; Pathak et al., 2017; Burda et al., 2018), and information gain (Florensa et al., 2017; Gupta et al., 2018; Hyoungseok Kim, 2019). These works result in significant progress in many challenging tasks such as Montezuma Revenge (Burda et al., 2018), robotic manipulation (Pathak et al., 2018; Riedmiller et al., 2018), and Super Mario games (Burda et al., 2019; Pathak et al., 2017).\nNotably, most of the existing breakthroughs on sparse-reward environments have been focusing on single-agent scenarios and leave the exploration problem largely unstudied for multi-agent settings – it is common in real-world applications that multiple agents are required to solve a task in a coordinated fashion (Cao et al., 2012; Nowé et al., 2012; Zhang & Lesser, 2011). This problem has recently attracted attention and several exploration strategies have been proposed for transition-independent cooperative multi-agent settings (Dimakopoulou & Van Roy, 2018; Dimakopoulou et al., 2018; Bargiacchi et al., 2018; Iqbal & Sha, 2019b). Nevertheless, how to explore effectively in more general scenarios with complex reward and transition dependency among cooperative agents remains an open research problem.\n∗Equal Contribution.\nThis paper aims to take a step towards this goal. Our basic idea is to coordinate agents’ exploration by taking into account their interactions during their learning processes. Configurations where interaction happens (interaction points) lie at critical junctions in the state-action space, through these critical configurations can transit to potentially important under-explored regions. To exploit this idea, we propose exploration strategies where agents start with decentralized exploration driven by their individual curiosity, and are also encouraged to visit interaction points to influence the exploration processes of other agents and help them get more extrinsic and intrinsic rewards. Based on how to quantify influence among agents, we propose two exploration methods. Exploration via information-theoretic influence (EITI) uses mutual information (MI) to capture the interdependence between the transition dynamics of agents. Exploration via decision-theoretic influence (EDTI) goes further and uses a novel measure called value of interaction (VoI) to disentangle the effect of one agent’s state-action pair on the expected (intrinsic) value of other agents. By optimizing MI or VoI as a regularizer to the value function, agents are encouraged to explore state-action pairs where they can exert influences on other agents for learning sophisticated multi-agent cooperation strategies.\nTo efficiently optimize MI and VoI, we propose augmented policy gradient formulations so that the gradients can be estimated purely from trajectories. 
The resulting update rule draws a connection between coordinated exploration and the distribution of individual intrinsic rewards among team members, which further explains why our methods are able to facilitate multi-agent exploration.
We demonstrate the effectiveness of our methods on a variety of sparse-reward cooperative multi-agent tasks. Empirical results show that both EITI and EDTI allow for the discovery of influential states, and EDTI further filters out interactions that have no effect on performance. Our results also imply that these influential states are implicitly discovered as subgoals in the search space that guide and coordinate exploration. The video of experiments is available at https://sites.google.com/view/influence-based-mae/." }, { "heading": "2 SETTINGS", "text": "In our work, we consider a fully cooperative multi-agent task that can be modelled by a factored multi-agent MDP $G = \langle N, S, A, T, r, h, n \rangle$, where $N \equiv \{1, 2, \dots, n\}$ is the finite set of agents, $S \equiv \times_{i \in N} S_i$ is the finite set of joint states, and $S_i$ is the state set of agent $i$. At each timestep, each agent selects an action $a_i \in A_i$ at state $s$, forming a joint action $\mathbf{a} \in A \equiv \times_{i \in N} A_i$, resulting in a shared extrinsic reward $r(s, \mathbf{a})$ for each agent and the next state $s'$ according to the transition function $T(s'|s, \mathbf{a})$. The objective of the task is that each agent learns a policy $\pi_i(a_i|s_i)$ such that the joint policy maximizes team performance. The joint policy $\pi = \langle \pi_1, \dots, \pi_n \rangle$ induces an action-value function $Q^{\text{ext},\pi}(s, \mathbf{a}) = \mathbb{E}_{\tau}\left[\sum_{t=0}^{h} r^t \mid s_0 = s, \mathbf{a}_0 = \mathbf{a}, \pi\right]$ and a value function $V^{\text{ext},\pi}(s) = \max_{\mathbf{a}} Q^{\text{ext},\pi}(s, \mathbf{a})$, where $\tau$ is the episode trajectory and $h$ is the horizon.
We adopt a centralized training and decentralized execution paradigm, which has been widely used in multi-agent deep reinforcement learning (Foerster et al., 2016; Lowe et al., 2017; Foerster et al., 2018; Rashid et al., 2018). During training, agents are granted access to the states, actions, (intrinsic) rewards, and value functions of other agents, while decentralized execution only requires individual states." }, { "heading": "3 INFLUENCE-BASED COORDINATED MULTI-AGENT EXPLORATION", "text": "Efficient exploration is critical for reinforcement learning, particularly in sparse-reward tasks. Intrinsic motivation (Oudeyer & Kaplan, 2009) is a crucial mechanism for behaviour learning since it provides the driver of exploration. Therefore, to trade off exploration and exploitation, it is common for an RL agent to maximize an objective of the expected extrinsic reward augmented by the expected intrinsic reward. Curiosity is one of the extensively studied intrinsic rewards that encourages an agent to explore according to its uncertainty about the environment, which can be measured by model prediction error (Burda et al., 2019; Pathak et al., 2017; Burda et al., 2018) or state visitation counts (Bellemare et al., 2016; Tang et al., 2017; Ostrovski et al., 2017).
While such an intrinsic motivation as curiosity drives effective individual exploration, it is often not sufficient for learning in collaborative multi-agent settings, because it does not take into account agent interactions. To encourage interactions, we propose an influence value that aims to quantify one agent's influence on the exploration processes of other agents. Maximizing this value encourages agents to visit interaction points more often, through which the agent team can reach configurations that are rarely visited by decentralized exploration. 
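Before formalizing the influence value, note that in the discrete tasks studied later the curiosity term just described is instantiated as a simple visitation-count bonus; Appendix D reports $\eta / \sqrt{N(s)}$ for the $N(s)$-th visit to state $s$. A minimal sketch of such a bonus follows (the class name and interface are our own illustration, not the authors' code):

```python
import math
from collections import defaultdict

class CountBasedCuriosity:
    """Per-agent count-based curiosity bonus u_i(s) = eta / sqrt(N(s)),
    the visitation-count form reported in Appendix D."""

    def __init__(self, eta=1.0):
        self.eta = eta
        self.counts = defaultdict(int)  # hash table of visitation counts N(s)

    def bonus(self, state):
        key = tuple(state)        # discrete grid states hash as tuples
        self.counts[key] += 1     # this is the N(s)-th visit to state s
        return self.eta / math.sqrt(self.counts[key])
```

Each agent would keep its own instance and add the returned bonus to its extrinsic reward at every step.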
In the next sections, we provide two ways to formulate the influence value with such properties, leading to two exploration strategies.
Thus, for each agent $i$, our overall optimization objective is:
$$J_{\theta_i}[\pi_i \mid \pi_{-i}, p_0] \equiv V^{\text{ext},\pi}(s_0) + V_i^{\text{int},\pi}(s_0) + \beta \cdot I^{\pi}_{-i|i}, \quad (1)$$
where $p_0(s_0)$ is the initial state distribution, $\pi_{-i}$ is the joint policy excluding that of agent $i$, $V_i^{\text{int},\pi}(s)$ is the intrinsic value function of agent $i$, $I^{\pi}_{-i|i}$ is the influence value, and $\beta > 0$ is a weighting term. In this paper, we use the following notations:
$$\tilde{r}_i(s, \mathbf{a}) = r(s, \mathbf{a}) + u_i(s_i, a_i), \quad (2)$$
$$V_i^{\pi}(s) = V^{\text{ext},\pi}(s) + V_i^{\text{int},\pi}(s), \quad (3)$$
$$Q_i^{\pi}(s, \mathbf{a}) = \tilde{r}_i(s, \mathbf{a}) + \sum_{s'} T(s'|s, \mathbf{a})\, V_i^{\pi}(s'), \quad (4)$$
where $u_i(s_i, a_i)$ is a curiosity-derived intrinsic reward, $\tilde{r}_i(s, \mathbf{a})$ is the sum of intrinsic and extrinsic rewards, and $V_i^{\pi}(s)$ and $Q_i^{\pi}(s, \mathbf{a})$ here contain both intrinsic and extrinsic rewards." }, { "heading": "3.1 EXPLORATION VIA INFORMATION-THEORETIC INFLUENCE", "text": "One critical problem in the learning framework presented above is how to define the influence value $I$. For simplicity, we start with the two-agent case. The first method we propose uses mutual information between agents' trajectories to measure one agent's influence on the other agent's learning process. Such mutual information can be defined as the information gain of one agent's state transition given the other's state and action. Without loss of generality, we define it from the perspective of agent 1:
$$MI^{\pi}_{2|1}(S'_2; S_1, A_1 \mid S_2, A_2) = \sum_{s, \mathbf{a}, s'_2 \in (S, A, S_2)} p^{\pi}(s, \mathbf{a}, s'_2) \left[ \log p^{\pi}(s'_2|s, \mathbf{a}) - \log p^{\pi}(s'_2|s_2, a_2) \right], \quad (5)$$
where $s = (s_1, s_2)$ is the joint state, $\mathbf{a} = (a_1, a_2)$ is the joint action, and $S_i$ and $A_i$ are the random variables of the state and action of agent $i$ subject to the distribution induced by the joint policy $\pi$. We thus define $I^{\pi}_{2|1}$ as $MI^{\pi}_{2|1}(S'_2; S_1, A_1 \mid S_2, A_2)$, which captures transition interactions between agents. Optimizing this objective encourages agent 1 to visit critical points where it can influence the transition probability of agent 2. We call such an exploration method exploration via information-theoretic influence (EITI).
Optimizing $MI^{\pi}_{2|1}$ with respect to the policy parameters $\theta_1$ of agent 1 is challenging, because it is an expectation with respect to a distribution that depends on $\theta_1$. The gradient consists of two terms:
$$\nabla_{\theta_1} MI^{\pi}(S'_2; S_1, A_1 \mid S_2, A_2) = \sum_{s, \mathbf{a}, s'_2 \in (S, A, S_2)} \nabla_{\theta_1}\!\left(p^{\pi}(s, \mathbf{a}, s'_2)\right) \log \frac{p(s'_2|s, \mathbf{a})}{p^{\pi}(s'_2|s_2, a_2)} + \sum_{s, \mathbf{a}, s'_2 \in (S, A, S_2)} p^{\pi}(s, \mathbf{a}, s'_2)\, \nabla_{\theta_1} \log \frac{p(s'_2|s, \mathbf{a})}{p^{\pi}(s'_2|s_2, a_2)}. \quad (6)$$
While the second term is an expectation over the trajectory and can be shown to be zero (see Appendix B.1), the first term is unwieldy because it requires the gradient of the stationary distribution, which depends on the policies and the dynamics of the environment. Fortunately, the gradient can still be estimated purely from sampled trajectories by drawing inspiration from the proof of the policy gradient theorem (Sutton et al., 2000).
The resulting policy gradient update is:
$$\nabla_{\theta_1} J_{\theta_1}(t) = \left( \hat{R}^t_1 - \hat{V}^{\pi}_1(s_t) \right) \nabla_{\theta_1} \log \pi_{\theta_1}(a^t_1|s^t_1), \quad (7)$$
where $\hat{V}^{\pi}_1(s_t)$ is an augmented value function of $\hat{R}^t_1 = \sum_{t'=t}^{h} \hat{r}^{t'}_1$ and
$$\hat{r}^t_1 = r^t + u^t_1 + \beta \log \frac{p(s^{t+1}_2 | s^t_1, s^t_2, a^t_1, a^t_2)}{p(s^{t+1}_2 | s^t_2, a^t_2)}. \quad (8)$$
The third term, which we call the EITI reward, is 0 when the agents are transition-independent, i.e., when $p(s^{t+1}_2 | s^t_1, s^t_2, a^t_1, a^t_2) = p(s^{t+1}_2 | s^t_2, a^t_2)$, and is positive when $s^t_1, a^t_1$ increase the probability of agent 2 transitioning to $s^{t+1}_2$. 
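To make Eq. 8 concrete, here is a minimal count-based sketch of the EITI bonus (our own illustration: class and method names are assumptions, and the two conditional probabilities are estimated by Monte Carlo visitation counts in the spirit of Appendix C; states and actions are assumed hashable):

```python
import math
from collections import defaultdict

class EITIReward:
    """Empirical EITI bonus from Eq. 8:
    beta * log[ p(s2' | s1, s2, a1, a2) / p(s2' | s2, a2) ],
    with both conditionals estimated from visitation counts."""

    def __init__(self, beta=0.1, eps=1e-8):
        self.beta = beta
        self.eps = eps  # avoids log(0) before counts accumulate
        self.n_joint = defaultdict(int)     # N(s1, s2, a1, a2, s2')
        self.n_joint_sa = defaultdict(int)  # N(s1, s2, a1, a2)
        self.n_marg = defaultdict(int)      # N(s2, a2, s2')
        self.n_marg_sa = defaultdict(int)   # N(s2, a2)

    def update(self, s1, s2, a1, a2, s2_next):
        self.n_joint[(s1, s2, a1, a2, s2_next)] += 1
        self.n_joint_sa[(s1, s2, a1, a2)] += 1
        self.n_marg[(s2, a2, s2_next)] += 1
        self.n_marg_sa[(s2, a2)] += 1

    def reward(self, s1, s2, a1, a2, s2_next):
        p_joint = self.n_joint[(s1, s2, a1, a2, s2_next)] / max(self.n_joint_sa[(s1, s2, a1, a2)], 1)
        p_marg = self.n_marg[(s2, a2, s2_next)] / max(self.n_marg_sa[(s2, a2)], 1)
        return self.beta * math.log((p_joint + self.eps) / (p_marg + self.eps))
```

During centralized training, update() would be called on every transition and the returned bonus added to $r^t + u^t_1$, as in Eq. 8.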
Therefore, the EITI reward is an intrinsic motivation that encourages agent 1 to visit more frequently the state-action pairs where it can influence the trajectory of agent 2. The estimation of $p(s^{t+1}_2 | s^t_1, s^t_2, a^t_1, a^t_2)$ and $p(s^{t+1}_2 | s^t_2, a^t_2)$ is discussed in Appendix C. We assume that agents know the states and actions of other agents, but this information is only available during centralized training. During execution, agents only have access to their local observations." }, { "heading": "3.2 EXPLORATION VIA DECISION-THEORETIC INFLUENCE", "text": "Mutual information characterizes the influence of one agent's trajectory on that of the other and captures interactions between the transition functions of the agents. However, it does not reveal the value of these interactions, i.e., which interactions lead to more internal and external rewards ($\tilde{r}$). To address this issue, we propose exploration via decision-theoretic influence (EDTI), based on a decision-theoretic measure of $I$ called Value of Interaction (VoI), which disentangles both transition and reward influences. VoI is defined as the expected difference between the action-value function of one agent (e.g., agent 2) and its counterfactual action-value function that does not consider the state and action of the other agent (e.g., agent 1):
$$VoI^{\pi}_{2|1}(S'_2; S_1, A_1 \mid S_2, A_2) = \sum_{s, \mathbf{a}, s'_2 \in (S, A, S_2)} p^{\pi}(s, \mathbf{a}, s'_2) \left[ Q^{\pi}_2(s, \mathbf{a}, s'_2) - Q^{\pi,*}_{2|1}(s_2, a_2, s'_2) \right], \quad (9)$$
where $Q^{\pi}_2(s, \mathbf{a}, s'_2)$ is the expected reward (including intrinsic rewards) of agent 2, defined as:
$$Q^{\pi}_2(s, \mathbf{a}, s'_2) = \tilde{r}_2(s, \mathbf{a}) + \gamma \sum_{s'_1} p(s'_1|s, \mathbf{a}, s'_2)\, V^{\pi}_2(s'), \quad (10)$$
and the counterfactual action-value function $Q^{\pi,*}_2$ (which also includes intrinsic and extrinsic rewards) can be obtained by marginalizing out the state and action of agent 1:
$$Q^{\pi,*}_{2|1}(s_2, a_2, s'_2) = \sum_{s^*_1, a^*_1} p^{\pi}(s^*_1, a^*_1|s_2, a_2) \left[ \tilde{r}_2(s^*_1, s_2, a^*_1, a_2) + \gamma \sum_{s'_1} p(s'_1|s^*_1, s_2, a^*_1, a_2, s'_2)\, V^{\pi}_2(s') \right]. \quad (11)$$
Note that the definition of VoI is analogous to that of MI; the difference lies in that $\log p(\cdot)$ measures the amount of information while $Q$ measures the action value. Although VoI can be obtained by learning $Q^{\pi}_2(s, \mathbf{a})$ and $Q^{\pi}_2(s_2, a_2)$ and calculating the difference, we propose to explicitly marginalize out $s^*_1$ and $a^*_1$ utilizing the estimated model transition probabilities $p^{\pi}(s'_2|s_2, a_2)$ and $p(s'_2|s, \mathbf{a})$ to get a more accurate value estimate (Feinberg et al., 2018). The performance of these two formulations is compared in the experiments.
The value functions $Q$ and $V$ used in VoI contain both expected external rewards and internal rewards, which not only encourages coordinated exploration through the influence between intrinsic rewards but also filters out meaningless interactions that cannot lead to extrinsic reward after the intrinsic reward diminishes. To facilitate the optimization of VoI, we rewrite it as an expectation over state-action trajectories:
$$VoI^{\pi}_{2|1}(S'_2; S_1, A_1 \mid S_2, A_2) = \mathbb{E}_{\tau}\left[ \tilde{r}_2(s, \mathbf{a}) - \tilde{r}^{\pi}_2(s_2, a_2) + \gamma \left( 1 - \frac{p^{\pi}(s'_2|s_2, a_2)}{p(s'_2|s, \mathbf{a})} \right) V^{\pi}_2(s') \right], \quad (12)$$
where $\tilde{r}^{\pi}_2(s_2, a_2)$ is the counterfactual immediate reward. The detailed proof is deferred to Appendix B.2. From this definition, we can intuitively see how VoI reflects the value of interactions: $\tilde{r}_2(s, \mathbf{a}) - \tilde{r}^{\pi}_2(s_2, a_2)$ and $1 - p^{\pi}(s'_2|s_2, a_2)/p(s'_2|s, \mathbf{a})$ measure the influence of agent 1 on the immediate reward and the transition function of agent 2, respectively, and $V^{\pi}_2(s')$ serves as a scale factor in terms of future value. 
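As a rough illustration, the per-step quantity inside the expectation of Eq. 12 could be evaluated from estimated components as follows (a sketch of ours, not the authors' code: the function and argument names are assumptions, and the default $\gamma = 0.99$ is an assumed value):

```python
def voi_step_term(r2_joint, r2_counterfactual, p_marg, p_joint, v2_next, gamma=0.99):
    """One-sample estimate of the integrand of Eq. 12:
    [r~2(s,a) - r~2^pi(s2,a2)] + gamma * (1 - p(s2'|s2,a2)/p(s2'|s,a)) * V2(s').

    r2_joint:          r~2(s, a), agent 2's intrinsic + extrinsic reward under the joint state-action
    r2_counterfactual: r~2^pi(s2, a2), the reward with agent 1's state-action marginalized out
    p_marg, p_joint:   estimated p(s2' | s2, a2) and p(s2' | s, a)
    v2_next:           V2(s'), agent 2's (intrinsic + extrinsic) value at the next state
    """
    reward_influence = r2_joint - r2_counterfactual          # reward interaction
    ratio = p_marg / max(p_joint, 1e-8)                      # marginal vs. joint transition prob.
    transition_influence = gamma * (1.0 - ratio) * v2_next   # transition interaction, scaled by V2(s')
    return reward_influence + transition_influence
```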
Only when agent 1 and agent 2 are both transition- and reward-independent, i.e., when $p^{\pi}(s'_2|s_2, a_2) = p(s'_2|s, \mathbf{a})$ and $r^{\pi}_2(s_2, a_2) = r_2(s, \mathbf{a})$, will VoI equal 0. In particular, maximizing VoI with respect to the policy parameters $\theta_1$ will lead agent 1 to meaningful interaction points, where $V^{\pi}_2(s')$ is high and $s_1, a_1$ can increase the probability that $s'$ is reached.
In this learning framework, agents initially explore the environment individually, each driven by its own curiosity, during which they discover potentially valuable interaction points where they can influence the transition function and (intrinsic) reward structure of each other. VoI highlights these points and encourages agents to visit these configurations more frequently. As the intrinsic reward diminishes, VoI can gradually distinguish those interaction points that are necessary to get extrinsic rewards." }, { "heading": "3.2.1 POLICY OPTIMIZATION WITH VOI", "text": "We want to optimize $J_{\theta_i}$ with respect to the policy parameters $\theta_i$, where the most cumbersome term is $\nabla_{\theta_i} VoI_{-i|i}$. For brevity, we consider a two-agent case, e.g., optimizing $VoI_{2|1}$ with respect to the policy parameters $\theta_1$. Directly computing the gradient $\nabla_{\theta_1} VoI_{2|1}$ is not stable, because $VoI_{2|1}$ contains the policy-dependent functions $\tilde{r}^{\pi}_2(s_2, a_2)$, $p^{\pi}(s'_2|s_2, a_2)$, and $V^{\pi}_2(s')$ (see Eq. 12). To stabilize training, we use target functions to approximate these policy-dependent functions, which is a commonly used technique in deep RL (Mnih et al., 2015). With this approximation, we denote
$$g_2(s, \mathbf{a}) = \tilde{r}_2(s, \mathbf{a}) - \tilde{r}^{-}_2(s_2, a_2) + \gamma \sum_{s'} T(s'|s, \mathbf{a}) \left( 1 - \frac{p^{-}(s'_2|s_2, a_2)}{p(s'_2|s, \mathbf{a})} \right) V^{-}_2(s'_1, s'_2), \quad (13)$$
where $\tilde{r}^{-}_2$, $p^{-}$, and $V^{-}_2$ are the corresponding target functions. As these target functions are only periodically updated during learning, their gradients over $\theta_1$ can be approximately ignored. Therefore, from Eq. 12, we have
$$\nabla_{\theta_1} VoI^{\pi}_{2|1}(S'_2; S_1, A_1 \mid S_2, A_2) \approx \sum_{s, \mathbf{a} \in (S, A)} \left( \nabla_{\theta_1} p^{\pi}(s, \mathbf{a}) \right) g_2(s, \mathbf{a}). \quad (14)$$
Similar to the calculation of $\nabla_{\theta_i} MI$, we get the gradient at every step (see Appendix B.3 for the proof):
$$\nabla_{\theta_1} J_{\theta_1}(t) \approx \left( \hat{R}^t_1 - \hat{V}^{\pi}_1(s_t) \right) \nabla_{\theta_1} \log \pi_{\theta_1}(a^t_1|s^t_1), \quad (15)$$
where $\hat{V}^{\pi}_1(s_t)$ is an augmented value function regressed towards $\hat{R}^t_1 = \sum_{t'=t}^{h} \hat{r}^{t'}_1$ and
$$\hat{r}^t_1 = r^t + u^t_1 + \beta \left[ u^t_2 + \gamma \left( 1 - \frac{p^{-}(s^{t+1}_2|s^t_2, a^t_2)}{p(s^{t+1}_2|s^t_1, s^t_2, a^t_1, a^t_2)} \right) V^{-}_2(s^{t+1}_1, s^{t+1}_2) \right]. \quad (16)$$
We call $u^t_2 + \gamma \left( 1 - \frac{p^{-}(s^{t+1}_2|s^t_2, a^t_2)}{p(s^{t+1}_2|s^t_1, s^t_2, a^t_1, a^t_2)} \right) V^{-}_2(s^{t+1}_1, s^{t+1}_2)$ the EDTI reward." }, { "heading": "3.3 DISCUSSIONS", "text": "Scale to Large Settings: For cases with more than two agents, the VoI of agent $i$ on the other agents can be defined similarly to Eq. 9, which we denote $VoI^{\pi}_{-i|i}(S'_{-i}; S_i, A_i \mid S_{-i}, A_{-i})$, where $S_{-i}$ and $A_{-i}$ are the state and action sets of all agents other than agent $i$. In practice, agent interactions can often be decomposed into pairwise interactions, so $VoI^{\pi}_{-i|i}(S'_{-i}; S_i, A_i \mid S_{-i}, A_{-i})$ is well approximated by the sum of pairwise values of interaction:
$$VoI^{\pi}_{-i|i}(S'_{-i}; S_i, A_i \mid S_{-i}, A_{-i}) \approx \sum_{j \in N, j \neq i} VoI^{\pi}_{j|i}(S'_j; S_i, A_i \mid S_{-i}, A_{-i}). \quad (17)$$
Relationship between EITI and EDTI: The EITI and EDTI gradient updates are obtained from information- and decision-theoretic influence, respectively. It is therefore notable that part of the EDTI reward is a lower bound of the EITI reward:
$$1 - \frac{p(s'_{-i}|s_{-i}, a_{-i})}{p(s'_{-i}|s, \mathbf{a})} \le \log \frac{p(s'_{-i}|s, \mathbf{a})}{p(s'_{-i}|s_{-i}, a_{-i})}, \quad \forall s, \mathbf{a}, s'_{-i}, \quad (18)$$
which easily follows given that $\log x \ge 1 - 1/x$ for all $x > 0$. 
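To spell the bound out, a short derivation of inequality (18) from the elementary bound just cited (an elaboration we add for clarity):

```latex
% Set x = p(s'_{-i} \mid s, \mathbf{a}) \,/\, p(s'_{-i} \mid s_{-i}, a_{-i}) > 0.
% Since \log x \ge 1 - \tfrac{1}{x} for all x > 0 (with equality iff x = 1),
\log \frac{p(s'_{-i} \mid s, \mathbf{a})}{p(s'_{-i} \mid s_{-i}, a_{-i})}
\;\ge\; 1 - \frac{p(s'_{-i} \mid s_{-i}, a_{-i})}{p(s'_{-i} \mid s, \mathbf{a})},
% which is exactly Eq. (18): the transition part of the per-step EDTI bonus
% (before the V_2^- scale factor) never exceeds the corresponding EITI bonus.
```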
This draws a connection between the EITI and EDTI rewards.
Comparing EDTI to Centralized Methods: Different from a centralized method that directly includes the value functions of other agents in the optimization objective (i.e., by setting the total reward $\hat{r}_i = r + u_i + \beta(u_{-i} + \gamma V_{-i})$, which is called plusV henceforth), the EDTI reward for agent $i$ disentangles its contributions to the values of other agents using a counterfactual formulation. This difference is important for quantifying influence because the value of another agent does not just contain the contributions from agent $i$, but also those of itself and of third-party agents. Therefore, EDTI is a kind of intrinsic reward assignment. Our experiments in the next section compare the performance of plusV against our methods, which verifies the importance of this intrinsic reward assignment." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "Our experiments aim to answer the following questions: (1) Can EITI and EDTI rewards capture interaction points? If they can, how do these points change throughout exploration? (2) Can exploiting these interaction points facilitate exploration and learning performance? (3) Can EDTI filter out interaction points that are not related to environmental rewards? (4) What if only the reward influence between agents is disentangled? We evaluate our approach on a set of multi-agent tasks with sparse rewards based on a discrete version of the multi-agent particle world environment (Lowe et al., 2017). PPO (Schulman et al., 2017) is used as the underlying algorithm. For evaluation, all experiments are carried out with 5 different random seeds, and results are shown with 95% confidence intervals. Demonstrative videos are available online at https://sites.google.com/view/influence-based-ma-exploration/.
Baselines We compare our methods with the various baselines shown in Table 1. In particular, we carry out the following ablation studies: i) r influence disentangles the immediate reward influence between agents (the derivation of the associated augmented reward can be found in Appendix B.4; long-term reward influence is not considered because it inevitably involves transition interactions); ii) plusV, as described in Sec. 3.3; iii) shared critic uses decentralized PPO agents with a shared centralized value function and is thus a cooperative version of MADDPG (Lowe et al., 2017) augmented with a curiosity intrinsic reward; iv) Q-Q is similar to EDTI but without the explicit counterfactual formulation, as described in Sec. 3.2. We also note that EITI is an ablation of EDTI that considers only transition interactions. PlusV, shared critic, Q-Q, and cen control have access to global or other agents' value functions during training. During execution, all the methods except cen control only require local state." }, { "heading": "4.1 DIDACTIC EXAMPLES", "text": "We present two didactic examples of multi-agent cooperation tasks with sparse reward to explain how EITI and EDTI work. The first didactic example consists of a 30 × 30 maze with two rooms and a door with two switches (Fig. 1 left). In the optimal strategy, one agent should first step on switch 1 to help the other agent pass the door, and then the agent that has already reached the right half should further go to switch 2 to bring the remaining agent in. There are two pairs of interaction points in this task: (switch 1, door) and (switch 2, door), i.e., the transition probability of the agent near the door is determined by whether another agent is on one of the switches.
Fig. 1-right and Fig. 2-top show the learning curves of our methods and all the baselines, among which EITI, EDTI, r influence, Multi, and centralized control can learn the winning strategy, with ours learning much more efficiently. Fig. 2-bottom gives a possible explanation of why our methods work. The EITI and EDTI rewards successfully highlight the interaction points (before 100 and 2100 updates, respectively). Agents are encouraged to explore these configurations more frequently and thus have a better chance to learn the goal strategy. The EDTI reward considers the value function of the other agent, so it converges more slowly than the EITI reward. In contrast, directly adding the other agent's intrinsic rewards and value functions is noisy (see "plusV reward") and confuses the agent, because these contain the effect of the other agent's exploration. As for centralized control, global curiosity encourages agents to try all possible configurations, so it can find environmental rewards in most tasks. However, visiting all configurations without bias renders it inefficient – external rewards begin to dominate the behaviors of agents after 7000 updates even with the help of the centralized learning algorithm. Our methods use the same information as centralized exploration but take advantage of agents' interactions to accelerate exploration.
In order to evaluate whether EDTI can help filter out noisy interaction points and accelerate exploration, we conduct experiments in a second didactic task (see Fig. 1 middle). It is also a navigation task, in a 25 × 25 maze where agents are rewarded for being in a goal room. However, in this experiment, we consider a case where there are four rooms and only the upper-right one is attached to the reward. This task contains 6 pairs of interaction points (switch 1 with each of the doors, and each switch with the door of the same room), but only two of them are related to external rewards, i.e., (switch 1, door 1) and (switch 2, door 1). As Fig. 3-right shows, EITI agents treat the three doors equally even after 7400 updates (see Fig. 3 right, 7400 updates, top row). In comparison, although the EDTI reward suffers from noise in the beginning, it clearly highlights the two pairs of valuable interaction points (see Fig. 3 right, 7400 updates, bottom row) as the intrinsic reward diminishes. This can explain why EDTI outperforms EITI (Fig. 3 left)." }, { "heading": "4.2 EXPLORATION IN COMPLEX TASKS", "text": "Next, we evaluate the performance of our methods on more complex tasks. To this end, we use three sparse-reward cooperative multi-agent tasks depicted in Fig. 7 of Appendix D and analyzed below. Details of the implementation and experiment settings are also described in Appendix D.
Push-Box: A 15 × 15 room is populated with 2 agents and 1 box. Agents need to push the box to the wall within 300 environment steps to get a reward of 1000. However, the box is so heavy that it moves by one grid cell only when the two agents push it in the same direction at the same time. Agents need to coordinate their positions and actions for multiple steps to earn a reward. The purpose of this task is to demonstrate that EITI and EDTI can explore long-term cooperative strategies.
Island: This task is a modified version of the classic Stag Hunt game (Peysakhovich & Lerer, 2018), where two agents roam a 10 × 10 island populated with 9 treasures and a randomly walking beast for 300 environment steps. Agents can collect a treasure by stepping on it to get a team reward of 10, or capture the beast by attacking it within their attack range for a reward of 300. 
The beast also attacks the agents when they are too close. The beast and each agent have a maximum energy of 8 and 5, respectively, which is reduced by 1 each time they are attacked. Therefore, an agent is too weak to beat the beast alone, and the agents have to cooperate. In order to learn the optimal strategy in this task, a method has to keep exploring after sub-optimal external rewards are found.
Large-Island: Similar to Island, but with more agents (4), more treasures (16), and a beast with more energy (16) and a higher reward (600) for being caught. This task aims to demonstrate the feasibility of our methods in cases with more than 2 agents.
Push-Box requires agents to take coordinated actions at certain positions for multiple steps to get rewarded. Therefore, this task is particularly challenging, and all the baselines struggle to earn any reward (Fig. 4 left and Fig. 8 left). Our methods are considerably more successful because interaction happens when the box is moved – agents remain unmoved when they push the box alone but move by one grid cell if they push it together. In this way, EITI and EDTI agents are rewarded intrinsically for moving the box and are thus able to quickly find the optimal policy.
In the Island task, collecting treasures is an easily attainable local optimum. However, efficient treasure collecting requires the agents to spread out on the island. This leads to a situation where attempting to attack the beast seems to be a bad choice, since it is highly likely that agents will be exposed to the beast's attack alone. They have to give up the profitable spreading strategy and take the risk of being killed to discover that, if they attack the beast collectively for several timesteps, they will get a much larger reward. Our methods help solve this challenge by giving agents intrinsic incentives to appear together in the attack range of the beast, where they have indirect interactions (health is part of the state, and it decreases more slowly when the two are attacked alternately). Fig. 9 in Appendix D demonstrates that our methods learn to catch the beast quickly, and thus achieve better performance (Fig. 8 right).
Finally, the outperformance of our methods on Large-Island shows that they can successfully handle cases with more than two agents.
In summary, both of our methods are able to facilitate effective exploration on all the tasks by exploiting interactions. EITI outperforms EDTI in scenarios where all interaction points align with extrinsic rewards. On other tasks, EDTI performs better than EITI due to its ability to filter out interaction points that cannot lead to more value.
We also study EDTI with only intrinsic rewards; the discussion and results are included in Appendix A." }, { "heading": "5 RELATED WORKS", "text": "Single-agent exploration has achieved conspicuous success recently. Provably efficient methods have been proposed, such as upper confidence bound (UCB) (Jaksch et al., 2010; Azar et al., 2017; Jin et al., 2018) and posterior sampling for reinforcement learning (PSRL) (Strens, 2000; Osband et al., 2013; Osband & Van Roy, 2016; Agrawal & Jia, 2017). Given that these methods do not scale well to large or continuous settings, another line of research has been focusing on curiosity-driven exploration (Schmidhuber, 1991; Chentanez et al., 2005; Oudeyer et al., 2007; Barto, 2013; Bellemare et al., 2016; Pathak et al., 2017; Ostrovski et al., 2017), which has shown impressive results
In addition, methods based on variational information maximization (Houthooft et al., 2016; Barron et al., 2018) and mutual information (Rubin et al., 2012; Still & Precup, 2012; Salge et al., 2014; Mohamed & Rezende, 2015; Hyoungseok Kim, 2019) have been proposed for single-agent intrinsically motivated exploration.\nAlthough multi-agent reinforcement learning (MARL) has been making significant progresses in recent years (Foerster et al., 2018; Lowe et al., 2017; Wen et al., 2019; Iqbal & Sha, 2019a; Sunehag et al., 2018; Son et al., 2019; Rashid et al., 2018), less attention has been drawn to multi-agent exploration. Dimakopoulou & Van Roy (2018) and Dimakopoulou et al. (2018) propose posterior sampling methods for exploration of concurrent reinforcement learning in coverage problems, Bargiacchi et al. (2018) presents a multi-agent upper confidence exploration method for repeated single-stage problems, and Iqbal & Sha (2019b) investigates methods to combine several decentralized curiosity-driven exploration strategies. All these works focus on transition-independent settings. Another Bayesian exploration approach has been proposed for learning in stateless repeated games (Chalkiadakis & Boutilier, 2003). In contrast, this paper focuses on more general multi-agent sequential decision making problems with complex reward dependencies and transition interactions among agents.\nIn the literature of MARL, COMA (Foerster et al., 2018) shares some similarity with our decisiontheoretic EDTI approach in that both of them use the idea of counterfactual formulations. However, they are quite different in terms of definition and optimization: (1) conceptually, EDTI measures the influence of one agent on the value functions of other agents, while COMA quantifies individual contribution to the team value; (2) EDTI is defined on counterfactual Q-value over state-action pairs of other agents given its own state-action pair, while COMA uses the counterfactual Q-value just over its own action without considering state information, which is critical for exploration; (3) we explicitly derive the gradients for optimizing EDTI influence for coordinated exploration in the policy gradient framework, which provides more accurate feedback, while COMA uses the counterfactual Q value as a critic. Another line of relevant works (Oliehoek et al., 2012; de Castro et al., 2019) propose influence-based abstraction to predict influence sources to help local decision making of agents. In contrast, this paper presents two novel approaches that quantify and maximize the influence between agents for enabling coordinated multi-agent exploration.\nIn addition, some previous MARL work has also studied intrinsic rewards. One notably relevant work is Jaques et al. (2018), which models the social influence of one agent on other agents’ policies. In contrast, EITI measures the influence of one agent on the transition dynamics of other agents. Accompanying this distinction, EITI includes states of agents in the calculation of influence while social influence dos not. Apart from that, the optimization methods are also different – we directly derive the gradients of mutual information and incorporate its optimization in the policy gradient framework, while Jaques et al. (2018) adds social influence reward to the immediate environmental reward for training policies. Hughes et al. (2018) proposes an inequality aversion reward for learning in intertemporal social dilemmas. Strouse et al. 
(2018) uses mutual information between goals and states or actions as an intrinsic reward to train agents to share or hide their intentions." }, { "heading": "6 CLOSING REMARKS", "text": "In this paper, we study the multi-agent exploration problem and propose two influence-based methods that exploit the interaction structure. These methods are based on two interaction measures, MI and Value of Interaction (VoI), which respectively measure the amount and the value of one agent's influence on the other agents' exploration processes. These two measures can best be regarded as a form of exploration bonus distribution. We also propose an optimization method in the policy gradient framework, which enables agents to achieve coordinated exploration in a decentralized manner and optimize team performance." }, { "heading": "B MATHEMATICAL DETAILS", "text": "B.1 GRADIENT OF MUTUAL INFORMATION
To encourage agents to exert influence on the transitions of other agents, we optimize the mutual information between agents' trajectories. In particular, in the following we show that the second term in Eq. 6 is always zero:
$$\begin{aligned} T_2 &= \sum_{s, \mathbf{a}, s'_2 \in (S, A, S_2)} p^{\pi}(s, \mathbf{a}, s'_2)\, \nabla_{\theta_1} \log \frac{p(s'_2|s, \mathbf{a})}{p^{\pi}(s'_2|s_2, a_2)} & (21) \\ &= -\sum_{s, \mathbf{a}, s'_2} p^{\pi}(s, \mathbf{a}, s'_2)\, \nabla_{\theta_1} \log p^{\pi}(s'_2|s_2, a_2) & (22) \\ &= -\sum_{s, \mathbf{a}, s'_2} p^{\pi}(s, \mathbf{a}, s'_2)\, \frac{\nabla_{\theta_1} p^{\pi}(s'_2|s_2, a_2)}{p^{\pi}(s'_2|s_2, a_2)} & (23) \\ &= -\sum_{s, \mathbf{a}, s'_2} \frac{p^{\pi}(s, \mathbf{a}, s'_2)}{p^{\pi}(s'_2|s_2, a_2)}\, \nabla_{\theta_1} \sum_{s^*_1, a^*_1} p(s'_2|s_2, a_2, s^*_1, a^*_1)\, p(s^*_1|s_2, a_2)\, \pi_{\theta_1}(a^*_1|s^*_1) & (24) \\ &= -\sum_{s, \mathbf{a}, s'_2} \frac{p^{\pi}(s, \mathbf{a}, s'_2)}{p^{\pi}(s'_2|s_2, a_2)} \sum_{s^*_1, a^*_1} p(s'_2|s_2, a_2, s^*_1, a^*_1)\, p(s^*_1|s_2, a_2)\, \nabla_{\theta_1} \pi_{\theta_1}(a^*_1|s^*_1) & (25) \\ &= -\sum_{s_2, a_2, s'_2} \frac{p^{\pi}(s_2, a_2, s'_2)}{p^{\pi}(s'_2|s_2, a_2)} \sum_{s^*_1, a^*_1} p(s'_2|s_2, a_2, s^*_1, a^*_1)\, p(s^*_1|s_2, a_2)\, \nabla_{\theta_1} \pi_{\theta_1}(a^*_1|s^*_1) & (26) \\ &= -\sum_{s_2, a_2, s'_2} p^{\pi}(s_2, a_2) \sum_{s^*_1, a^*_1} p(s'_2|s_2, a_2, s^*_1, a^*_1)\, p(s^*_1|s_2, a_2)\, \nabla_{\theta_1} \pi_{\theta_1}(a^*_1|s^*_1) & (27) \\ &= -\sum_{s_2, a_2} p^{\pi}(s_2, a_2) \sum_{s^*_1, a^*_1} p(s^*_1|s_2, a_2)\, \nabla_{\theta_1} \pi_{\theta_1}(a^*_1|s^*_1) \underbrace{\sum_{s'_2} p(s'_2|s_2, a_2, s^*_1, a^*_1)}_{=1} & (28\text{--}29) \\ &= -\sum_{s_2, a_2} p^{\pi}(s_2, a_2) \sum_{s^*_1} p(s^*_1|s_2, a_2)\, \nabla_{\theta_1} \sum_{a^*_1} \pi_{\theta_1}(a^*_1|s^*_1) & (30) \\ &= -\sum_{s_2, a_2} p^{\pi}(s_2, a_2) \sum_{s^*_1} p(s^*_1|s_2, a_2)\, \nabla_{\theta_1} 1 = 0. & (31\text{--}32) \end{aligned}$$
B.2 DEFINITION OF Value of Interaction
To capture both transition and reward interactions between agents and thereby achieve intrinsic reward distribution, we propose a decision-theoretic measure called Value of Interaction. We start from the 2-agent case; the following theorem gives the definition of $VoI_{2|1}$ in the form of an expectation over trajectories, which is especially helpful in the derivation of the EDTI policy gradient update shown in Eq. 15.
Theorem 1. The Value of Interaction of agent 1 on agent 2 is:
$$VoI^{\pi}_{2|1}(S'_2; S_1, A_1 \mid S_2, A_2) = \mathbb{E}_{\tau}\left[ \tilde{r}_2(s, \mathbf{a}) - \tilde{r}^{\pi}_2(s_2, a_2) + \gamma \left( 1 - \frac{p^{\pi}(s'_2|s_2, a_2)}{p(s'_2|s, \mathbf{a})} \right) V^{\pi}_2(s') \right], \quad (33)$$
where $\tilde{r}^{\pi}_2(s_2, a_2)$ is the counterfactual immediate reward.
$VoI_{1|2}$ can be defined similarly. To lighten notation in the proof, we define
$$V^{\pi}_2(s'_2|s_1, s_2, a_1, a_2) = \sum_{s'_1} p(s'_1|s_1, s_2, a_1, a_2, s'_2)\, V^{\pi}_2(s'_1, s'_2), \quad (34)$$
$$\tilde{r}^{\pi}_2(s_2, a_2) = \sum_{s^*_1, a^*_1} p^{\pi}(s^*_1, a^*_1|s_2, a_2)\, \tilde{r}_2(s^*_1, s_2, a^*_1, a_2), \quad (35)$$
$$V^{\pi,*}_2(s'_2|s_2, a_2) = \sum_{s^*_1, a^*_1} p^{\pi}(s^*_1, a^*_1|s_2, a_2) \sum_{s'_1} p(s'_1|s^*_1, s_2, a^*_1, a_2, s'_2)\, V^{\pi}_2(s'_1, s'_2). \quad (36)$$
We first prove Lemma 1, which is used in the proof of Theorem 1.
Lemma 1.
$$\sum_{s_1, s_2, a_1, a_2} p^{\pi}(s_1, s_2, a_1, a_2)\, \gamma \sum_{s'_2} p(s'_2|s_1, s_2, a_1, a_2)\, V^{\pi}_2(s'_2|s_2, a_2) = \sum_{s_1, s_2, a_1, a_2} p^{\pi}(s_1, s_2, a_1, a_2)\, \gamma \sum_{s'_1, s'_2} T(s'_1, s'_2|s_1, s_2, a_1, a_2) \cdot \frac{p^{\pi}(s'_2|s_2, a_2)}{p(s'_2|s_1, s_2, a_1, a_2)}\, V^{\pi}_2(s'_1, s'_2). \quad (37)$$
Proof. 
∑ s1,s2,a1,a2 pπ(s1, s2, a1, a2)γ ∑ s′2 p(s′2|s1, s2, a1, a2)V π2 (s′2|s2, a2) (38)\n= ∑\ns1,s2,a1,a2 pπ(s1, s2, a1, a2)γ ∑ s′2\np(s′2|s1, s2, a1, a2) · (39)∑ s∗1 ,a ∗ 1 pπ(s∗1, a ∗ 1|s2, a2) ∑ s′1 p(s′1|s∗1, s2, a∗1, a2, s′2)V π2 (s′1, s′2) (40)\n= ∑\ns1,s2,a1,a2 pπ(s1, s2, a1, a2)γ ∑ s′2 p(s′2|s1, s2, a1, a2) · (41)\n∑ s∗1 ,a ∗ 1 pπ(s∗1, a ∗ 1|s2, a2) ∑ s′1 T (s′1, s ′ 2|s∗1, s2, a∗1, a2) p(s′2|s∗1, s2, a∗1, a2) V π2 (s ′ 1, s ′ 2) (42)\n= ∑\ns1,s2,a1,a2 pπ(s1, s2, a1, a2)γ ∑ s′1,s ′ 2 T (s′1, s ′ 2|s1, s2, a1, a2) p(s′2|s1, s2, a1, a2) · (43)\nV π2 (s ′ 1, s ′ 2) ∑ s∗1 ,a ∗ 1 pπ(s∗1, a ∗ 1|s2, a2)p(s′2|s∗1, s2, a∗1, a2) (44)\n= ∑\ns1,s2,a1,a2 pπ(s1, s2, a1, a2)γ ∑ s′1,s ′ 2 T (s′1, s ′ 2|s1, s2, a1, a2) · (45)\npπ(s′2|s2, a2) p(s′2|s1, s2, a1, a2) V π2 (s ′ 1, s ′ 2). (46)\nWe now give the proof of Theorem 1:\nProof.\nV oIπ2|1(S ′ 2;S1, A1|S2, A2) (47) = ∑\ns,a,s′2∈(S,A,S2)\npπ(s,a, s′2) [ Qπ2 (s,a, s ′ 2)−Q π,∗ 2|1 (s2, a2, s ′ 2) ]\n(48)\n= ∑\ns1,s2,a1,a2\npπ(s1, s2, a1, a2)(r̃2(s1, s2, a1, a2)− r̃π2 (s2, a2) + (49)\nγ ∑ s′2 p(s′2|s1, s2, a1, a2)(V π2 (s′2|s1, s2, a1, a2)− V π,∗ 2 (s ′ 2|s2, a2)) (50)\n= ∑\ns1,s2,a1,a2\npπ(s1, s2, a1, a2)(r̃2(s1, s2, a1, a2)− r̃π2 (s2, a2) + (51)\nγ ∑ s′1,s ′ 2 T (s′1, s ′ 2|s1, s2, a1, a2)(1− pπ(s′2|s2, a2) p(s′2|s1, s2, a1, a2) )V π2 (s ′ 1, s ′ 2)) (Lemma 1) (52)\n= Eτ [ r̃2(s,a)− r̃π2 (s2, a2) + γ ( 1− p\nπ(s′2|s2, a2) p(s′2|s,a)\n) V π2 (s ′) ] . (53)\nB.3 CALCULATING GRADIENT OF VOI\nIn order to optimize V oI with respect to the parameters of agent policy, in Sec. 3.2.1, we propose to use target function and get:\n∇θ1V oIπ2|1(S ′ 2;S1, A1|S2, A2) ≈ ∑ s,a∈(S,A) (∇θ1pπ(s,a)) [r̃2(s,a)− r̃−2 (s2, a2)+\nγ ∑ s′ T (s′|s,a) ( 1− p −(s′2|s2, a2) p(s′2|s,a) ) V −2 (s ′ 1, s ′ 2)].\n(54) We prove that ∑ s,a (∇θ1pπ(s,a)) r̃ − 2 (s2, a2) is 0 in the following lemma. Lemma 2. ∑ s1,s2,a1,a2 (∇θ1pπ(s1, s2, a1, a2)) r̃−2 (s2, a2) = 0. (55)\nProof. Similar to the way that policy gradient theorem was proved by Sutton et al. 
(2000),∑ s1,s2,a1,a2 (∇θ1pπ(s1, s2, a1, a2)) r̃−2 (s2, a2) (56)\n= ∇θ1 ∑\ns1,s2,a1,a2\npπ(s1, s2, a1, a2)r̃ − 2 (s2, a2) (57)\n= ∑ s01,s 0 2 d0(s 0 1, s 0 2) ∞∏ t=0 ( ∇θ1π(at1, at2|st1, st2) ) T (st+11 , s t+1 2 |st1, at1, st2, at2)r̃ − 2 (s t 2, a t 2) (58)\n= ∑ s01,s 0 2 d0(s 0 1, s 0 2) ∞∏ t=0 [ π(at1, a t 2|st1, st2) ( ∇θ1 logπ(at1, at2|st1, st2) ) (59)\nT (st+11 , s t+1 2 |st1, at1, st2, at2)r̃ − 2 (s t 2, a t 2) ]\n(60) = ∑\ns1,s2,a1,a2\npπ(s1, s2, a1, a2) (∇θ1 logπ(a1, a2|s1, s2)) r̃−2 (s2, a2) (61)\n= ∑\ns1,s2,a1,a2\npπ(s1, s2, a1, a2) (∇θ1 logπ(a1|s1, s2)) r̃−2 (s2, a2) (62)\n= ∑ s2,a2 pπ(s2, a2) ∑ s1,a1 pπ(s1, a1|s2, a2) (∇θ1 logπ(a1|s1, s2)) r̃−2 (s2, a2) (63)\n= ∑ s2,a2 pπ(s2, a2)r̃ − 2 (s2, a2) ∑ s1,a1 pπ(s1, a1|s2, a2) (∇θ1 logπ(a1|s1, s2)) (64)\n= ∑ s2,a2 pπ(s2, a2)r̃ − 2 (s2, a2) ∑ s1,a1 pπ(s1, a1|s2, a2) π(a1|s1, s2) (∇θ1π(a1|s1, s2)) (65)\n= ∑ s2,a2 pπ(s2, a2)r̃ − 2 (s2, a2) ∑ s1,a1 pπ(s1|s2, a2)pπ(a1|s1, s2, a2) π(a1|s1, s2) (∇θ1π(a1|s1, s2)) (66)\n= ∑ s2,a2 pπ(s2, a2)r̃ − 2 (s2, a2) ∑ s1,a1 pπ(s1|s2, a2)π(a1|s1, s2) π(a1|s1, s2) (∇θ1π(a1|s1, s2)) (67)\n= ∑ s2,a2 pπ(s2, a2)r̃ − 2 (s2, a2) ∑ s1,a1 pπ(s1|s2, a2) (∇θ1π(a1|s1, s2)) (68)\n= ∑ s2,a2 pπ(s2, a2)r̃ − 2 (s2, a2) ∑ s1 pπ(s1|s2, a2) ∑ a1 (∇θ1π(a1|s1, s2)) (69)\n= ∑ s2,a2 pπ(s2, a2)r̃ − 2 (s2, a2) ∑ s1 pπ(s1|s2, a2) ( ∇θ1 ∑ a1 π(a1|s1, s2) ) (70)\n= ∑ s2,a2 pπ(s2, a2)r̃ − 2 (s2, a2) ∑ s1 pπ(s1|s2, a2) (∇θ11) (71)\n= 0 (72)\nB.4 IMMEDIATE REWARD INFLUENCE\nSimilar to MI and V oI , we can define influence of agent 1 on the immediate rewards of agent 2 as:\nRIπ2|1(S ′ 2;S1, A1|S2, A2) = ∑ s,a∈(S,A) pπ(s,a)[r̃2(s,a)− r̃2(s2, a2)]. (73)\nUse Lemma 2, we can get:\n∇θ1RIπ2|1(S ′ 2;S1, A1|S2, A2) = ∑ s,a∈(S,A) ∇θ1(pπ(s,a))r̃2(s,a). (74)\nNow we have\n∇θ1Jθ1(t) ≈ ( R̂t1 − V̂ π1 (st) ) ∇θ1 log πθ1(at1|st1), (75)\nwhere V̂ π1 (st) is an augmented value function of R̂ t 1 = ∑h t′=t r̂ t′ 1 and\nr̂t1 = r t + ut1 + βu t 2. (76)" }, { "heading": "C ESTIMATION OF CONDITIONAL PROBABILITIES", "text": "To quantify interdependence among exploration processes of agents, we use mutual information and value of interaction. Calculations of MI and VoI need estimation of p(s′2|s2, a2) and p(s′2|s,a). In practice, we track the empirical frequencies pemp(s′2|s2, a2) and pemp(s′2|s,a) and substitute them for the corresponding terms in Eq. 8 and 16.\nEstimating pemp(s′2|s2, a2) and pemp(s′2|s,a) is one obstacle to the scalability of our method, we now discuss how to solve this problem. When the state and action space is small, we can use hash table to implement Monte Carlo method (MC) for estimating the distributions accurately. In the MC sampling, we count from the samples the state frequencies p(s′2|s2, a2) ≡ N(s′2,s2,a2) N(s2,a2) and p(s′2|s,a) ≡ N(s′2,s1,s2,a1,a2) N(s1,s2,a1,a2)\n, where N(·) is the number of times each state-action pair was visited during the learning process. When the problem space becomes large, MC consumes large memory in practice. As an alternative, we adopt variational inference (Fox & Roberts, 2012) to learn variational distributions qξ1(s ′ 2|s2, a2) and qξ2(s′2|s,a), parameterized via neural networks with parameters ξ1 and ξ2, by optimizing the evidence lower bound. In Fig. 6, we show the performance of EDTI estimated using variational inference and the changing of associated EDTI rewards on Pass during 9000 PPO updates. Variational inference introduces some noise in EDTI rewards estimation and thus requires slightly more steps to learn the true probability and the strategy. 
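For reference, a much-simplified sketch of such a variational estimator of $p(s'_2|s_2,a_2)$ is given below. It keeps only the likelihood term and omits the paper's 64-dimensional reparameterized latent and ELBO objective, so it is an assumption-laden illustration of ours rather than the authors' implementation (all names are our own):

```python
import torch
import torch.nn as nn

class ConditionalDensity(nn.Module):
    """Simplified stand-in for the variational estimator of Appendix C:
    a 3-layer MLP mapping (s2, a2) features to a categorical distribution
    over next states s2'."""

    def __init__(self, in_dim, num_next_states, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_next_states),
        )

    def log_prob(self, cond, next_state_idx):
        logits = self.net(cond)
        return torch.distributions.Categorical(logits=logits).log_prob(next_state_idx)

def train_step(model, optimizer, cond_batch, next_idx_batch):
    """Maximize log q(s2' | s2, a2) on replayed transitions (negative log-likelihood loss)."""
    loss = -model.log_prob(cond_batch, next_idx_batch).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

An optimizer such as torch.optim.Adam(model.parameters(), lr=1e-3) with batch size 2048 would match the settings reported in Appendix D.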
However, estimation by MC sampling consumes 1.6G of memory to store the hash table with 100M items per agent, while variational inference needs a three-layer fully connected network with 74800 parameters occupying about 0.60M of memory. This result highlights the feasibility of estimating EITI and EDTI rewards using variational inference in problems with large state-action spaces.
[Partially recovered caption of Table 2: ...-i in plusV are respectively scaled by $\beta^{\text{plusV}}_{\text{int}}$ and $\beta^{\text{plusV}}_{\text{ext}}$.]
D IMPLEMENTATION DETAILS
D.1 NETWORK ARCHITECTURE, HYPERPARAMETERS, AND INFRASTRUCTURE
We base our framework on the OpenAI implementation of PPO2 (Dhariwal et al., 2017) and use its default parameters to carry out all the experiments. We train our models on an NVIDIA RTX 2080TI GPU using experience sampled from 32 parallel environments. We use visitation counts to calculate the intrinsic reward, for their provable effectiveness (Azar et al., 2017; Jin et al., 2018). For all our methods and baselines, we use $\eta / \sqrt{N(s)}$ as the exploration bonus for the $N(s)$-th visit to state $s$. Specific values of $\eta$ and the scaling weights can be found in Table 2.
As for variational inference, the inference network is a 3-layer fully-connected network coupled with a 64-dimensional reparameterization estimator. ReLU is used as the activation function for the first two layers, and the sum of the negative log-likelihood and the negative Evidence Lower Bound is used as the loss. We use the Adam optimizer (Kingma & Ba, 2014) with learning rate $1 \times 10^{-3}$ and batch size 2048. To speed up the learning of the variational distributions, we equip the learning with proportional prioritized experience replay (Schaul et al., 2015).
D.2 TASK STRUCTURE
In this section, we describe the detailed settings of the experimental tasks.
Pass: There are two agents and two switches to open the door in a 30 × 30 grid. The door opens only when at least one of the switches is occupied. The agents need to navigate from left to right, and the team reward, which is 1000, is only provided when all agents reach the target zone. Agents can observe the position of the other agent.
Secret-Room: This is an extension of the Pass task with 4 rooms and 4 switches located in different rooms. The size of the grid is 25 × 25. When the left switch is occupied, all three doors are open, while the three switches in the rooms on the right each only control the door of their own room. The agents need to navigate towards the desired room (in light red in Fig. 1 middle) to achieve the extrinsic team reward of 1000. Agents can observe the position of the other agent.
Push-Box: There are two agents and one box in a 15 × 15 grid. Agents need to push the box to the wall. However, the box is so heavy that it moves by one grid cell only when the two agents push it in the same direction at the same time. The only team reward, 1000, is given when the box is placed right against the wall. Agents can observe the coordinates of their teammate and the location of the box.
Island: A group of two agents hunts for treasure on an island. However, a randomly walking beast may attack the agents when they are too close. The agents can also attack the beast within their attack range; this damage doubles when more than one agent attacks at the same time. Each agent has a maximum health of 5 and will lose 1/n health per step when there are n agents within the attack range of the beast. 
Island is a modified version of the classic coordination scenario Stag-Hunt with a local optimum: finding each treasure (9 in total) triggers a team reward of 10, but catching the beast gives a higher team reward of 300. Agents can observe the position and health of each other, and the coordinates of the beast. Fig. 9 shows the development of the probability of catching the beast and the average number of treasures found in an episode during 9000 PPO updates.
Large-Island: The settings are similar to those of Island, but with more agents (4), more treasures (16), and a beast with more energy (16) and a higher reward (600) for being caught.
The horizon of one episode is set to 300 timesteps in all these tasks." }, { "heading": "E COMPARISON WITH SINGLE-AGENT EXPLORATION METHODS", "text": "In this paper, we study the exploration problem in multi-agent settings from a decentralized perspective. Alternatively, exploration can be carried out in a centralized manner – treating all agents as a single joint agent and using single-agent exploration algorithms. In this section, we compare our methods with centralized exploration strategies using RND (Burda et al., 2018) and EMI (Hyoungseok Kim, 2019), which are among the most cutting-edge exploration algorithms driven by curiosity and based on mutual information, respectively. We use the code published by their authors and carry out a modest grid search over hyperparameters. For RND, we search the intrinsic reward coefficient in the range [0.005, 1.0] and the extrinsic reward coefficient in the range [0.05, 2.0]. For EMI, we test different combinations of loss weights. Results averaged over four random seeds with the best-found parameters are shown below.
Performance comparisons on the Pass, Secret-Room, and Push-Box problems are illustrated in Fig. 10. We can observe that our methods significantly outperform the centralized exploration strategies using RND or EMI. To better understand this observation, we plot visitation heatmaps over time for RND and EMI, respectively, in Fig. 11 and 12.
Fig. 11 shows the visitation heatmaps of RND on the Pass problem. From Fig. 11(b), we can see that RND seems to find good policies for the agents to pass the door in the first 4671 updates. However, the agents' policies seem to collapse quickly after that, and their visits scatter around the rooms again, which explains its learning curve in Fig. 10. From the evolution of its visitation heatmaps, we hypothesize that after visiting the center of the room many times, the agents' curiosity models overfit to a particular set of states, and the agents start to be curious about the relatively unfamiliar transition dynamics around the walls. As a result, the RND intrinsic reward drags the agents to the walls, as shown in Fig. 11(c) and (d), and their performance quickly drops within several updates (i.e., updates 4671-4677, shown by Fig. 11(b-d)). After a while, the agents leave the walls and roam around the room again, as shown in Fig. 11(e). The whole exploration process then repeats. Similar behaviors are also observed on the Secret-Room problem.
We also analyze the exploration behaviors of EMI agents on Pass, as illustrated by the visitation heatmaps in Fig. 12. EMI tends to explore the state-action pairs where the transition dynamics is relatively complex, such as the edges and corners of the room (Fig. 12(a-c)). For problems where these state-action pairs do not lead to goals, EMI is not very effective. 
As the (centralized) transition dynamics of the Pass problem is relatively simple, the EMI intrinsic reward quickly diminishes, so the agents' behaviors remain unchanged after 500 updates (Fig. 12(d-e)).
In summary, centralized single-agent exploration methods encode some heuristics to facilitate exploration, but they typically do not place a great emphasis on interactions among agents and are thus not very efficient for multi-agent exploration with sparse interactions." } ]
2020
INFLUENCE-BASED MULTI-AGENT EXPLORATION
SP:561cb64e5320799677a5a7108830aaec9d33a963
[ "The paper is dedicated to studying adversarial attack and defense problems from the perspective of Fourier analysis. They demonstrate that the adversarial vulnerability of neural networks can be attributed to non-zero high-frequency components. Then, the author proposes a simple post average approach to smooth out the insignificant high-frequency components, which can improve the adversarial robustness of neural networks. They conduct extensive experiments on ImageNet and CIFAR-10 to defend existing attacks, including FGSM, PGD, DeepFool, and C&W attacks.", "The paper proposes an approach for improving robustness of already trained artificial neural networks with relu activation functions. The main motivation comes from signal processing where robustness is typically obtained via averaging moduli of Fourier coefficients over some frequency band (e.g., mel-frequency coefficients and deep scattering spectrum are based on this principle). The strategy amounts to sampling several random direction vectors in a ball of constant radius centered at a training example and averaging their predictions. The empirical estimate of the expected predictor value over the ball centered at a training example is used as its hypothesis value." ]
In this paper, we study the adversarial attack and defence problem in deep learning from the perspective of Fourier analysis. We first explicitly compute the Fourier transform of deep ReLU neural networks and show that there exist decaying but non-zero high frequency components in the Fourier spectrum of neural networks. We then demonstrate that the vulnerability of neural networks towards adversarial samples can be attributed to these insignificant but non-zero high frequency components. Based on this analysis, we propose to use a simple post-averaging technique to smooth out these high frequency components to improve the robustness of neural networks against adversarial attacks. Experimental results on the ImageNet and the CIFAR-10 datasets have shown that our proposed method is universally effective to defend many existing adversarial attacking methods proposed in the literature, including FGSM, PGD, DeepFool and C&W attacks. Our post-averaging method is simple since it does not require any re-training, and meanwhile it can successfully defend over 80-96% of the adversarial samples generated by these methods without introducing significant performance degradation (less than 2%) on the original clean images.
[]
[ { "authors": [ "Moustafa Alzantot", "Yash Sharma", "Supriyo Chakraborty", "Mani B. Srivastava" ], "title": "Genattack: Practical black-box attacks with gradient-free optimization", "venue": "CoRR, abs/1805.11090,", "year": 2018 }, { "authors": [ "Anish Athalye", "Nicholas Carlini" ], "title": "On the robustness of the cvpr 2018 white-box adversarial example defenses", "venue": "arXiv preprint arXiv:1804.03286,", "year": 2018 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David A. Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "CoRR, abs/1802.00420,", "year": 2018 }, { "authors": [ "Wieland Brendel", "Jonas Rauber", "Matthias Bethge" ], "title": "Decision-based adversarial attacks: Reliable attacks against black-box machine learning models", "venue": "arXiv preprint arXiv:1712.04248,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Defensive distillation is not robust to adversarial examples", "venue": "arXiv preprint arXiv:1607.04311,", "year": 2016 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Adversarial examples are not easily detected: Bypassing ten detection methods", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Magnet and ”efficient defenses against adversarial attacks” are not robust to adversarial examples", "venue": "arXiv preprint arXiv:1711.08478,", "year": 2017 }, { "authors": [ "Pin-Yu Chen", "Huan Zhang", "Yash Sharma", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, AISec", "year": 2017 }, { "authors": [ "Reuben Feinman", "Ryan R. Curtin", "Saurabh Shintre", "Andrew B. 
Gardner" ], "title": "Detecting adversarial samples from artifacts", "venue": null, "year": 2017 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Hui Jiang" ], "title": "A new perspective on machine learning: How to do perfect supervised learning", "venue": "arXiv preprint arXiv:1901.02046.,", "year": 2019 }, { "authors": [ "Hui Jiang", "Chin-Hui Lee" ], "title": "A new approach to utterance verification based on neighborhood information in model space", "venue": "IEEE Transactions on Speech and Audio Processing,", "year": 2003 }, { "authors": [ "Hui Jiang", "Keikichi Hirose", "Qiang Huo" ], "title": "A new approach to utterance verification based on neighborhood information in model space", "venue": "IEEE Transactions on Speech and Audio Processing,", "year": 1999 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "arXiv preprint arXiv:1611.01236,", "year": 2016 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Linh Nguyen", "Arunesh Sinha" ], "title": "A learning approach and masking approach to secure learning", "venue": "CoRR, abs/1709.04447,", "year": 2017 }, { "authors": [ "N. Papernot", "P. McDaniel", "S. Jha", "M. Fredrikson", "Z.B. Celik", "A. Swami" ], "title": "The limitations of deep learning in adversarial settings", "venue": "In 2016 IEEE European Symposium on Security and Privacy (EuroS P),", "year": 2016 }, { "authors": [ "N. Papernot", "P. McDaniel", "X. Wu", "S. Jha", "A. Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick D. McDaniel", "Ian J. Goodfellow" ], "title": "Transferability in machine learning: from phenomena to black-box attacks using adversarial samples", "venue": "CoRR, abs/1605.07277,", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ian Goodfellow", "Somesh Jha", "Z. 
Berkay Celik", "Ananthram Swami" ], "title": "Practical black-box attacks against machine learning", "venue": "In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, ASIA CCS", "year": 2017 }, { "authors": [ "Jonas Rauber", "Wieland Brendel", "Matthias Bethge" ], "title": "Foolbox: A python toolbox to benchmark the robustness of machine learning models", "venue": "arXiv preprint arXiv:1707.04131,", "year": 2017 }, { "authors": [ "Andras Rozsa", "Ethan M. Rudd", "Terrance E. Boult" ], "title": "Adversarial diversity and hard positive generation", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops,", "year": 2016 }, { "authors": [ "E.M. Stein", "G. Weiss" ], "title": "Introduction to Fourier Analysis on Euclidean Spaces", "venue": "URL https://books.google. ca/books?id=YUCV678MNAIC", "year": 1971 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "arXiv preprint arXiv:1705.07204,", "year": 2017 }, { "authors": [ "George Neville Watson" ], "title": "A treatise on the theory of Bessel functions", "venue": "Cambridge university press,", "year": 1995 }, { "authors": [ "Cihang Xie", "Jianyu Wang", "Zhishuai Zhang", "Zhou Ren", "Alan L. Yuille" ], "title": "Mitigating adversarial effects through randomization", "venue": "CoRR, abs/1711.01991,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Although deep neural networks (DNN) have shown to be powerful in many machine learning tasks, Szegedy et al. (2013) found that they are vulnerable to adversarial samples. Adversarial samples are subtly altered inputs that can fool the trained model to produce erroneous outputs. They are more commonly seen in image classification task and typically the perturbations to the original images are so small that they are imperceptible to human eye.\nResearch in adversarial attacks and defences is highly active in recent years. In the attack side, many attacking methods have been proposed (Szegedy et al., 2013; Goodfellow et al., 2014; Papernot et al., 2016a; Papernot et al., 2017; Moosavi-Dezfooli et al., 2016; Kurakin et al., 2016; Madry et al., 2017; Carlini and Wagner, 2017a; Chen et al., 2017; Alzantot et al., 2018; Brendel et al., 2017), with various ways to generate effective adversarial samples to circumvent new proposed defence methods. However, since different attacks usually are effective to different defences or datasets, there is no consensus on which attack is the strongest. Hence for the sake of simplicity, in this work, we will evaluate our proposed defence approach against four popular attacks for empirical analysis. In the defence side, various defence mechanisms have also been proposed, including adversarial training (Rozsa et al., 2016; Kurakin et al., 2016; Tramèr et al., 2017; Madry et al., 2017), network distillation (Papernot et al., 2016b), gradient masking (Nguyen and Sinha, 2017), adversarial detection (Feinman et al., 2017) and adding modifications to neural networks (Xie et al., 2017). Nonetheless, many of them were quickly defeated by new types of attacks (Carlini and Wagner, 2016; 2017b;c;a; Athalye et al., 2018; Athalye and Carlini, 2018; Alzantot et al., 2018). Madry et al. (2017) tried to provide a theoretical security guarantee for adversarial training by a min-max loss formulation, but the difficulties in non-convex optimization and in finding the ultimate adversarial samples for training may loosen this robustness guarantee. As a result, so far there is no defence that is universally robust to all adversarial attacks.\nAlong the line of researches, there were also investigations into the properties and existence of adversarial samples. Szegedy et al. (2013) first observed the transferability of adversarial samples across models trained with different hyper-parameters and across different training sets. They also\nattributed the adversarial samples to the low-probability blind spots in the manifold. In (Goodfellow et al., 2014), the authors explained adversarial samples as ”a result of models being too linear, rather than too nonlinear.” In (Papernot et al., 2016), the authors showed the transferability occurs across models with different structures and even different machine learning techniques in addition to neural networks. In summary, the general existence and transferability of adversarial samples are well known but the reason of adversarial vulnerability still needs further investigation.\nGenerally speaking, when we view neural network as a multivariate function f(x) of input x, if a small imperceptible perturbation ∆x leads to a huge fluctuation ∆f(x), the large quantity ∆f(x)/∆x essentially corresponds to high frequency components in the Fourier spectrum of f(x). 
In this paper, we start with the Fourier analysis of neural networks and elucidate why there always exist some decaying but non-zero high frequency response components in neural networks. Based on this analysis, we show that neural networks are inherently vulnerable to adversarial samples due to the underlying model structure. Next, we propose a simple post-averaging method to tackle this problem. Our proposed method is fairly simple since it works as a post-processing stage of any given neural network model and does not require re-training the networks at all. Furthermore, we have evaluated the post-averaging method against four popular adversarial attacking methods, and our method is shown to be universally effective in defending against all examined attacks. Experimental results on the ImageNet and the CIFAR-10 datasets have shown that our simple post-averaging method can successfully defend against 80-96% of the adversarial samples generated by these attacks with little performance degradation (less than 2%) on the original clean images." }, { "heading": "2 FOURIER ANALYSIS OF NEURAL NETWORKS", "text": "In order to understand the behaviour of adversarial samples, it is essential to find the Fourier transform of neural networks. Fortunately, for some widely used neural networks, namely fully-connected neural networks using ReLU activation functions, we may explicitly derive their Fourier transform under some minor conditions. As we will show, these theoretical results shed light on how adversarial samples arise in neural networks." }, { "heading": "2.1 FOURIER TRANSFORM OF FULLY-CONNECTED RELU NEURAL NETWORKS", "text": "As we know, any fully-connected ReLU neural network (prior to the softmax layer) essentially forms a piece-wise linear function in input space. Due to space limits, we only present the main results in this section; the proofs and more details may be found in the Appendix.\nDefinition 2.1. A piece-wise linear function is a continuous function $f : \mathbb{R}^n \to \mathbb{R}$ such that there are some hyperplanes passing through the origin and dividing $\mathbb{R}^n$ into $M$ pairwise disjoint regions $R_m$ $(m = 1, 2, \ldots, M)$, on each of which $f$ is linear:\n\n$$f(x) = \begin{cases} w_1 \cdot x & x \in R_1 \\ w_2 \cdot x & x \in R_2 \\ \vdots \\ w_M \cdot x & x \in R_M \end{cases}$$\n\nLemma 2.2. The composition of a piece-wise linear function with a ReLU activation function is also a piece-wise linear function.\nTheorem 2.3. The output of any hidden unit in an unbiased fully-connected ReLU neural network is a piece-wise linear function.\nThis is straightforward because the input to any hidden node is a linear combination of piece-wise linear functions, and this input is composed with the ReLU activation function to yield the output, which is also piece-wise linear. However, each region $R_m$ is the intersection of a different number of half-spaces, enclosed by various hyperplanes in $\mathbb{R}^n$. In general, these regions $R_m$ $(m = 1, \cdots, M)$ do not have simple shapes. For the purpose of mathematical analysis, we need to decompose each region into a union of some well-defined shapes having a uniform form, which is called an infinite simplex.\nDefinition 2.4. Let $V = \{v_1, v_2, \ldots, v_n\}$ be a set of $n$ linearly independent vectors in $\mathbb{R}^n$. An infinite simplex, $R^+_V$, is defined as the region linearly spanned by $V$ using only positive weights:\n\n$$R^+_V = \left\{ \sum_{k=1}^{n} \alpha_k v_k \,\middle|\, \alpha_k > 0,\ k = 1, 2, \cdots, n \right\} \quad (1)$$\n\nTheorem 2.5. 
Each piece-wise linear function $f(x)$ can be formulated as a summation of some simpler functions, $f(x) = \sum_{l=1}^{L} f_l(x)$, each of which is linear and non-zero only in an infinite simplex as follows:\n\n$$f_l(x) = \begin{cases} w_l \cdot x & x \in R^+_{V_l} \\ 0 & \text{otherwise} \end{cases} \quad (2)$$\n\nwhere $V_l$ is a set of $n$ linearly independent vectors, and $w_l$ is a weight vector.\nIn practice, we can always assume that the input to neural networks, $x$, is bounded. As a result, for computational convenience, we may normalize all inputs $x$ into the unit hyper-cube, $U_n = [0, 1]^n$. Obviously, this assumption can be easily incorporated into the above analysis by multiplying each $f_l(x)$ in eq.(2) by $\prod_{r=1}^{n} h(x_r) h(1 - x_r)$, where $h(x)$ is the Heaviside step function. Alternatively, we may simplify this term by adding $n^2$ additional hyperplanes to further split the input space so as to ensure that all the elements of $x$ do not change signs within each region $R^+_{V_q}$. In this case, within each region $R^+_{V_q}$, the largest absolute value among all elements of $x$ is always achieved by a specific element, denoted as $r_q$. In other words, the dimension $x_{r_q}$ achieves the largest absolute value inside $R^+_{V_q}$. Similarly, the normalized piece-wise linear function may be represented as a summation of some functions, $f(x) = \sum_{q=1}^{Q} g_q(x)$, where each $g_q(x)$ $(q = 1, 2, \cdots, Q)$ has the following form:\n\n$$g_q(x) = \begin{cases} w_q \cdot x \, h(1 - x_{r_q}) & x \in R^+_{V_q} \\ 0 & \text{otherwise} \end{cases}$$\n\nFor every $V_q$, there exists an $n \times n$ invertible matrix $A_q$ that linearly transforms all vectors of $V_q$ into the standard basis vectors $e_i$ in $\mathbb{R}^n$. As a result, each function $g_q(x)$ may be represented in terms of the standard basis $V^* = \{e_1, \cdots, e_n\}$ as follows:\n\n$$g_q(x) = \begin{cases} \bar{w}_q \cdot \bar{x}_q \, h(1 - \mathbf{1} \cdot \bar{x}_q) & \bar{x}_q \in R^+_{V^*} \\ 0 & \text{otherwise} \end{cases}$$\n\nwhere $\bar{x}_q = x A_q^T$ and $\bar{w}_q = w_q A_q^{-1}$.\nLemma 2.6. The Fourier transform of the following function:\n\n$$s(x) = \begin{cases} h(1 - \mathbf{1} \cdot x) & x \in R^+_{V^*} \\ 0 & \text{otherwise} \end{cases}$$\n\nmay be presented as:\n\n$$S(\omega) = \left( \frac{-i}{\sqrt{2\pi}} \right)^n \sum_{r=0}^{n} \frac{e^{-i\omega_r}}{\prod_{r' \neq r} (\omega_{r'} - \omega_r)} \quad (3)$$\n\nwhere $\omega_r$ is the $r$-th component of the frequency vector $\omega$ $(r = 1, \cdots, n)$, and $\omega_0 = 0$.\nFinally, we derive the Fourier transform of fully-connected ReLU neural networks as follows.\nTheorem 2.7. The Fourier transform of the output of any hidden node in a fully-connected unbiased1 ReLU neural network may be represented as $\sum_{q=1}^{Q} w_q A_q^{-1} \nabla S(\omega A_q^{-1})$, where $\nabla$ denotes the differential operator.\n1For mathematical convenience, we assume neural networks have no biases here. However, regular neural networks with biases may be reformulated as unbiased ones by adding another dimension of constants. Thus, the main results here are equally applicable to both cases. Note that regular neural networks with biases are used in our experiments in this paper.\nObviously, neural networks are the so-called approximated bandlimited models as defined in (Jiang, 2019), which have decaying high frequency components in their Fourier spectrum. Theorem 2.7 further suggests that the matrices $A_q^{-1}$ may contribute to the high frequency components when the corresponding regions $R^+_{V_q}$ are too small. This is clear because the determinant of $A_q$ is proportional to the volume of $R^+_{V_q}$ in $\mathbb{R}^n$. In summary, the high frequency components of neural networks are mostly attributed to these tiny regions in the input space. As we will show later, these small regions may be explicitly exploited to generate adversarial samples for neural networks." }, { "heading": "2.2 UNDERSTANDING ADVERSARIAL SAMPLES", "text": "As shown in Theorem 2.3, a neural network may be viewed as a sequential division of the input space into many small regions, as illustrated in Figure 1. 
Each layer is a further division of the existing regions from the previous layers, with each region being divided differently. Hence, a neural network with multiple layers results in a tremendous number of sub-regions in the input space. For example, when cutting an $n$-dimensional space using $N$ hyperplanes, the maximum number of regions may be computed as $\binom{N}{0} + \binom{N}{1} + \cdots + \binom{N}{n}$. For a hidden layer of $N = 1000$ nodes and input dimension $n = 200$, the maximum number of regions is roughly equal to $10^{200}$. In other words, even a middle-sized neural network can partition the input space into a huge number of sub-regions, which can easily exceed the total number of atoms in the universe. When we learn a neural network, we cannot expect that there is at least one training sample inside each region. For those regions that do not contain any training sample, the resultant linear functions in them may be arbitrary since they do not contribute to the training objective function at all. Of course, most of these regions are extremely small in size. When we measure the expected loss function over the entire space, their contributions are negligible since the chance for a randomly sampled point to fall into these tiny regions is extremely small. However, adversarial attacks impose a new challenge since adversarial samples are not naturally sampled. Given that the total number of regions is huge, those tiny regions are almost everywhere in the input space. For any data point in the input space, we can almost surely find such a tiny region in proximity where the linear function is arbitrary. If a point inside this tiny region is selected, the output of the neural network may be unexpected. We believe that these tiny unlearned regions may be a major reason why neural networks are vulnerable to adversarial samples.\nIn layered deep neural networks, the linear functions in all regions are not totally independent. If we use $v^{(l)}$ to denote the weight matrix in layer $l$, the resultant linear weight $w_k$ in eq.(2) is actually the sum, over all active paths, of the products of the corresponding entries of $v^{(l)}$ along each path. When we make a small perturbation $\Delta x$ to any input $x$, the fluctuation in the output of any hidden node can be approximately represented as:\n\n$$\Delta f(x) \propto N \cdot \prod_{l} \mathbb{E}\left[ \left| v^{(l)}_{ij} \right| \right] \quad (4)$$\n\nwhere $N$ denotes the total number of hyperplanes to be crossed when moving $x$ to $x + \Delta x$. In any practical neural network, we normally have at least tens of thousands of hyperplanes crossing the hypercube $U_n = [0, 1]^n$. In other words, for any input $x$ in a high-dimensional space, a small perturbation can easily cross a large number of hyperplanes and enter a tiny unlearned region. When $N$ is fairly large, the above equation indicates that the output of a neural network can still fluctuate dramatically even after all weight vectors are regularized by the L1 or L2 norm. As a reference, we have verified this on some ImageNet data using a VGG16 model. When PGD is used to generate adversarial samples with average perturbation $\|\Delta x\|_2 \le 0.35$, which is an extremely small perturbation since $x$ has over a hundred thousand dimensions on ImageNet, we have observed that on average about $N = 5278$ hyperplanes are crossed per layer even after such a small perturbation is added.\nFinally, since the ubiquitous existence of unlearned tiny regions is an intrinsic property of neural networks given their current model structure, we believe that adversarial training strategies will not be sufficient to completely get rid of adversarial samples. 
In principle, neural networks must be strictly bandlimited to filter out those decaying high frequency components in order to completely eliminate all adversarial samples. More research effort is definitely needed to figure out how to do this effectively and efficiently for neural networks." }, { "heading": "3 THE PROPOSED DEFENCE APPROACH: POST-AVERAGING", "text": "" }, { "heading": "3.1 POST-AVERAGING", "text": "In this paper, we propose a simple post-processing method to smooth out those high frequency components as much as possible, which relies on an idea similar to moving averages for one-dimensional sequential data. Instead of generating a prediction merely from one data point, we use the averaged value within a small neighborhood around the data point, which we call post-averaging here. Mathematically, post-averaging is computed as an integral over a small neighborhood centered at the input:\n\n$$f_C(x) = \frac{1}{V_C} \int \cdots \int_{x' \in C} f(x - x') \, dx' \quad (5)$$\n\nwhere $x$ is the input, $f(x)$ represents the output of the neural network, $C$ denotes a small neighborhood centered at the origin, and $V_C$ denotes its volume. When we choose $C$ to be an $n$-sphere in $\mathbb{R}^n$ of radius $r$, we may simply derive the Fourier transform of $f_C(x)$ as follows:\n\n$$F_C(\omega) = F(\omega) \, \frac{1}{V_C} \int \cdots \int_{x' \in C} e^{-i x' \cdot \omega} \, dx' = F(\omega) \, \frac{\Gamma(\frac{n}{2} + 1)}{\pi^{\frac{n}{2}}} \, \frac{J_{\frac{n}{2}}(r|\omega|)}{(r|\omega|)^{\frac{n}{2}}} \quad (6)$$\n\nwhere $J_{\frac{n}{2}}(\cdot)$ is the Bessel function of the first kind of order $n/2$. Since the Bessel functions $J_\nu(\omega)$ decay at rate $1/\sqrt{\omega}$ as $|\omega| \to \infty$ (Watson, 1995), we have $F_C(\omega) \sim F(\omega) / (r|\omega|)^{\frac{n+1}{2}}$ as $|\omega| \to \infty$. Therefore, if $r$ is chosen properly, the post-averaging operation can significantly bandlimit neural networks by smoothing out high frequency components. Note that similar ideas have been used in (Jiang et al., 1999; Jiang and Lee, 2003) to improve robustness in speech recognition." }, { "heading": "3.2 SAMPLING METHODS", "text": "However, it is intractable to compute the above integral for any meaningful neural network used in practical applications. In this work, we propose a simple numerical method to approximate it. For any input $x$, we select $K$ points $\{x_1, x_2, \cdots, x_K\}$ in the neighborhood $C$ centered at $x$ to approximately compute the integral as\n\n$$f_C(x) \approx \frac{1}{K} \sum_{k=1}^{K} f(x_k). \quad (7)$$\n\nObviously, in order to defend against adversarial samples, it is important to have samples outside the current unlearned tiny region. In the following, we use a simple sampling method based on directional vectors. To generate a relatively even set of samples for eq.(7), we first determine some directional vectors $\hat{v}$, and then move the input $x$ along these directions using several step sizes within the sphere of radius $r$:\n\n$$x' = x + \lambda \cdot \hat{v} \quad (8)$$\n\nwhere $\lambda \in \{\pm\frac{r}{3}, \pm\frac{2r}{3}, \pm r\}$ and $\hat{v}$ is a selected unit-length directional vector. For each selected direction, we generate six samples within $C$ along both the positive and the negative directions to ensure efficiency and even sampling. We use this implementation for the convenience of extending to different types of sampling strategies.\nWe tried several direction sampling strategies, including using the directions towards the closest region boundaries, and found that simple random direction sampling gives the best performance. In this sampling method, we fill the directional vectors with random numbers generated from a standard normal distribution, and then normalize them to have unit length." 
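The sampling scheme above is simple enough to sketch directly. Below is a minimal PyTorch sketch of the post-averaging defence of eqs. (7)-(8) (our own illustrative implementation, not the authors' released code; the defaults r=30 and K=15 follow the values reported for ImageNet later in the paper, and we assume `model` maps a batch of inputs to logits):

```python
# Sketch of post-averaging: average the model's outputs over 6K+1 points
# sampled inside an n-sphere of radius r around the input, along K random
# unit-length directions with step sizes +-r/3, +-2r/3, +-r.
import torch

def post_average(model, x, r=30.0, K=15):
    """x: input batch of shape (B, ...); returns averaged logits."""
    B = x.shape[0]
    samples = [x]                          # include the original input
    for _ in range(K):
        v = torch.randn_like(x)            # random direction per example
        # Normalize each direction vector to unit length.
        v = v / v.flatten(1).norm(dim=1).view(B, *([1] * (x.dim() - 1)))
        for lam in (r / 3, 2 * r / 3, r):  # eq. (8), both signs
            samples.append(x + lam * v)
            samples.append(x - lam * v)
    xs = torch.cat(samples, dim=0)         # (B * (6K+1), ...)
    with torch.no_grad():
        logits = model(xs)                 # split into mini-batches if large
    return logits.view(6 * K + 1, B, -1).mean(dim=0)   # eq. (7)
```

As the paper notes, all 6K+1 samples for one input can be packed into a single mini-batch, so the wall-clock overhead grows sub-linearly in the sample count on a GPU.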
}, { "heading": "4 EXPERIMENTS", "text": "In this section, we evaluate the above post-averaging method on defending against several popular adversarial attacking methods." }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "• Dataset: We evaluated our method on both the ImageNet (Russakovsky et al., 2015) and CIFAR-10 (Krizhevsky et al., 2009) datasets. Since our proposed post-averaging method does not need to re-train neural networks, we do not need to use any training data in our experiments. For evaluation purpose, we use the validation set of the ImageNet dataset. The validation set consists of 50000 images labelled into 1000 categories. For computational efficiency, we randomly choose 5000 images from the ImageNet validation set and evaluate our model on these 5000 images. For the CIFAR-10 dataset, we use the full test set, which consists of 10000 images labelled into 10 categories.\n• Target model: For model on ImageNet, we use a pre-trained ResNet-152 (He et al., 2016) network that is available from PyTorch, while for CIFAR-10, we use a pre-trained ResNet110 network from Yerlan Idelbayev 2. In our experiments, we directly use these pre-trained models without any modification.\n• Source of adversarial attacking methods: We use Foolbox (Rauber et al., 2017), an open source tool box to generate adversarial samples using different adversarial attacking methods. In this work, we tested our method against four popular attacking methods in the literature: Fast Gradient Sign method (FGSM) (Goodfellow et al., 2014), Projected Gradient Descent (PGD) method (Kurakin et al., 2016; Madry et al., 2017), DeepFool (DF) attack method (Moosavi-Dezfooli et al., 2016) and Carlini & Wagner (C&W) L2 attack method (Carlini and Wagner, 2017a). We used these attack methods in their default settings.\n• Threat model: In our experiments, we use an l∞ norm to constrain the allowed perturbation distance." }, { "heading": "4.2 EVALUATION CRITERIA", "text": "For each experiment, we define:\n• Clean set: The dataset that consists of the original images from ImageNet or CIFAR-10. • Attacked set: For every correctly classified image in the Clean set, if an adversarial sample\nis successfully generated under the attacking criteria, the original sample is replaced with the adversarial sample; if no adversarial sample is found, the original sample is kept in the dataset. Meanwhile, all the misclassified images are kept in the dataset without any change. Therefore the Attacked set has the same number of images as the clean set.\nIn our experiments, we evaluate the original network and the network defended by post-averaging on both the Clean and the Attacked sets. The performance is measured in terms of :\n• Accuracy: number of correctly classified images over the whole dataset. • Defence rate: number of successfully defended adversarial samples over the total number\nof adversarial samples in the Attacked set. By ”successfully defended”, it refers to the case where an adversarial sample is correctly classified after the original model is defended by the post-averaging approach.\n2https://github.com/akamaster/pytorch_resnet_cifar10/tree/master/ pretrained_models" }, { "heading": "4.3 EXPERIMENTAL RESULTS", "text": "Table 1 shows the performance of our defence approach against different attacking methods. In this table, the samples for post-averaging are selected within an n-sphere of radius r as in eq.(8), with K = 15 different directions. 
This results in a total of 15 × 2 × 3 + 1 = 91 samples (including the input) for each input image to be used in eq.(7). Moreover, all the adversarial samples generated are restricted to be within the perturbation range ε = 8/255. We show the top-1 accuracy of the original model and the defended model on both the Clean and the Attacked sets, as well as the defence rate of the defended model. Besides, we also show the number of adversarial samples successfully generated by each attacking method in the last column.\nFrom Table 1, we can see that our proposed defence approach is universally robust to all of the attacking methods we have examined. It has achieved defence rates of 80-96% in all the experiments with only a minor performance degradation on the Clean set (less than 2%). Especially on the ImageNet dataset, our method is able to defend against about 95% of the adversarial samples. However, an interesting observation from the experimental results is that the defence rate on the CIFAR-10 dataset is lower than on the usually more challenging ImageNet dataset. We think this may be because data points are sparser in the ImageNet space than in the CIFAR-10 space, as ImageNet has a much larger dimensionality.\nGenerally, using a larger sampling radius r can increase the chance of moving out of the unlearned regions as desired, but it will also introduce more noise that can harm the prediction accuracy; on the other hand, using a smaller sampling radius r can reduce the performance degradation but may not be sufficient to defend against adversarial samples. The optimal value for r varies with different datasets due to their dimensionality and data sparsity. In experiments, we found that r = 30 for ImageNet and r = 6 for CIFAR-10 achieved relatively better performance. Figure 2 shows how the model defence rate on ImageNet varies with different r. As shown in the figure, the optimal value for r also varies across attacking methods, but the performance variations are small. In general, our model retains a high defence rate throughout the r range [15, 30].\nWe also tested the effect of K, the number of sampling directions used, on the model performance. From Table 2, we can see that our model performance is not very sensitive to K. It is able to achieve a good defence rate with only K = 6, that is, 37 samples used for each input image. In implementation, these samples can easily be packed into a mini-batch for fast computation on GPUs. When running on the same machine, we measured the average inference time for a single input image on the original network as 0.04 seconds, while the inference times for our models with different K are shown in Table 2. By comparison, we can see that the inference time after adding post-averaging is roughly $\frac{2}{3}K$ times that of the original network.\nFinally, we evaluated our post-averaging defence approach against attacks with different allowed perturbation ranges ε. The results are shown in Figure 3. As we can see, our model retains a very good attack defence rate up to ε = 32/255. Note that the defence rate against PGD and C&W does not change much as ε varies; this is because PGD and C&W have already successfully generated adversarial samples for most of the correctly classified inputs when ε is small. Hence, their generated adversarial samples will not change much when using a larger ε. For FGSM, our method yields lower defending performance. 
The possible reason is that FGSM tends to generate much larger perturbations than the other three stronger attacking methods under the same setting. A large perturbation is more likely to move samples across class-specific decision boundaries and generate much more confusing samples. In our opinion, this is a general phenomenon in pattern classification, not particular to adversarial attacks." }, { "heading": "5 FINAL REMARKS", "text": "In this paper, we have presented some theoretical results from the Fourier analysis of ReLU neural networks. These results are useful for understanding why neural networks are vulnerable to adversarial samples. Based on these results, we hypothesize that the inevitable and ubiquitous existence of tiny unlearned regions in the model's function mapping may be a major reason for adversarial vulnerability. As a possible defence strategy, we have proposed a simple post-averaging method. Experimental results on the ImageNet and the CIFAR-10 datasets have demonstrated that our simple defence technique turns out to be very effective against many popular attack methods in the literature. Finally, it will be interesting to see whether our post-averaging method will still be robust against any new attack methods in the future." }, { "heading": "APPENDIX: MATHEMATICAL PROOFS", "text": "Definition B.1. A piece-wise linear function is a continuous function $f : \mathbb{R}^n \to \mathbb{R}$ such that there are some hyperplanes passing through the origin and dividing $\mathbb{R}^n$ into $M$ pairwise disjoint regions $R_m$ $(m = 1, 2, \ldots, M)$, on each of which $f$ is linear:\n\n$$f(x) = \begin{cases} w_1 \cdot x & x \in R_1 \\ w_2 \cdot x & x \in R_2 \\ \vdots \\ w_M \cdot x & x \in R_M \end{cases}$$\n\nLemma B.2. The composition of a piece-wise linear function with a ReLU activation function is also a piece-wise linear function.\nProof. Let $r(\cdot)$ denote the ReLU activation function. If $f(x)$ on a region $R_p$ takes both positive and negative values, $r(f(x))$ will break it into two regions $R^+_p$ and $R^0_p$. On the former, $r(f(x)) = f(x)$, and on the latter, $r(f(x)) = 0$, both of which are linear functions. As $f(x)$ on $R_p$ is linear, the common boundary of $R^+_p$ and $R^0_p$ lies inside a hyperplane passing through the origin, namely the kernel of the linear function. Therefore, if $f(x)$ is a piece-wise linear function defined by $k$ hyperplanes resulting in $M$ regions, $r(f(x))$ will be a piece-wise linear function defined by at most $k + M$ hyperplanes.\nTheorem B.3. The output of any hidden unit in an unbiased fully-connected ReLU neural network is a piece-wise linear function.\nProof. This proposition immediately follows from Lemma B.2.\nDefinition B.4. Let $V = \{v_1, v_2, \ldots, v_n\}$ be a set of $n$ independent vectors in $\mathbb{R}^n$. An infinite simplex, $R^+_V$, is defined as the region linearly spanned by $V$ using only positive weights:\n\n$$R^+_V = \left\{ \sum_{k=1}^{n} \alpha_k v_k \,\middle|\, \forall k\ \alpha_k > 0 \right\} \quad (9)$$\n\nTheorem B.5. Each piece-wise linear function $f(x)$ can be formulated as a summation of some functions, $f(x) = \sum_{k=1}^{K} f_k(x)$, each of which is linear and non-zero only in an infinite simplex as follows:\n\n$$f_k(x) = \begin{cases} w_k \cdot x & x \in R^+_{V_k} \\ 0 & \text{otherwise} \end{cases}$$\n\nwhere $V_k$ is a set of $n$ independent vectors, and $w_k$ is a weight vector.\nProof. Each region $R_p$ of a piece-wise linear function $f(x)$, which describes the behavior of a ReLU node, results in a convex polytope when intersected with an affine hyper-plane. This convex polytope can be triangulated into some simplices. Define $V_k$ $(k = 1, 2, \ldots, K)$ as the sets of vertices of these simplices. 
The infinite simplices created by these vector sets will have the desired property, and $f(x)$ can be written as $f(x) = \sum_{k=1}^{K} f_k(x)$.\nAs explained earlier in the original article, by adding $n^2$ hyper-planes to those defining the piece-wise linear function, the output of a ReLU node may be represented as $f(x) = \sum_{q=1}^{Q} g_q(x)$. These hyper-planes are those perpendicular to the standard basis vectors and to differences of pairs of them, that is, $e_i$ $(i = 1, \ldots, n)$ and $e_i - e_j$ $(1 \le i < j \le n)$. Given this representation, the final step to achieve the Fourier transform is the following lemma:\nLemma B.6. The Fourier transform of the following function:\n\n$$s(x) = \begin{cases} h(1 - \mathbf{1} \cdot x) & x \in R^+_{V^*} \\ 0 & \text{otherwise} \end{cases}$$\n\nmay be presented as:\n\n$$S(\omega) = \left( \frac{-i}{\sqrt{2\pi}} \right)^n \sum_{r=0}^{n} \frac{e^{-i\omega_r}}{\prod_{r' \neq r} (\omega_{r'} - \omega_r)} \quad (10)$$\n\nwhere $\omega_r$ is the $r$-th component of the frequency vector $\omega$ $(r = 1, \cdots, n)$, and $\omega_0 = 0$.\nProof. Alternatively, $s(x)$ may be represented as:\n\n$$s(x) = h(\mathbf{1} \cdot x)\, h(1 - \mathbf{1} \cdot x) \prod_{j=1}^{n} h(x_j)\, h(1 - x_j) \quad (11)$$\n\nTherefore, we need to compute the Fourier transform of $h(x)h(1-x)$:\n\n$$\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-ix\omega} h(x)\, h(1-x)\, dx = \frac{1}{\sqrt{2\pi}} \int_{0}^{1} e^{-ix\omega}\, dx \quad (12)$$\n$$= \frac{-i}{\sqrt{2\pi}} \frac{1 - e^{-i\omega}}{\omega} \quad (13)$$\n\nBy taking the inverse Fourier transform of the function:\n\n$$(\sqrt{2\pi})^{n-1} \int_{-\infty}^{\infty} \frac{-i}{\sqrt{2\pi}} \frac{1 - e^{-i\zeta}}{\zeta}\, \delta_n(\omega - \zeta \mathbf{1})\, d\zeta \quad (14)$$\n\nwhere $\delta_n$ is the $n$-dimensional Dirac delta function, it can be shown that it is the Fourier transform of $h(\mathbf{1} \cdot x)\, h(1 - \mathbf{1} \cdot x)$:\n\n$$\left( \frac{1}{\sqrt{2\pi}} \right)^n \int \cdots \int_{\mathbb{R}^n} e^{i\omega \cdot x} (\sqrt{2\pi})^{n-1} \int_{-\infty}^{\infty} \frac{-i}{\sqrt{2\pi}} \frac{1 - e^{-i\zeta}}{\zeta}\, \delta_n(\omega - \zeta \mathbf{1})\, d\zeta\, d\omega \quad (15)$$\n$$= \frac{1}{\sqrt{2\pi}} \int \cdots \int_{\mathbb{R}^n} e^{i\omega \cdot x} \int_{-\infty}^{\infty} \frac{-i}{\sqrt{2\pi}} \frac{1 - e^{-i\zeta}}{\zeta}\, \delta_n(\omega - \zeta \mathbf{1})\, d\zeta\, d\omega \quad (16)$$\n$$= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \frac{-i}{\sqrt{2\pi}} \frac{1 - e^{-i\zeta}}{\zeta} \int \cdots \int_{\mathbb{R}^n} e^{i\omega \cdot x}\, \delta_n(\omega - \zeta \mathbf{1})\, d\omega\, d\zeta \quad (17)$$\n$$= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \frac{-i}{\sqrt{2\pi}} \frac{1 - e^{-i\zeta}}{\zeta}\, e^{i\zeta \mathbf{1} \cdot x}\, d\zeta \quad (18)$$\n$$= h(\mathbf{1} \cdot x)\, h(1 - \mathbf{1} \cdot x) \quad (19)$$\n\nNow we can find the Fourier transform of $s(x)$:\n\n$$S(\omega) = \left( \prod_{r=1}^{n} \frac{-i}{\sqrt{2\pi}} \frac{1 - e^{-i\omega_r}}{\omega_r} \right) * (\sqrt{2\pi})^{n-1} \int_{-\infty}^{\infty} \frac{-i}{\sqrt{2\pi}} \frac{1 - e^{-i\zeta}}{\zeta}\, \delta_n(\omega - \zeta \mathbf{1})\, d\zeta \quad (20)$$\n$$= i \left( \frac{-i}{\sqrt{2\pi}} \right)^{n+2} \int_{-\infty}^{\infty} e^{-i\zeta} \prod_{r=0}^{n} \frac{1 - e^{-i(\omega_r - \zeta)}}{\omega_r - \zeta}\, d\zeta \quad (21)$$\n\nwhere $*$ is the convolution operator. The final integrand may be represented as:\n\n$$e^{-i\zeta} \prod_{r=0}^{n} \frac{1 - e^{-i(\omega_r - \zeta)}}{\omega_r - \zeta} = e^{-i\zeta} \prod_{r=0}^{n} \frac{1}{\omega_r - \zeta} \prod_{r=0}^{n} \left( 1 - e^{-i(\omega_r - \zeta)} \right) \quad (22)$$\n$$= e^{-i\zeta} \sum_{r=0}^{n} \frac{A_r}{\omega_r - \zeta} \prod_{r=0}^{n} \left( 1 - e^{-i(\omega_r - \zeta)} \right) \quad (23)$$\n$$= e^{-i\zeta} \sum_{r=0}^{n} \frac{A_r}{\omega_r - \zeta} \sum_{B \subseteq \Omega} (-1)^{|B|} e^{-i(\sigma_B - |B|\zeta)} \quad (24)$$\n$$= \sum_{r=0}^{n} \frac{A_r}{\omega_r - \zeta} \sum_{B \subseteq \Omega} (-1)^{|B|} e^{-i(\sigma_B - (|B|-1)\zeta)} \quad (25)$$\n\nwhere $\Omega = \{\omega_0, \ldots, \omega_n\}$, $\sigma_B$ is the summation over the elements of $B$, and $A_r = \prod_{r' \neq r} \frac{1}{\omega_{r'} - \omega_r}$. Therefore:\n\n$$\int_{-\infty}^{\infty} e^{-i\zeta} \prod_{r=0}^{n} \frac{1 - e^{-i(\omega_r - \zeta)}}{\omega_r - \zeta}\, d\zeta \quad (26)$$\n$$= \int_{-\infty}^{\infty} \sum_{r=0}^{n} \frac{A_r}{\omega_r - \zeta} \sum_{B \subseteq \Omega} (-1)^{|B|} e^{-i(\sigma_B - (|B|-1)\zeta)}\, d\zeta \quad (27)$$\n$$= \sum_{r=0}^{n} A_r \int_{-\infty}^{\infty} \frac{1}{\omega_r - \zeta} \sum_{B \subseteq \Omega} (-1)^{|B|} e^{-i(\sigma_B - (|B|-1)\zeta)}\, d\zeta \quad (28)$$\n$$= \sum_{r=0}^{n} A_r \int_{-\infty}^{\infty} \frac{1}{\zeta} \sum_{B \subseteq \Omega} (-1)^{|B|+1} e^{-i(\sigma_B - (|B|-1)\omega_r + (|B|-1)\zeta)}\, d\zeta \quad (29)$$\n$$= \sum_{r=0}^{n} A_r \sum_{B \subseteq \Omega} (-1)^{|B|}\, i\pi\, \mathrm{sign}(|B| - 1)\, e^{-i(\sigma_B - (|B|-1)\omega_r)} \quad (30)$$\n\nIf $B$ does not contain $\omega_r$ and has at least 2 elements, then the terms for $B$ and $B \cup \{\omega_r\}$ cancel each other out. Also, $\mathrm{sign}(|B| - 1)$ vanishes if $B$ has only one element. Therefore, there only remain the empty set and sets with two elements, one of them being $\omega_r$. Given the fact that $\sum_r A_r = 0$, the result of the integral is:\n\n$$\int_{-\infty}^{\infty} e^{-i\zeta} \prod_{r=0}^{n} \frac{1 - e^{-i(\omega_r - \zeta)}}{\omega_r - \zeta}\, d\zeta = i\pi \sum_{r=0}^{n} A_r \left( -e^{-i\omega_r} + \sum_{r' \neq r} e^{-i\omega_{r'}} \right) \quad (31)$$\n$$= -2i\pi \sum_{r=0}^{n} A_r e^{-i\omega_r} \quad (32)$$\n\nFinally, substituting (32) into (21) yields the desired result.\nTheorem B.7. The Fourier transform of the output of any hidden node in a fully-connected ReLU neural network may be represented as $\sum_{q=1}^{Q} w_q A_q^{-1} \nabla S(\omega A_q^{-1})$, where $\nabla$ denotes the differential operator.\nProof. 
As discussed in the original paper, $f(x) = \sum_{q=1}^{Q} g_q(x)$ where:\n\n$$g_q(x) = \begin{cases} \bar{w}_q \cdot \bar{x}_q\, h(1 - \mathbf{1} \cdot \bar{x}_q) & \bar{x}_q \in R^+_{V^*} \\ 0 & \text{otherwise} \end{cases} \quad (33)$$\n\nor equivalently:\n\n$$g_q(x) = \bar{w}_q \cdot \bar{x}_q\, s(\bar{x}_q) \quad (34)$$\n\nTherefore:\n\n$$F(\omega) = \sum_{q=1}^{Q} G_q(\omega) \quad (35)$$\n$$= \sum_{q=1}^{Q} \bar{w}_q \cdot \nabla S(\bar{\omega}_q) \quad (36)$$\n\nwhere $\bar{\omega}_q = \omega A_q^{-1}$.\nDERIVATION OF EQ.(6)\nAs for the Fourier transform computed in Section 3.1, it should be mentioned that the integral in equation (6) is the Fourier transform of:\n\n$$h_r(x) = h(r - |x|) \quad (37)$$\n\nwhich can be derived utilizing the property of Fourier transforms of radially symmetric functions (Stein and Weiss, 1971):\n\n$$H_r(\omega) = |\omega|^{-\frac{n-2}{2}} \int_{0}^{\infty} J_{\frac{n-2}{2}}(|\omega|\rho)\, \rho^{\frac{n-2}{2}}\, h(r - \rho)\, \rho\, d\rho \quad (38)$$\n$$= |\omega|^{-\frac{n-2}{2}} \int_{0}^{r} J_{\frac{n-2}{2}}(|\omega|\rho)\, \rho^{\frac{n}{2}}\, d\rho \quad (39)$$\n$$= \left( \frac{r}{|\omega|} \right)^{\frac{n}{2}} J_{\frac{n}{2}}(r|\omega|) \quad (40)$$\n\nGiven this transform:\n\n$$F_C(\omega) = F(\omega)\, \frac{1}{V_C} \int \cdots \int_{x' \in C} e^{-i x' \cdot \omega}\, dx' \quad (41)$$\n$$= F(\omega)\, \frac{\Gamma(\frac{n}{2} + 1)}{\pi^{\frac{n}{2}}}\, \frac{J_{\frac{n}{2}}(r|\omega|)}{(r|\omega|)^{\frac{n}{2}}} \quad (42)$$" } ]
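As a quick consistency check of Lemma 2.6 / B.6 (our own verification, not part of the paper), eq. (3) can be evaluated at n = 1, where s(x) reduces to h(x)h(1 − x) and should therefore reproduce eqs. (12)-(13):

```latex
% Sanity check of eq. (3) for n = 1 (with \omega_0 = 0):
\begin{align*}
S(\omega_1)
  &= \frac{-i}{\sqrt{2\pi}}
     \left[ \frac{e^{-i\omega_0}}{\omega_1 - \omega_0}
          + \frac{e^{-i\omega_1}}{\omega_0 - \omega_1} \right]
   = \frac{-i}{\sqrt{2\pi}} \, \frac{1 - e^{-i\omega_1}}{\omega_1},
\end{align*}
% which matches the direct computation of the Fourier transform of
% h(x)h(1-x) in eqs. (12)-(13), as expected.
```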
2019
BANDLIMITING NEURAL NETWORKS AGAINST ADVERSARIAL ATTACKS
SP:e86910b99dbc07691e2882c85c87d150de40a0ff
[ "This paper focuses on learning a representation that facilitates efficient content-based retrieval. Although the representations that are learned from deep neural networks can contain rich information, it is computationally expensive to use those representations to perform a search for the best match. In particular, computing the Euclidean distance between a query and an instance scales linearly with the size of the representation. Prior approaches to this problem have focused either on: (1) compactifying the learned representations into another form, such as a Hamming code, in a way that preserves the identifiability of an instance; (2) resorting to approximate methods that sacrifice accurate search for efficiency.", "This paper proposes to learn sparse representation in neural networks for retrieval in large database of vectors. Such sparse representation, when the fraction of non-zeros is high, can be computed using sparse matrix multiplication, or variants of inverted index scoring and lead to potentially lower FLOPs needed. This paper proposes to induce sparsity by adding a regularization term, which counts the expected number of FLOPs needed for sparse scoring." ]
Deep representation learning has become one of the most widely adopted approaches for visual search, recommendation, and identification. Retrieval of such representations from a large database is however computationally challenging. Approximate methods based on learning compact representations have been widely explored for this problem, such as locality sensitive hashing, product quantization, and PCA. In this work, in contrast to learning compact representations, we propose to learn high dimensional and sparse representations that have similar representational capacity to dense embeddings while being more efficient due to sparse matrix multiplication operations, which can be much faster than dense multiplication. Following the key insight that the number of operations decreases quadratically with the sparsity of embeddings provided the non-zero entries are distributed uniformly across dimensions, we propose a novel approach to learn such distributed sparse embeddings via the use of a carefully constructed regularization function that directly minimizes a continuous relaxation of the number of floating-point operations (FLOPs) incurred during retrieval. Our experiments show that our approach is competitive with the other baselines and yields a similar or better speed-vs-accuracy tradeoff on practical datasets.1
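To make the quadratic dependence on sparsity concrete, here is a small self-contained Monte-Carlo sketch (our own illustration; the database size N, dimension d and non-zero fraction p are arbitrary choices) contrasting uniformly distributed non-zeros with a skewed distribution at the same overall sparsity:

```python
# Monte-Carlo check of the speedup claim from the abstract: the cost of
# inverted-index scoring depends on how non-zeros are spread across
# dimensions, not just on the overall sparsity.
import numpy as np

rng = np.random.default_rng(0)
N, d, p = 10000, 512, 0.05        # database size, dim, avg non-zero fraction

def flops(per_dim_probs):
    """Multiply count for one query against the database when dimension j
    is non-zero with probability per_dim_probs[j]."""
    db = rng.random((N, d)) < per_dim_probs    # database non-zero patterns
    q = rng.random(d) < per_dim_probs          # query non-zero pattern
    return int(db[:, q].sum())                 # ops = non-zeros in active dims

uniform = np.full(d, p)                        # even distribution
skewed = np.zeros(d)                           # same total, concentrated
skewed[: d // 10] = p * 10

print("dense FLOPs      :", N * d)
print("uniform sparsity :", flops(uniform))    # ~ N*d*p^2 (the 1/p^2 gain)
print("skewed sparsity  :", flops(skewed))     # ~10x worse than uniform
```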
[ { "affiliations": [], "name": "Biswajit Paria" }, { "affiliations": [], "name": "Chih-Kuan Yeh" }, { "affiliations": [], "name": "Ian E.H. Yen" }, { "affiliations": [], "name": "Ning Xu" }, { "affiliations": [], "name": "Pradeep Ravikumar" }, { "affiliations": [], "name": "Barnabás Póczos" } ]
[ { "authors": [ "Martín Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for large-scale machine learning", "venue": "In 12th USENIX Symposium on Operating Systems Design and Implementation", "year": 2016 }, { "authors": [ "Alexandr Andoni", "Piotr Indyk", "Thijs Laarhoven", "Ilya Razenshteyn", "Ludwig Schmidt" ], "title": "Practical and optimal lsh for angular distance", "venue": "NeurIPS,", "year": 2015 }, { "authors": [ "Devansh Arpit", "Yingbo Zhou", "Hung Ngo", "Venu Govindaraju" ], "title": "Why regularized auto-encoders learn sparse representation", "venue": "arXiv preprint arXiv:1505.05561,", "year": 2015 }, { "authors": [ "Alex Auvolat", "Sarath Chandar", "Pascal Vincent", "Hugo Larochelle", "Yoshua Bengio" ], "title": "Clustering is efficient for approximate maximum inner product search", "venue": "arXiv preprint arXiv:1507.05910,", "year": 2015 }, { "authors": [ "Jimmy Ba", "Brendan Frey" ], "title": "Adaptive dropout for training deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Dmitry Baranchuk", "Artem Babenko", "Yury Malkov" ], "title": "Revisiting the inverted indices for billion-scale approximate nearest neighbors", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "L Susan Blackford", "Antoine Petitet", "Roldan Pozo", "Karin Remington", "R Clint Whaley", "James Demmel", "Jack Dongarra", "Iain Duff", "Sven Hammarling", "Greg Henry" ], "title": "An updated set of basic linear algebra subprograms (blas)", "venue": "ACM Transactions on Mathematical Software,", "year": 2002 }, { "authors": [ "Yue Cao", "Mingsheng Long", "Jianmin Wang", "Han Zhu", "Qingfu Wen" ], "title": "Deep quantization network for efficient image retrieval", "venue": null, "year": 2016 }, { "authors": [ "Yue Cao", "Mingsheng Long", "Bin Liu", "Jianmin Wang" ], "title": "Deep cauchy hashing for hamming space retrieval", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Moses S Charikar" ], "title": "Similarity estimation techniques from rounding algorithms", "venue": "In Proceedings of the thiry-fourth annual ACM symposium on Theory of computing,", "year": 2002 }, { "authors": [ "Sheng Chen", "Yang Liu", "Xiang Gao", "Zhen Han" ], "title": "Mobilefacenets: Efficient cnns for accurate real-time face verification on mobile devices", "venue": "arXiv preprint arXiv:1804.07573,", "year": 2018 }, { "authors": [ "Sanjoy Dasgupta", "Charles F Stevens", "Saket Navlakha" ], "title": "A neural algorithm for a fundamental computing problem", "venue": null, "year": 2017 }, { "authors": [ "Jiankang Deng", "Jia Guo", "Stefanos Zafeiriou" ], "title": "Arcface: Additive angular margin loss for deep face recognition", "venue": "arXiv preprint arXiv:1801.07698,", "year": 2018 }, { "authors": [ "Venice Erin Liong", "Jiwen Lu", "Gang Wang", "Pierre Moulin", "Jie Zhou" ], "title": "Deep hashing for compact binary codes learning", "venue": null, "year": 2015 }, { "authors": [ "Tiezheng Ge", "Kaiming He", "Qifa Ke", "Jian Sun" ], "title": "Optimized product quantization for approximate nearest neighbor", "venue": null, "year": 2013 }, { "authors": [ "Aristides Gionis", "Piotr Indyk", "Rajeev Motwani" ], "title": "Similarity search in high dimensions via hashing", "venue": null, "year": 
1999 }, { "authors": [ "Xavier Glorot", "Antoine Bordes", "Yoshua Bengio" ], "title": "Deep sparse rectifier neural networks", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Yandong Guo", "Lei Zhang", "Yuxiao Hu", "Xiaodong He", "Jianfeng Gao" ], "title": "MS-Celeb-1M: A dataset and benchmark for large scale face recognition", "venue": "In European Conference on Computer", "year": 2016 }, { "authors": [ "M Hadi Kiapour", "Xufeng Han", "Svetlana Lazebnik", "Alexander C Berg", "Tamara L Berg" ], "title": "Where to buy it: Matching street clothing photos in online shops", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Patrick Haffner" ], "title": "Fast transpose methods for kernel learning on sparse data", "venue": "In Proceedings of the 23rd international conference on Machine learning,", "year": 2006 }, { "authors": [ "Song Han", "Xingyu Liu", "Huizi Mao", "Jing Pu", "Ardavan Pedram", "Mark A Horowitz", "William J Dally" ], "title": "Eie: efficient inference engine on compressed deep neural network", "venue": "In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA),", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Geoffrey E Hinton", "Ruslan R Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": null, "year": 2006 }, { "authors": [ "Gary B Huang", "Marwan Mattar", "Tamara Berg", "Eric Learned-Miller" ], "title": "Labeled faces in the wild: A database forstudying face recognition in unconstrained environments", "venue": "In Workshop on faces in’Real-Life’Images: detection,", "year": 2008 }, { "authors": [ "Herve Jegou", "Matthijs Douze", "Cordelia Schmid" ], "title": "Product quantization for nearest neighbor search", "venue": null, "year": 2011 }, { "authors": [ "Yeonwoo Jeong", "Hyun Oh Song" ], "title": "Efficient end-to-end learning for quantizable representations", "venue": null, "year": 2018 }, { "authors": [ "Yushi Jing", "David Liu", "Dmitry Kislyuk", "Andrew Zhai", "Jiajing Xu", "Jeff Donahue", "Sarah Tavel" ], "title": "Visual search at pinterest", "venue": "In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2015 }, { "authors": [ "Jeff Johnson", "Matthijs Douze", "Hervé Jégou" ], "title": "Billion-scale similarity search with gpus", "venue": "arXiv preprint arXiv:1702.08734,", "year": 2017 }, { "authors": [ "Koray Kavukcuoglu", "Marc’Aurelio Ranzato", "Yann LeCun" ], "title": "Fast inference in sparse coding algorithms with applications to object recognition", "venue": "arXiv preprint arXiv:1010.3467,", "year": 2010 }, { "authors": [ "Ira Kemelmacher-Shlizerman", "Steven M Seitz", "Daniel Miller", "Evan Brossard" ], "title": "The megaface benchmark: 1 million faces for recognition at scale", "venue": null, "year": 2016 }, { "authors": [ "Douwe Kiela", "Léon Bottou" ], "title": "Learning image embeddings using convolutional neural networks for improved multi-modal semantics", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", 
"venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Gregory Koch", "Richard Zemel", "Ruslan Salakhutdinov" ], "title": "Siamese neural networks for one-shot image recognition", "venue": "In ICML Deep Learning Workshop,", "year": 2015 }, { "authors": [ "Deguang Kong", "Ryohei Fujimaki", "Ji Liu", "Feiping Nie", "Chris Ding" ], "title": "Exclusive feature learning on arbitrary structures via `1,2-norm", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "Cifar-10 and cifar-100 datasets", "venue": "URl: https://www. cs. toronto. edu/kriz/cifar. html,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "NeurIPS,", "year": 2012 }, { "authors": [ "Marcin Krotkiewski", "Marcin Dabrowski" ], "title": "Parallel symmetric sparse matrix–vector product on scalar multi-core cpus", "venue": "Parallel Computing,", "year": 2010 }, { "authors": [ "Taku Kudo", "Yuji Matsumoto" ], "title": "Fast methods for kernel-based text analysis", "venue": "In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume", "year": 2003 }, { "authors": [ "Honglak Lee", "Chaitanya Ekanadham", "Andrew Y Ng" ], "title": "Sparse deep belief net model for visual area v2", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Qi Li", "Zhenan Sun", "Ran He", "Tieniu Tan" ], "title": "Deep supervised discrete hashing", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Wenye Li", "Jingwei Mao", "Yin Zhang", "Shuguang Cui" ], "title": "Fast similarity search via optimal sparse lifting", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Baoyuan Liu", "Min Wang", "Hassan Foroosh", "Marshall Tappen", "Marianna Pensky" ], "title": "Sparse convolutional neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Weiyang Liu", "Yandong Wen", "Zhiding Yu", "Meng Yang" ], "title": "Large-margin softmax loss for convolutional neural networks", "venue": "In ICML,", "year": 2016 }, { "authors": [ "David G Lowe" ], "title": "Distinctive image features from scale-invariant keypoints", "venue": "International journal of computer vision,", "year": 2004 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Yury Malkov", "Alexander Ponomarenko", "Andrey Logvinov", "Vladimir Krylov" ], "title": "Approximate nearest neighbor algorithm based on navigable small world graphs", "venue": "Information Systems,", "year": 2014 }, { "authors": [ "Yury A Malkov", "Dmitry" ], "title": "A Yashunin. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs", "venue": null, "year": 2018 }, { "authors": [ "Frank J Massey Jr." 
], "title": "The kolmogorov-smirnov test for goodness of fit", "venue": "Journal of the American statistical Association,", "year": 1951 }, { "authors": [ "Lukas Meier", "Sara Van De Geer", "Peter Bühlmann" ], "title": "The group lasso for logistic regression", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2008 }, { "authors": [ "John Mellor-Crummey", "John Garvin" ], "title": "Optimizing sparse matrix–vector product computations using unroll and jam", "venue": "The International Journal of High Performance Computing Applications,", "year": 2004 }, { "authors": [ "Dmitry Molchanov", "Arsenii Ashukha", "Dmitry Vetrov" ], "title": "Variational dropout sparsifies deep neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Stylianos Moschoglou", "Athanasios Papaioannou", "Christos Sagonas", "Jiankang Deng", "Irene Kotsia", "Stefanos Zafeiriou" ], "title": "Agedb: the first manually collected, in-the-wild age database", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshop,", "year": 2017 }, { "authors": [ "Yair Movshovitz-Attias", "Alexander Toshev", "Thomas K Leung", "Sergey Ioffe", "Saurabh Singh" ], "title": "No fuss distance metric learning using proxies", "venue": "arXiv preprint arXiv:1703.07464,", "year": 2017 }, { "authors": [ "Paul Neculoiu", "Maarten Versteegh", "Mihai Rotaru" ], "title": "Learning text similarity with siamese recurrent networks", "venue": "In Proceedings of the 1st Workshop on Representation Learning for NLP,", "year": 2016 }, { "authors": [ "Behnam Neyshabur", "Nathan Srebro" ], "title": "On symmetric and asymmetric lshs for inner product search", "venue": "In International Conference on Machine Learning,", "year": 1926 }, { "authors": [ "Andrew Ng" ], "title": "Sparse autoencoder", "venue": "CS294A Lecture notes,", "year": 2011 }, { "authors": [ "Hong-Wei Ng", "Stefan Winkler" ], "title": "A data-driven approach to cleaning large face datasets", "venue": "In Image Processing (ICIP),", "year": 2014 }, { "authors": [ "Qingqun Ning", "Jianke Zhu", "Zhiyuan Zhong", "Steven CH Hoi", "Chun Chen" ], "title": "Scalable image retrieval by sparse product quantization", "venue": "IEEE Transactions on Multimedia,", "year": 2016 }, { "authors": [ "Mohammad Norouzi", "David J Fleet", "Ruslan R Salakhutdinov" ], "title": "Hamming distance metric learning", "venue": "NeurIPS,", "year": 2012 }, { "authors": [ "Hyun Oh Song", "Stefanie Jegelka", "Vivek Rathod", "Kevin Murphy" ], "title": "Deep metric learning via facility location", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Bruno A Olshausen", "David J Field" ], "title": "Sparse coding with an overcomplete basis set: A strategy employed by v1", "venue": "Vision research,", "year": 1997 }, { "authors": [ "Angshuman Parashar", "Minsoo Rhu", "Anurag Mukkara", "Antonio Puglielli", "Rangharajan Venkatesan", "Brucek Khailany", "Joel Emer", "Stephen W Keckler", "William J Dally" ], "title": "Scnn: An accelerator for compressed-sparse convolutional neural networks", "venue": "ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA),", "year": 2017 }, { "authors": [ "Sungrae Park", "JunKeon Park", "Su-Jin Shin", "Il-Chul Moon" ], "title": "Adversarial dropout for supervised and semi-supervised learning", "venue": "In Thirty-Second AAAI Conference on 
Artificial Intelligence,", "year": 2018 }, { "authors": [ "Viktor K Prasanna", "Gerald R Morris" ], "title": "Sparse matrix computations on reconfigurable hardware", "venue": null, "year": 2007 }, { "authors": [ "Maxim Raginsky", "Svetlana Lazebnik" ], "title": "Locality-sensitive binary codes from shift-invariant kernels", "venue": "NeurIPS,", "year": 2009 }, { "authors": [ "Parikshit Ram", "Alexander G Gray" ], "title": "Maximum inner-product search using cone trees", "venue": "In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2012 }, { "authors": [ "Marc’Aurelio Ranzato", "Christopher Poultney", "Sumit Chopra", "Yann L Cun" ], "title": "Efficient learning of sparse representations with an energy-based model", "venue": "In Advances in neural information processing systems,", "year": 2007 }, { "authors": [ "Marc’Aurelio Ranzato", "Y-Lan Boureau", "Yann L Cun" ], "title": "Sparse feature learning for deep belief networks", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Alexandre Sablayrolles", "Matthijs Douze", "Nicolas Usunier", "Hervé Jégou" ], "title": "How should we evaluate supervised hashing", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2017 }, { "authors": [ "Alexandre Sablayrolles", "Matthijs Douze", "Cordelia Schmid", "Hervé Jégou" ], "title": "Spreading vectors for similarity search", "venue": null, "year": 2019 }, { "authors": [ "Shibani Santurkar", "Dimitris Tsipras", "Andrew Ilyas", "Aleksander Madry" ], "title": "How does batch normalization help optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Florian Schroff", "Dmitry Kalenichenko", "James Philbin" ], "title": "Facenet: A unified embedding for face recognition and clustering", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Devashish Shankar", "Sujay Narumanchi", "HA Ananya", "Pramod Kompalli", "Krishnendu Chaudhury" ], "title": "Deep learning based large scale visual recommendation and search for e-commerce", "venue": "arXiv preprint arXiv:1703.02344,", "year": 2017 }, { "authors": [ "Anshumali Shrivastava", "Ping Li" ], "title": "Asymmetric lsh (alsh) for sublinear time maximum inner product search (mips)", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Nimit Sharad Sohoni", "Christopher Richard Aberger", "Megan Leszczynski", "Jian Zhang", "Christopher Ré" ], "title": "Low-memory neural network training: A technical report", "venue": null, "year": 1904 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. 
The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Robert Tibshirani" ], "title": "Regression shrinkage and selection via the lasso", "venue": "Journal of the Royal Statistical Society: Series B (Methodological),", "year": 1996 }, { "authors": [ "Francisco Vazquez", "G Ortega", "José-Jesús Fernández", "Ester M Garzón" ], "title": "Improving the performance of the sparse matrix vector product with gpus", "venue": "In 2010 10th IEEE International Conference on Computer and Information Technology,", "year": 2010 }, { "authors": [ "Francisco Vázquez", "José-Jesús Fernández", "Ester M Garzón" ], "title": "A new approach for sparse matrix vector product on nvidia gpus", "venue": "Concurrency and Computation: Practice and Experience,", "year": 2011 }, { "authors": [ "Jingdong Wang", "Ting Zhang", "Nicu Sebe", "Heng Tao Shen" ], "title": "A survey on learning to hash", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Xueyi Wang" ], "title": "A fast exact k-nearest neighbors algorithm for high dimensional search using k-means clustering and triangle inequality", "venue": "In Neural Networks (IJCNN), The 2011 International Joint Conference on,", "year": 2011 }, { "authors": [ "Kilian Q Weinberger", "Lawrence K Saul" ], "title": "Distance metric learning for large margin nearest neighbor classification", "venue": "Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "Wei Wen", "Chunpeng Wu", "Yandan Wang", "Yiran Chen", "Hai Li" ], "title": "Learning structured sparsity in deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Yan Zhang", "Yasser H Shalabi", "Rishabh Jain", "Krishna K Nagar", "Jason D Bakos" ], "title": "Fpga vs. gpu for sparse matrix vector multiply", "venue": "In 2009 International Conference on Field-Programmable Technology,", "year": 2009 }, { "authors": [ "Yang Zhou", "Rong Jin", "Steven Chu-Hong Hoi" ], "title": "Exclusive lasso for multi-task feature selection", "venue": "In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Ling Zhuo", "Viktor K Prasanna" ], "title": "Sparse matrix-vector multiplication on fpgas", "venue": "In Proceedings of the 2005 ACM/SIGDA 13th international symposium on Field-programmable gate arrays,", "year": 2005 } ]
[ { "heading": "1 INTRODUCTION", "text": "Learning semantic representations using deep neural networks (DNN) is now a fundamental facet of applications ranging from visual search (Jing et al., 2015; Hadi Kiapour et al., 2015), semantic text matching (Neculoiu et al., 2016), oneshot classification (Koch et al., 2015), clustering (Oh Song et al., 2017), and recommendation (Shankar et al., 2017). The high-dimensional dense embeddings generated from DNNs however pose a computational challenge for performing nearest neighbor search in large-scale problems with millions of instances. In particular, when the embedding dimension is high, evaluating the distance of any query to all the instances in a large database is expensive, so that efficient search without sacrificing accuracy is difficult. Representations generated using DNNs typically have a higher dimension compared to hand-crafted features such as SIFT (Lowe, 2004), and moreover are dense. The key caveat with dense features is that unlike bag-of-words features they cannot be efficiently searched through an inverted index, without approximations.\nSince accurate search in high dimensions is prohibitively expensive in practice (Wang, 2011), one has to typically sacrifice accuracy for efficiency by resorting to approximate methods. Addressing the problem of efficient approximate Nearest-Neighbor Search (NNS) (Jegou et al., 2011) or Maximum Inner-Product Search (MIPS) (Shrivastava and Li, 2014) is thus an active area of research, which we review in brief in the related work section. Most approaches (Charikar, 2002; Jegou et al., 2011) aim to learn compact lower-dimensional representations that preserve distance information.\nWhile there has been ample work on learning compact representations, learning sparse higher dimensional representations have been addressed only recently (Jeong and Song, 2018; Cao et al., 2018). As a seminal instance, Jeong and Song (2018) propose an end-to-end approach to learn\n∗Part of the work was done when BP was a research intern at Snap Inc. 1The implementation is available at https://github.com/biswajitsc/sparse-embed\nsparse and high-dimensional hashes, showing significant speed-up in retrieval time on benchmark datasets compared to dense embeddings. This approach has also been motivated from a biological viewpoint (Li et al., 2018) by relating to a fruit fly’s olfactory circuit, thus suggesting the possibility of hashing using higher dimensions instead of reducing the dimensionality. Furthermore, as suggested by Glorot et al. (2011), sparsity can have additional advantages of linear separability and information disentanglement.\nIn a similar vein, in this work, we propose to learn high dimensional embeddings that are sparse and hence efficient to retrieve using sparse matrix multiplication operations. In contrast to compact lowerdimensional ANN-esque representations that typically lead to decreased representational power, a key facet of our higher dimensional sparse embeddings is that they can have the same representational capacity as the initial dense embeddings. The core idea behind our approach is inspired by two key observations: (i) retrieval of d (high) dimensional sparse embeddings with fraction p of non-zero values on an average, can be sped up by a factor of 1/p. (ii) The speed up can be further improved to a factor of 1/p2 by ensuring that the non-zero values are evenly distributed across all the dimensions. 
This indicates that sparsity alone is not sufficient to ensure maximal speedup; the distribution of the non-zero values plays a significant role as well. This motivates us to consider the effect of sparsity on the number of floating point operations (FLOPs) required for retrieval with an inverted index. We propose a penalty function on the embedding vectors that is a continuous relaxation of the exact number of FLOPs, and encourages an even distribution of the non-zeros across the dimensions.\nWe apply our approach to the large scale metric learning problem of learning embeddings for facial images. Our training loss consists of a metric learning (Weinberger and Saul, 2009) loss aimed at learning embeddings that mimic a desired metric, and a FLOPs loss to minimize the number of operations. We perform an empirical evaluation of our approach on the Megaface dataset (Kemelmacher-Shlizerman et al., 2016), and show that our proposed method successfully learns high-dimensional sparse embeddings that are orders-of-magnitude faster. We compare our approach to multiple baselines demonstrating an improved or similar speed-vs-accuracy trade-off.\nThe rest of the paper is organized as follows. In Section 3 we analyze the expected number of FLOPs, for which we derive an exact expression. In Section 4 we derive a continuous relaxation that can be used as a regularizer, and optimized using gradient descent. We also provide some analytical justifications for our relaxation. In Section 5 we then compare our method on a large metric learning task showing an improved speed-accuracy trade-off compared to the baselines." }, { "heading": "2 RELATED WORK", "text": "Learning compact representations, ANN. Exact retrieval of the top-k nearest neighbours is expensive in practice for high-dimensional dense embeddings learned from deep neural networks, with practitioners often resorting to approximate nearest neighbours (ANN) for efficient retrieval. Popular approaches for ANN include Locality sensitive hashing (LSH) (Gionis et al., 1999; Andoni et al., 2015; Raginsky and Lazebnik, 2009) relying on random projections, Navigable small world graphs (NSW) (Malkov et al., 2014) and hierarchical NSW (HNSW) (Malkov and Yashunin, 2018) based on constructing efficient search graphs by finding clusters in the data, Product Quantization (PQ) (Ge et al., 2013; Jegou et al., 2011) approaches which decompose the original space into a cartesian product of low-dimensional subspaces and quantize each of them separately, and Spectral hashing (Weiss et al., 2009) which involves an NP hard problem of computing an optimal binary hash, which is relaxed to continuous valued hashes, admitting a simple solution in terms of the spectrum of the similarity matrix. Overall, for compact representations and to speed up query times, most of these approaches use a variety of carefully chosen data structures, such as hashes (Neyshabur and Srebro, 2015; Wang et al., 2018), locality sensitive hashes (Andoni et al., 2015), inverted file structure (Jegou et al., 2011; Baranchuk et al., 2018), trees (Ram and Gray, 2012), clustering (Auvolat et al., 2015), quantization sketches (Jegou et al., 2011; Ning et al., 2016), as well as dimensionality reductions based on principal component analysis and t-SNE (Maaten and Hinton, 2008).\nEnd to end ANN. Learning the ANN structure end-to-end is another thread of work that has gained popularity recently. Norouzi et al. 
(2012) propose to learn binary representations for the Hamming metric by minimizing a margin based triplet loss. Erin Liong et al. (2015) use the signed output of a deep neural network as hashes, while imposing independence and orthogonality conditions on the hash bits. Other end-to-end learning approaches for learning hashes include (Cao et al., 2016; Li et al., 2017). An advantage of end-to-end methods is that they learn hash codes that are optimally compatible to the feature representations.

Sparse representations. Sparse representations have been previously studied from various viewpoints. Glorot et al. (2011) explore sparse neural networks in modeling biological neural networks and show improved performance, along with additional advantages such as better linear separability and information disentangling. Ranzato et al. (2008; 2007); Lee et al. (2008) propose learning sparse features using deep belief networks. Olshausen and Field (1997) explore sparse coding with an overcomplete basis, from a neurobiological viewpoint. Sparsity in auto-encoders has been explored by Ng et al. (2011); Kavukcuoglu et al. (2010). Arpit et al. (2015) provide sufficient conditions to learn sparse representations, and further provide an excellent review of sparse autoencoders. Dropout (Srivastava et al., 2014) and a number of its variants (Molchanov et al., 2017; Park et al., 2018; Ba and Frey, 2013) have also been shown to impose sparsity in neural networks.

High-dimensional sparse representations. Sparse deep hashing (SDH) (Jeong and Song, 2018) is an end-to-end approach that involves starting with a pre-trained network and then performing alternate minimization consisting of two minimization steps, one for training the binary hashes and the other for training the continuous dense embeddings. The first involves computing an optimal hash best compatible with the dense embedding using a min-cost-max-flow approach. The second step is a gradient descent step to learn a dense embedding by minimizing a metric learning loss. A related approach, k-sparse autoencoders (Makhzani and Frey, 2013), learns representations in an unsupervised manner with at most k non-zero activations. The idea of high-dimensional sparse embeddings is also reinforced by the sparse-lifting approach (Li et al., 2018), where sparse high-dimensional embeddings are learned from dense features. The idea is motivated by the biologically inspired fly algorithm (Dasgupta et al., 2017). Experimental results indicated that sparse-lifting is an improvement both in terms of precision and speed, when compared to traditional techniques like LSH that rely on dimensionality reduction.

$\ell_1$ regularization, lasso. The Lasso (Tibshirani, 1996) is the most popular approach to impose sparsity and has been used in a variety of applications including sparsifying and compressing neural networks (Liu et al., 2015; Wen et al., 2016). The group lasso (Meier et al., 2008) is an extension of the lasso that encourages all features in a specified group to be selected together. Another extension, the exclusive lasso (Kong et al., 2014; Zhou et al., 2010), on the other hand, is designed to select a single feature in a group. Our proposed regularizer, originally motivated by the idea of minimizing FLOPs, closely resembles the exclusive lasso. Our focus however is on sparsifying the produced embeddings rather than sparsifying the parameters.

Sparse matrix vector product (SpMV).
Existing work on SpMV computations includes (Haffner, 2006; Kudo and Matsumoto, 2003), proposing algorithms based on inverted indices. Inverted indices are however known to suffer from severe cache misses. Linear algebra back-ends such as BLAS (Blackford et al., 2002) rely on efficient cache accesses to achieve speedup. Haffner (2006); Mellor-Crummey and Garvin (2004); Krotkiewski and Dabrowski (2010) propose cache-efficient algorithms for sparse matrix vector products. There has also been substantial interest in speeding up SpMV computations using specialized hardware such as GPUs (Vazquez et al., 2010; Vázquez et al., 2011), FPGAs (Zhuo and Prasanna, 2005; Zhang et al., 2009), and custom hardware (Prasanna and Morris, 2007).

Metric learning. While there exist many settings for learning embeddings (Hinton and Salakhutdinov, 2006; Kingma and Welling, 2013; Kiela and Bottou, 2014), in this paper we restrict our attention to the context of metric learning (Weinberger and Saul, 2009). Some examples of metric learning losses include the large margin softmax loss for CNNs (Liu et al., 2016), the triplet loss (Schroff et al., 2015), and the proxy based metric loss (Movshovitz-Attias et al., 2017)." }, { "heading": "3 EXPECTED NUMBER OF FLOPS", "text": "In this section we study the effect of sparsity on the expected number of FLOPs required for retrieval and derive an exact expression for the expected number of FLOPs. The main idea in this paper is based on the key insight that if each of the dimensions of the embedding is non-zero with a probability $p$ (not necessarily independently), then it is possible to achieve a speedup of up to an order of $1/p^2$ using an inverted index on the set of embeddings. Consider two embedding vectors $u, v$. Computing $u^\top v$ requires computing only the pointwise products at the indices $k$ where both $u_k$ and $v_k$ are non-zero. This is the main motivation behind using inverted indices and leads to the aforementioned speedup. Before we analyze it more formally, we introduce some notation.

Let $D = \{(x_i, y_i)\}_{i=1}^n$ be a set of $n$ independent training samples drawn from $Z = X \times Y$ according to a distribution $P$, where $X, Y$ denote the input and label spaces respectively. Let $F = \{f_\theta : X \to \mathbb{R}^d \mid \theta \in \Theta\}$ be a class of functions parameterized by $\theta \in \Theta$, mapping input instances to $d$-dimensional embeddings. Typically, for image tasks, the function is chosen to be a suitable CNN (Krizhevsky et al., 2012). Suppose $X, Y \sim P$; then define the activation probability $p_j = P(f_\theta(X)_j \neq 0)$, and its empirical version $\bar{p}_j = \frac{1}{n} \sum_{i=1}^n \mathbb{I}[f_\theta(x_i)_j \neq 0]$.

We now show that sparse embeddings can lead to a quadratic speedup. Consider a $d$-dimensional sparse query vector $u_q = f_\theta(x_q) \in \mathbb{R}^d$ and a database of $n$ sparse vectors $\{v_i = f_\theta(x^{(i)})\}_{i=1}^n \subset \mathbb{R}^d$ forming a matrix $D \in \mathbb{R}^{n \times d}$. We assume that $x_q, x^{(i)}$ ($i = 1, \dots, n$) are sampled independently from $P$. Computing the vector-matrix product $D u_q$ requires looking at only the columns of $D$ corresponding to the non-zero entries of $u_q$, given by $N_q = \{j \mid 1 \leq j \leq d, (u_q)_j \neq 0\}$. Furthermore, in each of those columns we only need to look at the non-zero entries. This can be implemented efficiently in practice by storing the non-zero indices for each column in independent lists, as depicted in Figure 2.

The number of FLOPs incurred is given by
$$F(D, u_q) = \sum_{j \in N_q} \; \sum_{i : v_{ij} \neq 0} 1 = \sum_{i=1}^n \sum_{j=1}^d \mathbb{I}[(u_q)_j \neq 0 \wedge v_{ij} \neq 0].$$
Taking the expectation on both sides w.r.t. $x_q, x^{(i)}$ and using the independence of the data, we get
$$\mathbb{E}[F(D, u_q)] = \sum_{i=1}^n \sum_{j=1}^d P((u_q)_j \neq 0) \, P(v_{ij} \neq 0) = n \sum_{j=1}^d P(f_\theta(X)_j \neq 0)^2 \qquad (1)$$
where $X \sim P$. Since the expected number of FLOPs scales linearly with the number of vectors in the database, a more suitable quantity is the mean-FLOPs-per-row defined as
$$\mathcal{F}(f_\theta, P) = \mathbb{E}[F(D, u_q)]/n = \sum_{j=1}^d P(f_\theta(X)_j \neq 0)^2 = \sum_{j=1}^d p_j^2. \qquad (2)$$
Note that for a fixed amount of sparsity $\sum_{j=1}^d p_j = dp$, this is minimized when each of the dimensions is non-zero with equal probability $p_j = p$, $\forall 1 \leq j \leq d$, upon which $\mathcal{F}(f_\theta, P) = d p^2$ (so that, as a regularizer, $\mathcal{F}(f_\theta, P)$ will in turn encourage such a uniform distribution across dimensions). Given such a uniform distribution, compared to dense multiplication, which has a complexity of $O(d)$ per row, we thus get an improvement by a factor of $1/p^2$ ($p < 1$). Thus when only a fraction $p$ of all the entries is non-zero, and evenly distributed across all the columns, we achieve a speedup of $1/p^2$. Note that independence of the non-zero indices is not necessary due to the linearity of expectation – in fact, features from a neural network are rarely uncorrelated in practice.

FLOPs versus speedup. While FLOPs reduction is a reasonable measure of speedup on primitive processors with limited parallelization and cache memory, FLOPs is not an accurate measure of actual speedup when it comes to mainstream commercial processors such as Intel's CPUs and Nvidia's GPUs, as the latter have cache and SIMD (Single-Instruction Multiple Data) mechanisms highly optimized for dense matrix multiplication, while sparse matrix multiplication is inherently less tailored to their cache and SIMD design (Sohoni et al., 2019). On the other hand, there have been threads of research on hardware with cache and parallelization tailored to sparse operations that show speedup proportional to the FLOPs reduction (Han et al., 2016; Parashar et al., 2017). Modeling the cache and other hardware aspects can potentially lead to better performance but less generality, and is left to future work." }, { "heading": "4 OUR APPROACH", "text": "The $\ell_1$ regularization is the most common approach to induce sparsity. However, as we will also verify experimentally, it does not ensure a uniform distribution of the non-zeros across all the dimensions, which is required for the optimal speed-up. Therefore, we resort to incorporating the actual FLOPs incurred directly into the loss function, which will lead to an optimal trade-off between search time and accuracy. The FLOPs $\mathcal{F}(f_\theta, P)$, being a discontinuous function of the model parameters, is hard to optimize, and hence we will instead optimize a continuous relaxation of it.

Denote by $\ell(f_\theta, D)$ any metric loss on $D$ for the embedding function $f_\theta$. The goal in this paper is to minimize the loss while controlling the expected FLOPs $\mathcal{F}(f_\theta, P)$ defined in Eqn. 2. Since the distribution $P$ is unknown, we use the samples to get an estimate of $\mathcal{F}(f_\theta, P)$. Recall the empirical fraction of non-zero activations $\bar{p}_j = \frac{1}{n} \sum_{i=1}^n \mathbb{I}[f_\theta(x_i)_j \neq 0]$, which converges in probability to $p_j$. Therefore, with a slight abuse of notation, define $\mathcal{F}(f_\theta, D) = \sum_{j=1}^d \bar{p}_j^2$, which is a consistent estimator for $\mathcal{F}(f_\theta, P)$ based on the samples $D$. Note that $\mathcal{F}$ denotes either the population or empirical quantity depending on whether the functional argument is $P$ or $D$. We now consider the following regularized loss,
$$\min_{\theta \in \Theta} \; \underbrace{\ell(f_\theta, D) + \lambda \mathcal{F}(f_\theta, D)}_{L(\theta)} \qquad (3)$$
for some parameter $\lambda$ that controls the FLOPs-accuracy tradeoff.
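As a concrete sketch (ours, with hypothetical toy inputs), the empirical mean-FLOPs-per-row $\mathcal{F}(f_\theta, D) = \sum_j \bar{p}_j^2$ can be estimated from a batch of embeddings as follows:

```python
import numpy as np

def mean_flops_per_row(embeddings):
    # embeddings: (n, d) array of activations f_theta(x_i).
    # p_bar_j = fraction of rows with a non-zero entry in dimension j.
    p_bar = (embeddings != 0).mean(axis=0)
    return float(np.sum(p_bar ** 2))

# Toy check: with every dimension active with equal probability p,
# the estimate approaches d * p**2.
x = np.maximum(np.random.randn(10000, 128) - 1.0, 0.0)  # ReLU(g - 1)
print(mean_flops_per_row(x))
```

Note the indicator makes this estimate piecewise-constant in $\theta$ — the issue addressed next.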
The regularized loss poses a further hurdle, as $\bar{p}_j$ and consequently $\mathcal{F}(f_\theta, D)$ are not continuous due to the presence of the indicator functions. We thus compute the following continuous relaxation. Define the mean absolute activation $a_j = \mathbb{E}[|f_\theta(X)_j|]$ and its empirical version $\bar{a}_j = \frac{1}{n} \sum_{i=1}^n |f_\theta(x_i)_j|$, which is the $\ell_1$ norm of the activations (scaled by $1/n$), in contrast to the $\ell_0$ quasi-norm in the FLOPs calculation. Define the relaxations $\widetilde{\mathcal{F}}(f_\theta, P) = \sum_{j=1}^d a_j^2$ and its consistent estimator $\widetilde{\mathcal{F}}(f_\theta, D) = \sum_{j=1}^d \bar{a}_j^2$. We propose to minimize the following relaxation, which can be optimized using any off-the-shelf stochastic gradient descent optimizer,
$$\min_{\theta \in \Theta} \; \underbrace{\ell(f_\theta, D) + \lambda \widetilde{\mathcal{F}}(f_\theta, D)}_{\widetilde{L}(\theta)}. \qquad (4)$$

Sparse retrieval and re-ranking. During inference, the sparse vector of a query image is first obtained from the learned model and the nearest neighbour is searched in a database of sparse vectors forming a sparse matrix. An efficient algorithm to compute the dot product of the sparse query vector with the sparse matrix is presented in Figure 1. This consists of first building a list of the non-zero values and their positions in each column. As motivated in Section 3, given a sparse query vector, it is sufficient to only iterate through the non-zero values and the corresponding columns. Next, a filtering step is performed, keeping only scores greater than a specified threshold. Top-k candidates from the remaining items are returned. The complete algorithm is presented in Algorithm 1. In practice, the sparse retrieval step is not sufficient to ensure good performance. The top-k shortlisted candidates are therefore further re-ranked using dense embeddings, as done in SDH. This step involves multiplication of a small dense matrix with a dense vector. The number of shortlisted candidates k is chosen such that the dense re-ranking time does not dominate the total time.

Comparison to SDH (Jeong and Song, 2018). It is instructive to contrast our approach with that of SDH (Jeong and Song, 2018). In contrast to the binary hashes in SDH, our approach learns sparse real-valued representations. SDH uses a min-cost-max-flow approach in one of the training steps, while we train ours only using SGD. During inference in SDH, a shortlist of candidates is first created by considering the examples in the database that have hashes with non-empty intersections with the query hash. The candidates are further re-ranked using the dense embeddings. The shortlist in our approach, on the other hand, is constituted of the examples with the top scores from the sparse embeddings.

Comparison to unrelaxed FLOPs regularizer. We provide an experimental comparison of our continuous relaxation based FLOPs regularizer to its unrelaxed variant, showing that the performance of the two are markedly similar. Setting up this experiment requires some analytical simplifications based on recent deep neural network analyses. We first recall recent results that indicate that the output of a batch norm layer nearly follows a Gaussian distribution (Santurkar et al., 2018), so that in our context, we could make the simplifying approximation that $f_\theta(X)_j$ (where $X \sim P$) is distributed as $\rho(Y)$, where $Y \sim \mathcal{N}(\mu_j(\theta), \sigma_j^2(\theta))$ and $\rho$ is the ReLU activation used at the neural network output. We have modelled the pre-activation as a Gaussian distribution with mean and variance depending on the model parameters $\theta$.
We experimentally verify that this assumption holds by minimizing the KS distance (Massey Jr, 1951) between the CDF of $\rho(Y)$, where $Y \sim \mathcal{N}(\mu, \sigma^2)$, and the empirical CDF of the activations. The KS distance is minimized w.r.t. $\mu, \sigma$. Figure 3a shows the empirical CDF and the fitted CDF of $\rho(Y)$ for two different architectures.

While $\mu_j(\theta), \sigma_j(\theta)$ ($1 \leq j \leq d$) cannot be tuned independently due to their dependence on $\theta$, in practice, the huge representational capacity of neural networks allows $\mu_j(\theta)$ and $\sigma_j(\theta)$ to be tuned almost independently. We consider a toy setting with 2-d embeddings. For a tractable analysis, we make the simplifying assumption that, for $j = 1, 2$, $f_\theta(X)_j$ is distributed as $\mathrm{ReLU}(Y)$ where $Y \sim \mathcal{N}(\mu_j, \sigma_j^2)$, thus losing the dependence on $\theta$.

We now analyze how minimizing the continuous relaxation $\widetilde{\mathcal{F}}(f_\theta, P)$ compares to minimizing $\mathcal{F}(f_\theta, P)$. Note that we consider the population quantities here instead of the empirical quantities, as they are more amenable to theoretical analyses due to the existence of closed form expressions. We also consider the $\ell_1$ regularizer as a baseline. We initialize with $(\mu_1, \mu_2, \sigma_1, \sigma_2) = (-1/4, -1.3, 1, 1)$, and minimize the three quantities via gradient descent with infinitesimally small learning rates. For this contrastive analysis, we have not considered the effect of the metric loss. Note that while the discontinuous empirical quantity $\mathcal{F}(f_\theta, D)$ cannot be optimized via gradient descent, it is possible to do so for its population counterpart $\mathcal{F}(f_\theta, P)$, since it is available in closed form as a continuous function when making Gaussian assumptions. The details of computing the gradients can be found in Appendix A.

We start with activation probabilities $(p_1, p_2) = (0.4, 0.1)$, and plot the trajectory taken when performing gradient descent, shown in Figure 3b. Without the effect of the metric loss, the probabilities are expected to go to zero, as observed in the plot. It can be seen that, in contrast to the $\ell_1$-regularizer, $\mathcal{F}$ and $\widetilde{\mathcal{F}}$ both tend to sparsify the less sparse activation ($p_1$) at a faster rate, which corroborates the fact that they encourage an even distribution of non-zeros.

[Figure 3: (a) The CDF of $\rho(Y)$ fitted to minimize the KS distance to the empirical CDF of the activations for two different architectures; the CDF of the activations (red) closely resembles the CDF of $\rho(Y)$ (blue), where $Y$ is a Gaussian random variable. (b) The trajectory of the activation probabilities when minimizing the respective regularizers; $\mathcal{F}$ and $\widetilde{\mathcal{F}}$ behave similarly, sparsifying the less sparse activation at a faster rate when compared to the $\ell_1$ regularizer.]

$\widetilde{\mathcal{F}}$ promotes orthogonality. We next show that, when the embeddings are normalized to have a unit norm, as is typically done in metric learning, minimizing $\widetilde{\mathcal{F}}(f_\theta, D)$ is equivalent to promoting orthogonality on the absolute values of the embedding vectors. Let $\|f_\theta(x)\|_2 = 1$, $\forall x \in X$; we then have the following:
$$\widetilde{\mathcal{F}}(f_\theta, D) = \sum_{j=1}^d \left( \frac{1}{n} \sum_{i=1}^n |f_\theta(x_i)_j| \right)^2 = \frac{1}{n^2} \sum_{p, q \in [1:n]} \langle |f_\theta(x_p)|, |f_\theta(x_q)| \rangle \qquad (5)$$
$\widetilde{\mathcal{F}}(f_\theta, D)$ is minimized when the vectors $\{|f_\theta(x_i)|\}_{i=1}^n$ are orthogonal. Metric learning losses aim at minimizing the inter-class dot product, whereas the FLOPs regularizer aims at minimizing pairwise dot products irrespective of the class, leading to a tradeoff between sparsity and accuracy.
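Both the relaxed regularizer and the identity in Eqn. 5 are easy to state in code; a small NumPy sketch (ours, with random unit-norm toy vectors, trivially portable to an autodiff framework) that also verifies Eqn. 5 numerically:

```python
import numpy as np

def flops_relaxation(embeddings):
    # F_tilde(f, D) = sum_j (mean_i |f(x_i)_j|)^2; differentiable,
    # unlike the indicator-based F(f, D).
    a_bar = np.abs(embeddings).mean(axis=0)
    return float(np.sum(a_bar ** 2))

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 16))
f /= np.linalg.norm(f, axis=1, keepdims=True)      # unit-norm rows

# Eqn. 5: F_tilde equals the mean pairwise dot product of |f(x_p)|.
lhs = flops_relaxation(f)
rhs = float(np.sum(np.abs(f) @ np.abs(f).T)) / f.shape[0] ** 2
assert np.isclose(lhs, rhs)
```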
This approach of pushing the embeddings apart bears some resemblance to the idea of spreading vectors (Sablayrolles et al., 2019), where an entropy based regularizer is used to uniformly distribute the embeddings on the unit sphere, albeit without considering any sparsity. Minimizing the pairwise dot products helps in reducing FLOPs, as is illustrated by the following toy example. Consider a set of $d$ vectors $\{v_i\}_{i=1}^d \subset \mathbb{R}^d$ (here $n = d$) satisfying $\|v_i\|_2 = 1$, $\forall i \in [1:d]$. Then $\sum_{p, q \in [1:d]} \langle |v_p|, |v_q| \rangle$ is minimized when $v_p = e_p$, where $e_p$ is a one-hot vector with the $p$-th entry equal to 1 and the rest 0. The FLOPs regularizer thus tends to spread out the non-zero activations in all the dimensions, producing balanced embeddings. This simple example also demonstrates that when the number of classes in the training set is smaller than or equal to the number of dimensions $d$, a trivial embedding that minimizes the metric loss and also achieves a small number of FLOPs is $f_\theta(x) = e_y$, where $y$ is the true label for $x$. This is equivalent to predicting the class of the input instance. The caveat with such embeddings is that they might not be semantically meaningful beyond the specific supervised task, and will naturally hurt performance on unseen classes, and on tasks where the representation itself is of interest. In order to avoid such a collapse in our experiments, we ensure that the embedding dimension is smaller than the number of training classes. Furthermore, as recommended by Sablayrolles et al. (2017), we perform all our evaluations on unseen classes.

Exclusive lasso. Also known as the $\ell_{1,2}$-norm, in previous works it has been used to induce competition (or exclusiveness) among features in the same group. More formally, consider $d$ features indexed by $\{1, \dots, d\}$, and groups $g \subset \{1, \dots, d\}$ forming a set of groups $G \subset 2^{\{1, \dots, d\}}$ (the powerset of $\{1, \dots, d\}$). Let $w$ denote the weight vector for a linear classifier. The exclusive lasso regularizer is defined as
$$\Omega_G(w) = \sum_{g \in G} \|w_g\|_1^2,$$
where $w_g$ denotes the sub-vector $(w_i)_{i \in g}$ corresponding to the indices in $g$. $G$ can be used to induce various kinds of structural properties. For instance, $G$ can consist of groups of correlated features. The regularizer prevents feature redundancy by selecting only a few features from each group.

Our proposed FLOPs based regularizer has the same form as the exclusive lasso. Therefore the exclusive lasso applied to the batch of activations, with the groups being the columns of the activation matrix (and rows corresponding to different inputs), is equivalent to the FLOPs regularizer. It can be said that, within each activation column, the FLOPs regularizer induces competition between different input examples for having a non-zero activation." }, { "heading": "5 EXPERIMENTS", "text": "We evaluate our proposed approach on a large scale metric learning dataset: Megaface (Kemelmacher-Shlizerman et al., 2016), used for face recognition. This is a much more fine-grained retrieval task (with 85k classes for training) compared to the datasets used by Jeong and Song (2018). This dataset also satisfies our requirement of the number of classes being orders of magnitude higher than the dimension of the sparse embedding. As discussed in Section 4, a small number of classes during training can lead the model to simply learn an encoding of the training classes and thus not generalize to unseen classes.
Face recognition datasets avoid this situation by virtue of the huge number of training classes and a balanced distribution of examples across all the classes.

Following the standard protocol for evaluation on the Megaface dataset (Kemelmacher-Shlizerman et al., 2016), we train on a refined version of the MSCeleb-1M (Guo et al., 2016) dataset released by Deng et al. (2018), consisting of 1 million images spanning 85k classes. We evaluate with 1 million distractors from the Megaface dataset and 3.5k query images from the Facescrub dataset (Ng and Winkler, 2014), which were not seen during training.

Network architecture. We experiment with two architectures: MobileFaceNet (Chen et al., 2018) and ResNet-101 (He et al., 2016). We use ReLU activations in the embedding layer for MobileFaceNet, and SThresh activations (defined below) for ResNet. The activations are $\ell_2$-normalized to produce an embedding on the unit sphere, and used to compute the Arcface loss (Deng et al., 2018). We learn 1024-dimensional sparse embeddings for the $\ell_1$ and $\widetilde{\mathcal{F}}$ regularizers, and 128- and 512-dimensional dense embeddings as baselines. All models were implemented in Tensorflow (Abadi et al., 2016) with the sparse retrieval algorithm implemented in C++. The re-ranking step used 512-d dense embeddings.

Activation function. In practice, having a non-linear activation at the embedding layer is crucial for sparsification. Layers with activations such as ReLU are easier to sparsify due to the bias parameter in the layer before the activation (linear or batch norm), which acts as a direct control knob for the sparsity. More specifically, $\mathrm{ReLU}(x - \lambda)$ can be made more (less) sparse by increasing (decreasing) the components of $\lambda$, where $\lambda$ is the bias parameter of the previous linear layer. In this paper we consider two types of activations: $\mathrm{ReLU}(x) = \max(x, 0)$, and the soft thresholding operator $\mathrm{SThresh}(x) = \mathrm{sgn}(x)\max(|x| - 1/2, 0)$ (Boyd and Vandenberghe, 2004). ReLU activations always produce positive values, whereas soft thresholding can produce negative values as well.

Practical considerations. In practice, setting a large regularization weight $\lambda$ from the beginning is harmful for training. Sparsifying too quickly using a large $\lambda$ leads to many dead activations (saturated to zero) in the embedding layer and the model getting stuck in a local minimum. Therefore, we use an annealing procedure and gradually increase $\lambda$ throughout the training using a regularization weight schedule $\lambda(t) : \mathbb{N} \mapsto \mathbb{R}$ that maps the training step to a real valued regularization weight. In our experiments we choose a $\lambda(t)$ that increases quadratically as $\lambda(t) = \lambda (t/T)^2$ until step $t = T$, where $T$ is the threshold step beyond which $\lambda(t) = \lambda$.

Baselines. We compare our proposed $\widetilde{\mathcal{F}}$-regularizer with multiple baselines: exhaustive search with dense embeddings, sparse embeddings using $\ell_1$ regularization, Sparse Deep Hashing (SDH) (Jeong and Song, 2018), and PCA, LSH, and PQ applied to the 512-dimensional dense embeddings from both architectures. We train the SDH model using the aforementioned architectures for 512-dimensional embeddings, with the number of active hash bits $k = 3$. We use numpy (with efficient MKL optimizations in the backend) for the matrix multiplication required for exhaustive search in the dense and PCA baselines. We use the CPU version of the Faiss (Johnson et al., 2017) library for LSH and PQ (we use the IVF-PQ index from Faiss).

Further details on the training hyperparameters and the hardware used can be found in Appendix B."
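As a sketch of the two training details above (our reading of the stated formulas; function names are illustrative, not from the paper's code):

```python
import numpy as np

def sthresh(x, b=0.5):
    # SThresh(x) = sgn(x) * max(|x| - b, 0); b = 1/2 in the paper.
    return np.sign(x) * np.maximum(np.abs(x) - b, 0.0)

def reg_weight(step, lam, T):
    # Quadratic warm-up: lambda * (t/T)^2 until step T, then constant lambda.
    return lam * min(step / T, 1.0) ** 2
```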
}, { "heading": "5.1 RESULTS", "text": "We report the recall and the time-per-query for various hyperparameters of our proposed approach and the baselines, yielding trade-off curves. The reported times include the time required for re-ranking. The trade-off curves for MobileNet and ResNet are shown in Figures 4a and 4c respectively. We observe that while vanilla `1 regularization is an improvement by itself for some hyperparameter settings, the F̃ regularizer is a further improvement, and yields the most optimal trade-off curve. SDH has a very poor speed-accuracy trade-off, which is mainly due to the explosion in the number of shortlisted candidates with increasing number of active bits leading to an increase in the retrieval time. On the other hand, while having a small number of active bits is faster, it leads to a smaller recall. For the other baselines we notice the usual order of performance, with PQ having the best speed-up compared to LSH and PCA. While dimensionality reduction using PCA leads to some speed-up for relatively high dimensions, it quickly wanes off as the dimension is reduced even further.\nWe also report the sub-optimality ratio Rsub = F(fθ,D)/dp̄2 computed over the dataset D, where p̄ = 1d ∑d j=1 p̄j is the mean activation probability estimated on the test data. Notice that Rsub ≥ 1, and the optimal Rsub = 1 is achieved when p̄j = p̄, ∀1 ≤ j ≤ d, that is when the non-zeros are evenly distributed across the dimensions. The sparsity-vs-suboptimality plots for MobileNet and ResNet are shown in Figures 4a and 4c respectively. We notice that the F̃ -regularizer yields values of\nRsub closer to 1 when compared to the `1-regularizer. For the MobileNet architecture we notice that the `1 regularizer is able to achieve values of R close to that of F̃ in the less sparser region. However, the gap increases substantially with increasing sparsity. For the ResNet architecture on the other hand the `1 regularizer yields extremely sub-optimal embeddings in all regimes. The F̃ regularizer is therefore able to produce more balanced distribution of non-zeros.\nThe sub-optimality is also reflected in the recall values. The gap in the recall values of the `1 and F̃ models is much higher when the sub-optimality gap is higher, as in the case of ResNet, while it is small when the sub-optimality gap is smaller as in the case of MobileNet. This shows the significance of having a balanced distribution of non-zeros. Additional results, including results without the re-ranking step and performance on CIFAR-100 can be found in Appendix C." }, { "heading": "6 CONCLUSION", "text": "In this paper we proposed a novel approach to learn high dimensional embeddings with the goal of improving efficiency of retrieval tasks. Our approach integrates the FLOPs incurred during retrieval into the loss function as a regularizer and optimizes it directly through a continuous relaxation. We provide further insight into our approach by showing that the proposed approach favors an even distribution of the non-zero activations across all the dimensions. We experimentally showed that our approach indeed leads to a more even distribution when compared to the `1 regularizer. We compared our approach to a number of other baselines and showed that it has a better speed-vs-accuracy trade-off. Overall we were able to show that sparse embeddings can be around 50× faster compared to dense embeddings without a significant loss of accuracy." 
}, { "heading": "Acknowledgements", "text": "We thank Hong-You Chen for helping in running some baselines during the early stages of this work. This work has been partially funded by the DARPA D3M program and the Toyota Research Institute. Toyota Research Institute (\"TRI\") provided funds to assist the authors with their research but this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity." }, { "heading": "Appendix", "text": "" }, { "heading": "A GRADIENT COMPUTATIONS FOR ANALYTICAL EXPERIMENTS", "text": "As described in the main text, for purposes of an analytical toy experiment, we consider a simplified setting with 2-d embeddings with the jth (j = 1, 2) activation being distributed as (Yj)+ = ReLU(Yj) where Yj ∼ N (µj , σj). We assume µj ≤ 0, which is typical for sparse activations (pj ≤ 0.5). Then the three compared regularizers are F(pθ,P) = ∑2 j=1 P((Yj)+ > 0)2,\nF̃(pθ,P) = ∑2 j=1 E[(Yj)+]2, and `1(pθ,P) = ∑2 j=1 E[(Yj)+]. Computing the regularizer gradients thus boils down to computing the gradients of P((Yj)+ > 0)2,E[(Yj)+]2, and E[(Yj)+] as provided in the following lemmas. We hide the subscript j for brevity, as computations are similar for all j. Lemma 1.\nE[Y+] = σ√ 2π exp\n( − µ 2\n2σ2\n) + µ ( 1− Φ ( −µ σ )) , (6)\nand, P(Y+ > 0) = 1− Φ ( −µ σ ) , (7) where Φ denotes the cdf of the Gaussian distribution.\nProof of Lemma 1. The proof is based on standard Gaussian identities. E[Y+] = ∫ ∞\n0 x√ 2πσ2 exp\n( − (x− µ) 2\n2σ2\n) dx = ∫ ∞ −µ x+ µ√ 2πσ2 exp ( − x 2 2σ2 ) dx\n= ∫ ∞ −µ x√ 2πσ2 exp ( − x 2 2σ2 ) dx+ ∫ ∞ −µ µ√ 2πσ2 exp ( − x 2 2σ2 ) dx\n= σ√ 2π exp\n( − µ 2\n2σ2\n) + µ ( 1− Φ ( −µ σ ))\nP(Yj > 0) = ∫ ∞\n0 1√ 2πσ2 exp\n( − (x− µ) 2\n2σ2\n) dx = ∫ ∞ −µ/σ 1√ 2π exp ( −x 2 2 ) dx\n= 1− Φ ( −µ σ )\nLemma 2. ∂\n∂µ P(Y+ > 0) = −\n∂ ∂µ Φ ( −µ σ ) = 1 σ √ 2π exp ( − µ 2 2σ2 ) . (8)\n∂\n∂σ P(Y+ > 0) = −\n∂ ∂σ Φ ( −µ σ ) = − µ σ2 √ 2π exp ( − µ 2 2σ2 ) . (9)\nProof of Lemma 2. Follows directly from the statement by standard differentiation.\nLemma 3. ∂\n∂µ E[Y+] = 1− Φ ( −µ σ ) . (10)\n∂\n∂σ E[Y+] = 1√ 2π exp\n( − µ 2\n2σ2\n) . (11)\nProof of Lemma 3.\n∂\n∂µ E[Y+] = −\nµ σ √ 2π exp\n( − µ 2\n2σ2\n) + ∂\n∂µ\n[ µ ( 1− Φ ( −µ σ ))] = 1− Φ ( −µ σ )\nwhere the last step follows from Lemma 2.\n∂\n∂σ E[Y+] = 1√ 2π exp\n( − µ 2\n2σ2\n) + µ2\nσ2 √ 2π exp\n( − µ 2\n2σ2\n) + ∂\n∂σ\n[ µ ( 1− Φ ( −µ σ ))] =\n1√ 2π exp\n( − µ 2\n2σ2 ) where the last step follows from Lemma 2.\nLemma 4.\n∂\n∂µ E[Y+]2 = 2E[Y+]\n( 1− Φ ( −µ σ )) . (12)\n∂\n∂σ E[Y+]2 = 2E[Y+] 1√ 2π exp\n( − µ 2\n2σ2\n) . (13)\nProof of Lemma 4. Follows directly from Lemma 3.\nLemma 5.\n∂\n∂µ P(Y+ > 0)2 = 2P(Y+ > 0)\n1 σ √ 2π exp\n( − µ 2\n2σ2\n) . (14)\n∂\n∂σ P(Y+ > 0)2 = −2P(Y+ > 0)\nµ σ2 √ 2π exp\n( − µ 2\n2σ2\n) . (15)\nProof of Lemma 5. Follows directly from Lemma 2." }, { "heading": "B EXPERIMENTAL DETAILS", "text": "All images were resized to size 112× 112 and aligned using a pre-trained aligner3. For the Arcloss function, we used the recommended parameters of margin m = 0.5 and temperature s = 64. We trained our models on 4 NVIDIA Tesla V-100 GPUs using SGD with a learning rate of 0.001, momentum of 0.9. Both the architectures were trained for a total of 230k steps, with the learning rate being decayed by a factor of 10 after 170k steps. We use a batch size of 256 and 64 per GPU for MobileFaceNet for ResNet respectively.\nPre-training in SDH is performed in the same way as described above. The hash learning step is trained on a single GPU with a learning rate of 0.001. 
The ResNet model is trained for 200k steps with a batch size of 64, and the MobileFaceNet model is trained for 150k steps with a batch size of 256. We set the number of active bits $k = 3$ and a pairwise cost of $p = 0.1$." }, { "heading": "Hyper-parameters for MobileNet models.", "text": "1. The regularization parameter $\lambda$ for the $\widetilde{\mathcal{F}}$ regularizer was varied as 200, 300, 400, 600.

2. The regularization parameter $\lambda$ for the $\ell_1$ regularizer was varied as 1.5, 2.0, 2.7, 3.5.

3. The PCA dimension was varied as 64, 96, 128, 256.

4. The number of LSH bits was varied as 512, 768, 1024, 2048, 3072.

5. For IVF-PQ from the faiss library, the following parameters were fixed: nlist=4096, M=64, nbit=8, and nprobe was varied as 100, 150, 250, 500, 1000." }, { "heading": "Hyper-parameters for ResNet baselines.", "text": "1. The regularization parameter $\lambda$ for the $\widetilde{\mathcal{F}}$ regularizer was varied as 50, 100, 200, 630.

2. The regularization parameter $\lambda$ for the $\ell_1$ regularizer was varied as 2.0, 3.0, 5.0, 6.0.

3. The PCA dimension was varied as 48, 64, 96, 128.

4. The number of LSH bits was varied as 256, 512, 768, 1024, 2048.

5. For IVF-PQ, the following parameters were the same as in MobileNet: nlist=4096, M=64, nbit=8. nprobe was varied as 50, 100, 150, 250, 500, 1000.

Selecting top-k. We use the following heuristic to create the shortlist of candidates after the sparse ranking step. We first shortlist all candidates with a score greater than some confidence threshold. For our experiments we set the confidence threshold equal to 0.25. If the size of this shortlist is larger than $k$, it is further shrunk by considering the top $k$ scorers. For all our experiments we set $k = 1000$. This heuristic avoids sorting the whole array, which can be a bottleneck in this case. The parameters are chosen such that the time required for the re-ranking step does not dominate the total retrieval time." }, { "heading": "Hardware.", "text": "1. All models were trained on 4 NVIDIA Tesla V-100 GPUs with 16G of memory.

2. System Memory: 256G.

3. CPU: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz.

4. Number of threads: 32.

5. Cache: L1d cache 32K, L1i cache 32K, L2 cache 256K, L3 cache 46080K.

All timing experiments were performed on a single thread in isolation." }, { "heading": "C ADDITIONAL RESULTS", "text": "" }, { "heading": "C.1 RESULTS WITHOUT RE-RANKING", "text": "Figure 5 shows the comparison of the approaches with and without re-ranking. We notice that there is a significant dip in the performance without re-ranking, with the gap being smaller for ResNet with FLOPs regularization. We also notice that the FLOPs regularizer has a better trade-off curve in the no re-ranking setting as well." }, { "heading": "C.2 FPR AND TPR CURVES", "text": "In the main text we have reported the recall@1, which is a standard face recognition metric. This however is not sufficient to ensure good face verification performance. The goal in face verification is to predict whether two faces are similar or dissimilar. A natural metric in such a scenario is the FPR-TPR curve. Standard face verification datasets include LFW (Huang et al., 2008) and AgeDB (Moschoglou et al., 2017). We produce embeddings using our trained models, and use them to compute similarity scores (dot product) for pairs of images. The similarity scores are used to compute the FPR-TPR curves, which are shown in Figure 6.
We notice that for curves with similar probability of activation $p$, the FLOPs regularizer performs better compared to $\ell_1$. This demonstrates the efficient utilization of all the dimensions in the case of the FLOPs regularizer, which helps in learning richer representations for the same sparsity.

We also observe that the gap between the sparse and dense models is smaller for ResNet, thus suggesting that the ResNet model learns better representations due to increased model capacity. Lastly, we also note that the gap between the dense and sparse models is smaller for LFW compared to AgeDB, thus corroborating the general consensus that LFW is a relatively easier dataset." }, { "heading": "C.3 CIFAR-100 RESULTS", "text": "We also experimented with the CIFAR-100 dataset (Krizhevsky et al., 2009) consisting of 60000 examples and 100 classes. Each class consists of 500 train and 100 test examples. We compare the $\ell_1$ and FLOPs regularized approaches with the sparse deep hashing approach. All models were trained using the triplet loss (Schroff et al., 2015) and embedding dimension $d = 64$. For the dense and SDH baselines, no activation was used on the embeddings. For the $\ell_1$ and FLOPs regularized models we used the SThresh activation. Similar to Jeong and Song (2018), the train-test and test-test precision values have been reported in Table 1. Furthermore, the reported results are without re-ranking. CIFAR-100 being a small dataset, we only report the FLOPs-per-row, as time measurements can be misleading. In our experiments, we achieved slightly higher precisions for the dense model compared to (Jeong and Song, 2018). We notice that our models use less than 50% of the computation compared to SDH, albeit with a slightly lower precision." } ]
2020
MINIMIZING FLOPS TO LEARN EFFICIENT SPARSE REPRESENTATIONS
SP:962c445bf9fa4cc39b00aa1a57073320ba145865
[ "This paper propose a heuristic algorithm for deciding which random variables to be Gaussianized early in flow-based generative models. The proposed algorithm involves first training a flow without multi-scale training, for example, 32*32*c - 32*32*c - 32*32*c. Then, it computes the logdet term for each variable at each layer. It then spatially partition the first flow block by two halves of shape 16*16*2c based on max-pooling the logdet term. Then it recursively Gaussianize one half, and partition the other half as 8*8*4c, still using the pre-computed logdet tensors (Ld in the paper). After partitioning, they train a multi-scale model with the learned partition.", "This paper presents a new multi-scale architecture for flow-based generative models. Unlike prior work on multi-scale flow architectures which use fixed dimension-splitting heuristics, the proposed approach learns which dimensions to process further. The features are chosen for further processing based on a heuristic motivated by each feature's contribution to the total likelihood. This contribution is given by the each features' contribution to the log-determinant term in the change of variables formula. The model is trained in a two-step process. First a flow model with no multi-scale architecture is trained. Then each feature's importance is calculated based on its contribution to the log-determinant. Then these scores are used to rank the features for the second, multi-scale model which is retrained from scratch. The authors demonstrate the performance of their approach on density modeling on standard image datasets. They demonstrate an improvement over the standard real-nvp architecture." ]
Deep generative modeling using flows has gained popularity owing to the tractable exact log-likelihood estimation with efficient training and synthesis process. However, flow models suffer from the challenge of having a high dimensional latent space, same in dimension as the input space. An effective solution to the above challenge, as proposed by Dinh et al. (2016), is a multi-scale architecture, which is based on iterative early factorization of a part of the total dimensions at regular intervals. Prior works on generative flows involving a multi-scale architecture perform the dimension factorization based on static masking. We propose a novel multi-scale architecture that performs data dependent factorization to decide which dimensions should pass through more flow layers. To facilitate the same, we introduce a heuristic based on the contribution of each dimension to the total log-likelihood, which encodes the importance of the dimensions. Our proposed heuristic is readily obtained as part of the flow training process, enabling versatile implementation of our likelihood contribution based multi-scale architecture for generic flow models. We present such implementations for several state-of-the-art flow models and demonstrate improvements in log-likelihood score and sampling quality on standard image benchmarks. We also conduct ablation studies to compare the proposed method with other options for dimension factorization.
[]
[ { "authors": [ "Andrei Atanov", "Alexandra Volokhova", "Arsenii Ashukha", "Ivan Sosnovik", "Dmitry Vetrov" ], "title": "Semiconditional normalizing flows for semi-supervised learning", "venue": "arXiv preprint arXiv:1905.00505,", "year": 2019 }, { "authors": [ "Jens Behrmann", "Will Grathwohl", "Ricky TQ Chen", "David Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Invertible residual networks", "venue": "arXiv preprint arXiv:1811.00995,", "year": 2018 }, { "authors": [ "Ricky TQ Chen", "Jens Behrmann", "David Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Residual flows for invertible generative modeling", "venue": "arXiv preprint arXiv:1906.02735,", "year": 2019 }, { "authors": [ "Xi Chen", "Nikhil Mishra", "Mostafa Rohaninejad", "Pieter Abbeel" ], "title": "Pixelsnail: An improved autoregressive generative model", "venue": "arXiv preprint arXiv:1712.09763,", "year": 2017 }, { "authors": [ "Moustapha Cisse", "Piotr Bojanowski", "Edouard Grave", "Yann Dauphin", "Nicolas Usunier" ], "title": "Parseval networks: Improving robustness to adversarial examples", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Emily L Denton", "Soumith Chintala", "Rob Fergus" ], "title": "Deep generative image models using a laplacian pyramid of adversarial networks", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "Nice: Non-linear independent components estimation", "venue": "arXiv preprint arXiv:1410.8516,", "year": 2014 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real NVP", "venue": "CoRR, abs/1605.08803,", "year": 2016 }, { "authors": [ "Jesse Engel", "Kumar Krishna Agrawal", "Shuo Chen", "Ishaan Gulrajani", "Chris Donahue", "Adam Roberts" ], "title": "Gansynth: Adversarial neural audio synthesis", "venue": null, "year": 1902 }, { "authors": [ "Marc Finzi", "Pavel Izmailov", "Wesley Maddox", "Polina Kirichenko", "Andrew Gordon Wilson" ], "title": "Invertible convolutional networks", "venue": "In Workshop on Invertible Neural Nets and Normalizing Flows, International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Will Grathwohl", "Ricky TQ Chen", "Jesse Betterncourt", "Ilya Sutskever", "David Duvenaud" ], "title": "Ffjord: Free-form continuous dynamics for scalable reversible generative models", "venue": "arXiv preprint arXiv:1810.01367,", "year": 2018 }, { "authors": [ "Alex Graves" ], "title": "Generating sequences with recurrent neural networks", "venue": "arXiv preprint arXiv:1308.0850,", "year": 2013 }, { "authors": [ "Jonathan Ho", "Xi Chen", "Aravind Srinivas", "Yan Duan", "Pieter Abbeel" ], "title": "Flow++: Improving flowbased generative models with variational dequantization and architecture design", "venue": null, "year": 1902 }, { "authors": [ "Emiel Hoogeboom", "Rianne van den Berg", "Max Welling" ], "title": "Emerging convolutions for generative normalizing flows", "venue": "arXiv preprint arXiv:1901.11137,", "year": 2019 }, { "authors": [ "Pavel Izmailov", "Polina Kirichenko", "Marc Finzi", "Andrew Gordon Wilson" ], "title": "Semi-supervised learning 
with normalizing flows", "venue": "In Workshop on Invertible Neural Nets and Normalizing Flows, International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "arXiv preprint arXiv:1812.04948,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Durk P Kingma", "Shakir Mohamed", "Danilo Jimenez Rezende", "Max Welling" ], "title": "Semi-supervised learning with deep generative models", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Manoj Kumar", "Mohammad Babaeizadeh", "Dumitru Erhan", "Chelsea Finn", "Sergey Levine", "Laurent Dinh", "Durk Kingma" ], "title": "Videoflow: A flow-based generative model for video", "venue": null, "year": 1903 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Augustus Odena" ], "title": "Semi-supervised learning with generative adversarial networks", "venue": "arXiv preprint arXiv:1606.01583,", "year": 2016 }, { "authors": [ "Aaron Van den Oord", "Nal Kalchbrenner", "Lasse Espeholt", "Oriol Vinyals", "Alex Graves" ], "title": "Conditional image generation with pixelcnn decoders", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Nal Kalchbrenner", "Koray Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "arXiv preprint arXiv:1601.06759,", "year": 2016 }, { "authors": [ "Scott Reed", "Aäron van den Oord", "Nal Kalchbrenner", "Sergio Gómez Colmenarejo", "Ziyu Wang", "Yutian Chen", "Dan Belov", "Nando de Freitas" ], "title": "Parallel multiscale autoregressive density estimation", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael S. Bernstein", "Alexander C. Berg", "FeiFei Li" ], "title": "Imagenet large scale visual recognition challenge", "venue": "CoRR, abs/1409.0575,", "year": 2014 }, { "authors": [ "Tim Salimans", "Andrej Karpathy", "Xi Chen", "Diederik P. Kingma" ], "title": "Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications", "venue": "CoRR, abs/1701.05517,", "year": 2017 }, { "authors": [ "Benigno Uria", "Iain Murray", "Hugo Larochelle" ], "title": "Rnade: The real-valued neural autoregressive density-estimator", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep Generative Modeling aims to learn the embedded distributions and representations in input (especially unlabelled) data, requiring no/minimal human labelling effort. Learning without knowledge of labels (unsupervised learning) is of increasing importance because of the abundance of unlabelled data and the rich inherent patterns they posses. The representations learnt can then be utilized in a number of downstream tasks such as semi-supervised learning (Kingma et al., 2014; Odena, 2016), synthetic data augmentation and adversarial training (Cisse et al., 2017), text analysis and model based control etc. The repository of deep generative modeling majorly includes Likelihood based models such as autoregressive models (Oord et al., 2016b; Graves, 2013), latent variable models (Kingma & Welling, 2013), flow based models (Dinh et al., 2014; 2016; Kingma & Dhariwal, 2018) and implicit models such as generative adversarial networks (GANs) (Goodfellow et al., 2014). Autoregressive models (Salimans et al., 2017; Oord et al., 2016b;a; Chen et al., 2017) achieve exceptional log-likelihood score on many standard datasets, indicative of their power to model the inherent distribution. But, they suffer from slow sampling process, making them unacceptable to adopt in real world applications. Latent variable models such as variational autoencoders (Kingma & Welling, 2013) tend to better capture the global feature representation in data, but do not offer an exact density estimate. Implicit generative models such as GANs which optimize a generator and a discriminator in a min-max fashion have recently become popular for their ability to synthesize realistic data (Karras et al., 2018; Engel et al., 2019). But, GANs do not offer a latent space suitable for further downstream tasks, nor do they perform density estimation.\nFlow based generative models (Dinh et al., 2016; Kingma & Dhariwal, 2018) perform exact density estimation with fast inference and sampling, due to their parallelizability. They also provide an information rich latent space suitable for many applications. However, the dimension of latent space for flow based generative models is same as the high-dimensional input space, by virtue of bijectivity nature of flows. This poses a bottleneck for flow models to scale with increasing input dimensions due to computational complexity. An effective solution to the above challenge is a multi-scale architecture,\nintroduced by Dinh et al. (2016), which performs iterative early gaussianization of a part of the total dimensions at regular intervals of flow layers. This not only makes the model computational and memory efficient but also aids in distributing the loss function throughout the network for better training. Many prior works including Kingma & Dhariwal (2018); Atanov et al. (2019); Durkan et al. (2019); Kumar et al. (2019) implement multi-scale architecture in their flow models, but use static masking methods for factorization of dimensions. We propose a multi-scale architecture which performs data dependent factorization to decide which dimensions should pass through more flow layers. For the decision making, we introduce a heuristic based on the amount of total log-likelihood contributed by each dimension, which in turn signifies their individual importance. We lay the ground rules for quantitative estimation and qualitative sampling to be satisfied by an ideal factorization method for a multi-scale architecture. 
Since the heuristic in the proposed architecture is obtained as part of the flow training process itself, it can be applied to generic flow models. We present such implementations for flow models based on affine/additive coupling and on ordinary differential equations (ODEs), and achieve quantitative and qualitative improvements. We also perform ablation studies to validate our method. Summing up, the contributions of our research are:

1. A log-determinant based heuristic that quantifies the contribution of each dimension towards the total log-likelihood in a multi-scale architecture.

2. A multi-scale architecture based on the above heuristic that performs data-dependent splitting of dimensions, implemented for several classes of flow models.

3. Quantitative and qualitative analyses of the above implementations, and an ablation study.

To the best of our knowledge, we are the first to propose a data-dependent splitting of dimensions in a multi-scale architecture." }, { "heading": "2 BACKGROUND", "text": "In this section, we describe the functioning of flow-based generative models and of the multi-scale architecture introduced by Dinh et al. (2016)." }, { "heading": "2.1 FLOW-BASED GENERATIVE MODELS", "text": "Let $x$ be a high-dimensional random vector with unknown true distribution $p(x)$. The following formulation applies directly to continuous data and, with pre-processing steps such as dequantization (Uria et al., 2013; Salimans et al., 2017; Ho et al., 2019), to discrete data. Let $z$ be the latent variable with a known standard distribution $p(z)$, such as a standard multivariate Gaussian. Using an i.i.d. dataset $D$, the target is to model $p_\theta(x)$ with parameters $\theta$. A flow $f_\theta$ is defined to be an invertible transformation that maps the observed data $x$ to the latent variable $z$. Since a flow is invertible, the inverse function $T$ maps $z$ back to $x$, i.e.

$$z = f_\theta(x) = T^{-1}(x) \quad\text{and}\quad x = T(z) = f_\theta^{-1}(z). \tag{1}$$

The log-likelihood can be expressed as

$$\log p_\theta(x) = \log p(z) + \log\left|\det\left(\frac{\partial f_\theta(x)}{\partial x^T}\right)\right|, \tag{2}$$

where $\partial f_\theta(x)/\partial x^T$ is the Jacobian of $f_\theta$ at $x$.

The invertible nature of a flow allows it to be composed of other flows of compatible dimensions. In practice, flows are constructed by composing a series of component flows. Let the flow $f_\theta$ be composed of $K$ component flows, i.e. $f_\theta = f_{\theta_K} \circ f_{\theta_{K-1}} \circ \cdots \circ f_{\theta_1}$, and denote the intermediate variables by $y_K, y_{K-1}, \cdots, y_0 = x$. Then the log-likelihood of the composed flow is

$$\log p_\theta(x) = \log p(z) + \log\left|\det\left(\frac{\partial (f_{\theta_K} \circ f_{\theta_{K-1}} \circ \cdots \circ f_{\theta_1})(x)}{\partial x^T}\right)\right| \tag{3}$$

$$= \underbrace{\log p(z)}_{\text{log-latent density}} + \underbrace{\sum_{i=1}^{K} \log\left|\det\left(\partial y_i / \partial y_{i-1}^T\right)\right|}_{\text{log-det}}, \tag{4}$$

which follows from $\det(A \cdot B) = \det(A) \cdot \det(B)$. In our work, we refer to the first term in Equation 4 as the log-latent density and to the second term as the log-determinant (log-det). The reverse path from $z$ to $x$ can be written as a composition of inverse flows, $x = f_\theta^{-1}(z) = f_{\theta_1}^{-1} \circ f_{\theta_2}^{-1} \circ \cdots \circ f_{\theta_K}^{-1}(z)$. Different types of flows satisfying these properties have been constructed (Kingma & Dhariwal, 2018; Dinh et al., 2016; 2014; Behrmann et al., 2018)." }, { "heading": "2.2 MULTI-SCALE ARCHITECTURE", "text": "The multi-scale architecture is a design choice for reducing the latent-space dimensionality of flow models, in which a part of the dimensions is factored out (early-gaussianized) at regular intervals while the other part is exposed to more flow layers. This process is called dimension factorization.
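To ground the notation of Section 2.1 before the factoring is formalized, the sketch below evaluates Equation 4 for a composed flow. It is a minimal numpy sketch: the toy element-wise affine layers are illustrative stand-ins for real coupling layers, and all function names are ours, not from the paper.

```python
import numpy as np

# Toy element-wise affine layer y = exp(log_s) * x + b: its Jacobian is
# diagonal, so log|det| = sum(log_s). Real coupling layers are more
# expressive, but the bookkeeping of Equation 4 is identical.
def make_affine_layer(rng, dim):
    log_s = rng.normal(size=dim) * 0.1
    b = rng.normal(size=dim) * 0.1
    return lambda x: (np.exp(log_s) * x + b, float(np.sum(log_s)))

def flow_log_likelihood(x, layers):
    """log p(x) = log p(z) + sum_i log|det(dy_i/dy_{i-1})|  (Equation 4)."""
    y, total_log_det = x, 0.0
    for f in layers:                       # f_theta = f_K o ... o f_1
        y, log_det = f(y)
        total_log_det += log_det           # accumulate the log-det term
    # Standard-normal latent density: the log-latent-density term.
    log_latent = -0.5 * np.sum(y ** 2) - 0.5 * y.size * np.log(2 * np.pi)
    return log_latent + total_log_det

rng = np.random.default_rng(0)
layers = [make_affine_layer(rng, 8) for _ in range(4)]   # K = 4 components
print(flow_log_likelihood(rng.normal(size=8), layers))
```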
In the problem setting of Section 2.1, the factoring operation can be expressed mathematically as

$$y_0 = x \tag{5}$$
$$(z_{l+1}, y_{l+1}) = f_{\theta_{l+1}}(y_l), \quad l \in \{0, 1, \cdots, K-2\} \tag{6}$$
$$z_K = f_{\theta_K}(y_{K-1}) \tag{7}$$
$$z = (z_1, z_2, \cdots, z_K). \tag{8}$$

Factoring dimensions out at early layers has the benefit of distributing the loss function throughout the network (Dinh et al., 2016) and of optimizing the amount of computation and memory used by the model. We take the multi-scale architecture for flow-based generative models introduced by Dinh et al. (2016) (and later used by state-of-the-art flow models such as Glow (Kingma & Dhariwal, 2018)) as the basis of our work." }, { "heading": "3 LIKELIHOOD CONTRIBUTION BASED MULTISCALE ARCHITECTURE", "text": "In a multi-scale architecture, the network will clearly learn the distribution of the variables exposed to more flow layers better than that of the variables factored out at a finer scale (an earlier layer). The dimension-splitting methods proposed by prior works such as Dinh et al. (2016); Kingma & Dhariwal (2018); Behrmann et al. (2018) are static in nature and do not distinguish between the importance of different dimensions. In this section, we introduce a heuristic that estimates the contribution of each dimension towards the total log-likelihood, and a method that uses this heuristic to decide which dimensions should be factored out at an earlier layer, eventually achieving preferential splitting in the multi-scale architecture. Our approach builds an efficient multi-scale architecture that factors the dimensions at each flow layer such that the local variance in the input space is well captured as the flow progresses and the log-likelihood is maximized. We also describe how our multi-scale architecture can be implemented on top of several standard flow models.

Recall from Equation 4 that the log-likelihood is composed of two terms, the log-latent density and the log-det. The log-latent density depends on the choice of latent distribution, whereas the log-det depends on the modeling of the flow layers. Careful design of the flow layers can therefore maximize the log-determinant and, in turn, the likelihood. The total log-det is simply the sum of the log-det terms contributed by each dimension. Let the dimension of the input space $x$ be $s \times s \times c$, where $s$ is the image height/width and $c$ is the number of channels for image inputs. For the following formulation, assume that no dimensions are gaussianized early, so that we have access to the log-det term of every dimension at each flow layer and the dimension at every intermediate layer remains $s \times s \times c$. We apply a flow $f_\theta$ with $K$ component flows to the $(x, z)$ pair, so that $z = f_\theta(x) = f_{\theta_K} \circ f_{\theta_{K-1}} \circ \cdots \circ f_{\theta_1}(x)$. The intermediate variables are denoted by $y_K, y_{K-1}, \cdots, y_0$, with $y_K = z$ (since no early gaussianization is performed) and $y_0 = x$. The log-det term at layer $l$, $L_d^{(l)}$, is given by

$$[L_d^{(l)}]_{\text{scalar}} = \sum_{i=1}^{l} \log\left|\det\left(\partial y_i / \partial y_{i-1}^T\right)\right|. \tag{9}$$

The log-det of the Jacobian encompasses the contributions of all $s \times s \times c$ dimensions. We decompose it to obtain the individual contribution of each variable (dimension) towards the total log-det (and hence the total log-likelihood).

[Figure 1: (Left) $[L_d^{(l)}]_{s \times s \times c}$, the per-dimension log-det tensor at a flow layer. (Right) It is squeezed to $\frac{s}{2} \times \frac{s}{2} \times 4c$ with local max- and min-pooling operations; black (respectively white) pixels mark dimensions having more (respectively less) log-det locally.]
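For concreteness, here is a minimal numpy sketch of the recursion in Equations 5-8 together with the log-det bookkeeping of Equation 9. The static half-split and the untrained toy layer are illustrative assumptions; LCMA replaces the static split with the log-det-guided split developed next.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_layer(y):
    # Stand-in for one composite flow layer: element-wise rescaling is
    # invertible with log|det| = sum(log s). (Untrained; shapes only.)
    log_s = rng.normal(size=y.shape) * 0.1
    return np.exp(log_s) * y, float(np.sum(log_s))

def multiscale_forward(x, n_scales):
    """Eqs. 5-8: each scale transforms y_l and factors half the dimensions
    out as z_{l+1}; Eq. 9: the per-layer log-dets accumulate into [L_d]."""
    y, zs, log_det = x, [], 0.0
    for _ in range(n_scales):
        y, ld = toy_layer(y)                      # (z_{l+1}, y_{l+1}) = f(y_l)
        log_det += ld
        z_out, y = y[: y.size // 2], y[y.size // 2:]  # static half-split
        zs.append(z_out)                          # early-gaussianized part
    y, ld = toy_layer(y)                          # z_K = f_{theta_K}(y_{K-1})
    log_det += ld
    zs.append(y)
    return np.concatenate(zs), log_det            # z = (z_1, ..., z_K)

z, L_d = multiscale_forward(rng.normal(size=16), n_scales=2)
print(z.shape, L_d)                               # (16,) and Eq. 9's scalar
```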
The log-det term can thus be viewed (with slight abuse of notation) as an $s \times s \times c$ tensor holding one entry per dimension, summed over the flow layers up to $l$:

$$[L_d^{(l)}]_{s \times s \times c} = \sum_{i=1}^{l} [d_{i-1}^{(\alpha,\beta,\gamma)}]_{s \times s \times c}, \quad \alpha, \beta \in \{0, \cdots, s\},\ \gamma \in \{0, \cdots, c\}, \tag{10}$$

$$\text{s.t.} \quad \sum_{\alpha,\beta,\gamma} d_{i-1}^{(\alpha,\beta,\gamma)} = \log\left|\det\left(\partial y_i / \partial y_{i-1}^T\right)\right|. \tag{11}$$

The entries of $[L_d^{(l)}]_{s \times s \times c}$ with higher values correspond to the variables that contribute more towards the total log-likelihood and are hence more valuable for a better flow formulation. We can therefore use the likelihood contribution (in the form of the log-det term) of each dimension as a heuristic for deciding which variables should be gaussianized early in a multi-scale architecture. Ideally, at each flow layer, the variables with larger log-det should be exposed to more flow layers and the ones with smaller log-det should be factored out. In this manner, the variables that capture meaningful representations (and are more valuable from the log-det perspective) are selectively given more capacity to be expressive by passing through multiple flow layers, which leads to improved density estimation. Additionally, for data such as images, the spatial structure should be taken into account when choosing dimensions for early gaussianization. In summary, at every flow layer, an ideal factorization method should:

1. (Quantitative) For efficient density estimation: early-gaussianize the variables having less log-det and expose the ones having more log-det to more flow layers.

2. (Qualitative) For qualitative reconstruction: capture the local variance over the flow layers, i.e. the dimensions exposed to more flow layers should contain representative pixel variables from throughout the whole image.

Keeping these requirements in mind, variants of hybrid factorization techniques can be implemented for different types of flow models with a multi-scale architecture, improving both their density estimation and their qualitative performance. The key requirement is the availability of per-dimension log-det contributions, which can be fulfilled by decomposing the log-det of the Jacobian. We refer to the method as Likelihood Contribution based Multi-scale Architecture (LCMA). The steps of the LCMA implementation for flow models are summarized in Algorithm 1 below. Note that in step 2 of the dimension factorization phase of Algorithm 1, we group the dimensions having locally more/less log-det and only then perform the split. This preserves the local spatial variation of the image in both parts of the factorization, yielding both enhanced density estimation and qualitative reconstruction. Another important observation is that the factorization of dimensions does not occur during training: the decision of which dimensions get factored at each flow layer is fixed before the actual training starts, so the change-of-variables formula still applies. This allows the use of non-invertible operations (e.g. max- and min-pooling) for efficient factorization with the log-det heuristic; a sketch of this grouping is given below, followed by the full procedure in Algorithm 1.
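A hypothetical numpy rendering of the pooling-based grouping: within each 2x2 spatial patch, the locally larger log-det entries continue through the flow while the locally smaller ones are factored out, mirroring steps 2-5 of Algorithm 1 below. The patch size and all names are our assumptions, not the authors' code.

```python
import numpy as np

def split_by_logdet(L):
    """L: per-dimension log-det tensor of shape (s, s, c), cf. Eq. 10.
    Within each non-overlapping 2x2 spatial patch of every channel, keep
    the two locally larger entries in the flow and early-gaussianize the
    two locally smaller ones."""
    s, _, c = L.shape
    # Gather each 2x2 patch: (s/2, 2, s/2, 2, c) -> (s/2, s/2, c, 4).
    patches = L.reshape(s // 2, 2, s // 2, 2, c).transpose(0, 2, 4, 1, 3)
    patches = patches.reshape(s // 2, s // 2, c, 4)
    ranks = patches.argsort(-1).argsort(-1)   # 0 = locally smallest log-det
    keep = ranks >= 2                         # locally larger half of a patch
    # Scatter the boolean mask back to the original (s, s, c) layout.
    keep = keep.reshape(s // 2, s // 2, c, 2, 2).transpose(0, 3, 1, 4, 2)
    return keep.reshape(s, s, c)

L = np.random.default_rng(0).normal(size=(4, 4, 2))
mask = split_by_logdet(L)   # True: route deeper; False: factor out early
print(mask.mean())          # exactly half of the dimensions continue
```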
Algorithm 1: LCMA implementation for generative flow models.

Pre-Training Phase: Pre-train a network with no multi-scale architecture (no dimensionality reduction) to obtain the log-det term at every flow layer.

Dimension Factorization: In this phase, the dimensions to be factored at each flow layer are decided based on the log-det term at that layer.

1. Compute the individual contribution of each dimension towards the likelihood, $[L_d^{(l)}]_{s \times s \times c}$, specifically for the corresponding flow model (see Section 3.1 and Section 3.2).
2. Convert $[L_d^{(l)}]_{s \times s \times c}$ into an $\frac{s}{2} \times \frac{s}{2} \times 4c$ tensor using local max-pooling and min-pooling ($= -\text{max-pooling}(-\text{input})$) operations (Figure 1) at each flow layer.
3. Among the $4c$ channels, one half contains the dimensions having more log-det than their neighbouring pixels (black in Figure 1), while the other half contains the dimensions having less log-det (white in Figure 1).
4. Split the tensor along the channel dimension into two parts.
5. Forward the dimensions contributing more towards the likelihood into more flow layers and early-gaussianize the ones contributing less.
6. Repeat steps 1-5 at every layer on the dimensions passed to that layer, down to the latent space.

Training Phase: The per-layer factorization decided in the previous phase remains fixed. Finally, the flow model with the proposed LCMA is trained.

Step 1 of the dimension factorization phase requires the individual likelihood contribution of each dimension, $[L_d^{(l)}]_{s \times s \times c}$, whose computation depends on the original design of the flow model. Some flow models offer a direct decomposition of the Jacobian into per-dimension components, whereas for others an indirect estimation method has to be adopted. We now describe methods to obtain the individual likelihood contributions of dimensions for flow models based on affine coupling (RealNVP (Dinh et al., 2016) and Glow (Kingma & Dhariwal, 2018)) and for flow models with ordinary differential equation (ODE) based density estimators (i-ResNet (Behrmann et al., 2018)), all of which involve a multi-scale architecture." }, { "heading": "3.1 ESTIMATION OF PER-DIMENSION LIKELIHOOD CONTRIBUTION FOR AFFINE COUPLING BASED FLOW MODELS", "text": "RealNVP (Dinh et al., 2016): For RealNVP with affine coupling layers, the logarithm of the individual diagonal elements of the Jacobian, summed over the layers up to layer $l$, directly provides the per-dimension likelihood contributions at layer $l$.

Glow (Kingma & Dhariwal, 2018): Unlike RealNVP, where the log-det term of each dimension is the log of the corresponding diagonal element of the Jacobian, Glow contains $1 \times 1$ convolution blocks with a non-diagonal log-det over the channel dimensions; for an $s \times s \times c$ tensor $h$,

$$\log\left|\det\left(\frac{d\,\text{conv2D}(h; W)}{dh}\right)\right| = s \cdot s \cdot \log|\det(W)|. \tag{12}$$

It remains to decompose $\log|\det(W)|$ into individual contributions per channel. As a suitable candidate, the singular values of $W$ correspond to the contributions of the individual channel dimensions, so their log values serve as the individual log-det contributions. The per-channel log-det terms are therefore obtained from

$$|\det(W)| = \prod_i \sigma_i(W) \iff \log|\det(W)| = \sum_i \log(\sigma_i(W)), \tag{13}$$

where $\sigma_i(W)$ are the singular values of the weight matrix $W$. For the affine blocks in Glow, the same method as for RealNVP is adopted.
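A minimal numpy check of Equations 12-13, assuming a random (almost surely invertible) weight matrix; note that crediting $\log \sigma_i(W)$ to the $i$-th channel is exact only up to the rotations in the SVD, and the sizes and names here are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
c, s = 8, 16                               # channels, spatial resolution
W = rng.normal(size=(c, c))                # invertible 1x1-convolution weight

# Eq. 12: total log-det of the 1x1 convolution over an s x s feature map.
total = s * s * np.linalg.slogdet(W)[1]

# Eq. 13: |det W| = prod_i sigma_i(W), so s*s*log(sigma_i) is taken as the
# i-th channel's share of the log-det.
sigmas = np.linalg.svd(W, compute_uv=False)
per_channel = s * s * np.log(sigmas)       # shape (c,)

assert np.isclose(per_channel.sum(), total)   # the shares add up exactly
print(per_channel)
```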
" }, { "heading": "3.2 ESTIMATION OF PER-DIMENSION LIKELIHOOD CONTRIBUTION FOR FLOW MODELS WITH ODE BASED DENSITY ESTIMATORS", "text": "Recent flow models such as Behrmann et al. (2018); Grathwohl et al. (2018); Chen et al. (2019) employ variants of ODE based density estimators. We introduce a method to find the per-dimension likelihood contribution for i-ResNet (Behrmann et al., 2018), a residual network with invertibility and efficient Jacobian computation properties. i-ResNet is modelled as a flow $F(x)$ such that $z = F(x) = (I + g)(x)$, where $g(x)$ is the forward propagation function. The log-likelihood is written with the log-det of the Jacobian expressed as a power series,

$$\ln p_x(x) = \ln p_z(z) + \ln|\det J_F(x)|, \qquad \ln|\det J_F(x)| = \operatorname{tr}\big(\ln(I + J_g(x))\big) = \sum_{k=1}^{\infty} (-1)^{k+1} \frac{\operatorname{tr}(J_g^k)}{k},$$

where $\operatorname{tr}$ denotes the trace. Due to computational constraints, the power series is truncated to a finite number of terms, with $\operatorname{tr}(J_g^k)$ stochastically approximated by Hutchinson's trace estimator, $\operatorname{tr}(A) = \mathbb{E}_{p(v)}[v^T A v]$ with $\mathbb{E}[v] = 0$ and $\operatorname{Cov}(v) = I$. The component of the log-det term corresponding to each dimension is the diagonal element of $J_g^k$, so the per-dimension contribution to the likelihood can be approximated by the diagonal elements of $J_g^k$, summed over the power series up to a finite number of terms $n$. These diagonal elements are obtained from Hutchinson's trace estimator at no extra cost: if $v = [v_1, v_2, \cdots, v_{s \times s \times c}]^T$,

$$\sum_{k=1}^{\infty} (-1)^{k+1} \frac{\operatorname{tr}(J_g^k)}{k} = \sum_{k=1}^{\infty} (-1)^{k+1} \frac{\mathbb{E}_{p(v)}\left[v^T J_g^k v\right]}{k} = \sum_{k=1}^{\infty} (-1)^{k+1} \frac{\mathbb{E}_{p(v)}\left[(v^T J_g^k)\, v\right]}{k}.$$

In the above equation, $v^T J_g^k$ is a vector-Jacobian product, which is multiplied element-wise with $v$. The individual components summed when $v^T J_g^k$ is multiplied with $v$ correspond, in expectation over $p(v)$, to the diagonal terms of the Jacobian. These terms are therefore the individual contributions of the dimensions to the log-likelihood and are collected as $[L_d^{(l)}]_{s \times s \times c}$ for use in step 1 of the dimension factorization phase of the LCMA implementation for i-ResNet.
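A minimal numpy sketch of this per-dimension estimate. The toy linear residual map (scaled so the series converges), the truncation length and the sample count are our assumptions, not the i-ResNet implementation.

```python
import numpy as np

def per_dim_logdet(vjp, d, n_terms=20, n_samples=5000, seed=0):
    """Per-dimension share of ln|det(I + J_g)| via the truncated power
    series and Hutchinson's estimator: the entries summed when (v^T J_g^k)
    is multiplied element-wise with v are the diagonal contributions.
    `vjp(v)` must return the vector-Jacobian product v^T J_g."""
    rng = np.random.default_rng(seed)
    shares = np.zeros(d)
    for _ in range(n_samples):
        v = rng.standard_normal(d)         # E[v] = 0, Cov(v) = I
        w = v.copy()
        for k in range(1, n_terms + 1):
            w = vjp(w)                     # now w = v^T J_g^k
            shares += ((-1) ** (k + 1)) * (w * v) / k
    return shares / n_samples

# Toy residual map g(x) = A x, scaled so the power series converges.
d = 5
A = 0.2 * np.random.default_rng(1).standard_normal((d, d)) / np.sqrt(d)
est = per_dim_logdet(lambda v: A.T @ v, d)             # v^T A, as a vector
print(est.sum(), np.linalg.slogdet(np.eye(d) + A)[1])  # agree in expectation
```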
So Kingma & Dhariwal (2018) treat the dimensions as equiprobable for factorization in their implementation of multi-scale architecture, and split the tensor at each flow layer evenly along the channel dimension. We, on the other hand, take the next step and focus on the individuality of dimensions and their importance from the amount they contribute towards the total log-likelihood. The log-det score is available via direct/indirect decomposition of the jacobian obtained as part of computations in a flow training, so we essentially have a heuristic for free. Since our method focuses individually on the dimensions using a heuristic which is always available, it can prove to be have more versatility in being compatible with generic multi-scale architectures. Hoogeboom et al. (2019) extend the concept of 1 × 1 convolutions to invertible d× d convolutions, but do not discuss about multi-scaling. Dinh et al. (2016) also include a type of permutation which is equivalent to reversing the ordering of the channels, but is more restrictive and fixed. Flow models such as Behrmann et al. (2018); Grathwohl et al. (2018); Chen et al.\n(2019) involve ODE based density estimators. They also implement a multi-scale architecture, but the splitting operation is a static channel wise splitting without considering importance of individual dimensions or any permutations. Izmailov et al. (2019); Durkan et al. (2019); Kumar et al. (2019); Atanov et al. (2019) use multi-scale architecture in their flow models, coherent with Dinh et al. (2016); Kingma & Dhariwal (2018), but perform the factorization of dimensions without any consideration of the individual contribution of the dimension towards the total log-likelihood. For qualitative sampling along with efficient density estimation, we also propose that factorization methods should preserve spatiality of the image in the two splits, motivated by the spatial nature of splitting methods in Kingma & Dhariwal (2018) (channel-wise splitting) and Dinh et al. (2016) (checkerboard and channel-wise splitting)." }, { "heading": "5 EXPERIMENTS", "text": "In Section 3, we established that our proposed likelihood contribution based factorization of dimensions can be implemented for flow models involving a multi-scale architecture, in order to improve their density estimation and qualitative performance. In this section we present the detailed results of proposed LCMA adopted for the flow model of RealNVP (Dinh et al., 2016) and quantitative comparisons with Glow (Kingma & Dhariwal, 2018) and i-ResNet (Behrmann et al., 2018). For direct comparison, all the experimental settings such as data pre-processing, optimizer parameters as well as flow architectural details (coupling layers, residual blocks) are kept the same, except that the factorization of dimensions at each flow layer is performed according to the methods described in Section 3. For ease of access, we also have summarized the experimental details in Appendix A.\nFor RealNVP, we perform experiments on four benchmarked image datasets: CIFAR-10 (Krizhevsky, 2009), Imagenet (Russakovsky et al., 2014) (downsampled to 32× 32 and 64× 64), and CelebFaces Attributes (CelebA) (Liu et al., 2015). The scaling in LCMA is performed once for CIFAR-10, thrice for Imagenet 32 × 32 and 4 times for Imagenet 64 × 64 and CelebA. We compare LCMA with conventional RealNVP and report the quantitative and qualitative results. For Glow and i-ResNet with LCMA, we perform experiments on CIFAR-10 and present improvements over baseline bits/dim. 
We also perform an ablation study comparing LCMA with other possible dimension-splitting options." }, { "heading": "5.1 QUANTITATIVE COMPARISON", "text": "The bits/dim scores of RealNVP with the conventional multi-scale architecture (as introduced in Dinh et al. (2016)) and of RealNVP with LCMA are given in Table 1. The density estimation results using LCMA are in all cases better than the baseline. We observed that the improvement for CelebA is relatively large compared to the natural image datasets. This is expected, as facial features often contain high redundancy and the flow model learns to put more importance (reflected as a high log-det) on the selected dimensions that define the facial features. Our proposed LCMA exposes such dimensions to more flow layers, making them more expressive, hence the significant improvement in code length (bits/dim). The improvement in bits/dim is smaller for natural image datasets because of the high variance among the features defining them, which has long been the challenge for image compression algorithms. Note that the improvement in density estimation is always relative to the original flow architecture (RealNVP in our case) on top of which we use the proposed LCMA, as we alter nothing except the dimension factorization method. The quantitative results of the LCMA implementations for RealNVP, Glow and i-ResNet on CIFAR-10 are summarized in Table 2. The density estimation scores of flows with LCMA outperform those of the same flows with conventional multi-scale architectures." }, { "heading": "5.2 QUALITATIVE COMPARISON", "text": "An ideal dimension factorization method should capture the local variance over the series of flow layers, which helps qualitative sampling. In the LCMA implementation, we introduced local max- and min-pooling operations on the log-det heuristic to decide which dimensions are gaussianized early (Section 3). Figure 2(a) shows samples from the original datasets, Figure 2(b) shows samples from the trained RealNVP flow model with the conventional multi-scale architecture, and Figure 2(c) shows samples from RealNVP with LCMA trained on the various datasets. The finer facial details, such as hair style, eye-lining and facial folds, in the CelebA samples generated by RealNVP with LCMA are perceptually better than those of the baseline. The observed global feature representation is similar to that of RealNVP, as the flow architecture is kept the same. The background of natural images such as ImageNet $32 \times 32$ and $64 \times 64$ is constructed on par with the original flow model. As has been observed for different flow models such as RealNVP and Glow, the latent space holds knowledge about the feature representation in the data. We performed linear interpolations in the latent space to verify its sound construction and generated the corresponding images, shown in Figure 3. The interpolations are smooth, with intermediate samples perceptibly resembling synthetic faces, signifying an efficiently constructed latent space. More interpolations are included in Appendix B." }, { "heading": "5.3 ABLATION STUDY", "text": "We performed ablation studies to compare LCMA with other methods of dimension factorization in a multi-scale architecture.
We consider four variants in our study: fixed random permutation (Case 1), a multi-scale architecture with early gaussianization of high log-det dimensions (Case 2), the factorization method with checkerboard and channel splitting introduced in RealNVP (Case 3), and a multi-scale architecture with early gaussianization of low log-det dimensions, which is our proposed LCMA (Case 4). In fixed random permutation, we randomly partition the tensor into two halves, with no regard to spatiality or log-det score. In Case 2, we do the reverse of LCMA and gaussianize the high log-det variables early. The bits/dim score and generated samples for each method are given in Table 3. As expected from an information-theoretic perspective, gaussianizing high log-det variables early yields the worst density estimation, as the model cannot capture the large amount of important information. Compared with this, fixed random permutation has a better score, as the probability of a high log-det variable being gaussianized early reduces to one half, and the score improves further for RealNVP due to its channel-wise and checkerboard splitting. LCMA has the best score among all methods, as the variables carrying more information are exposed to more flow layers. Fixed random permutation has the worst quality of sampled images, as spatiality is lost during factorization. The sample quality improves for Case 2 and RealNVP, and the sampled images are perceptually best for LCMA. Summarizing, LCMA outperforms multi-scale architectures based on the other factorization methods, as it improves density estimation and generates qualitative samples." }, { "heading": "6 CONCLUSIONS", "text": "We proposed a novel multi-scale architecture for generative flows that employs a data-dependent splitting based on the individual contribution of dimensions to the total log-likelihood. Implementations of the proposed method for several state-of-the-art flow models, namely RealNVP (Dinh et al., 2016), Glow (Kingma & Dhariwal, 2018) and i-ResNet (Behrmann et al., 2018), were presented. Empirical studies conducted on benchmark image datasets validate the strength of our proposed method, which improves log-likelihood scores and generates qualitative samples. The ablation study confirms the advantage of LCMA over other options for dimension factorization." }, { "heading": "A EXPERIMENTAL SETTINGS", "text": "For direct comparison with Dinh et al. (2016), data pre-processing, optimizer parameters and flow architectural details (coupling layers, residual blocks) are kept the same, except that the factorization of dimensions at each flow layer is performed according to the method described in Section 3. In this section, for ease of access, we summarize the experimental settings.

Datasets: We perform experiments on four benchmark image datasets: CIFAR-10 (Krizhevsky, 2009), ImageNet (Russakovsky et al., 2014) (downsampled to $32 \times 32$ and $64 \times 64$), and CelebFaces Attributes (CelebA) (Liu et al., 2015).

Pre-processing: For CelebA, we take a central crop of $148 \times 148$ and then resize it to $64 \times 64$. For dequantization of images (whose values lie in $[0, 256]^D$), the data is transformed to $\operatorname{logit}\!\big(\alpha + (1 - \alpha)\frac{x}{256}\big)$, where $\alpha = 0.05$. The samples are allocated to training and validation as per the official splits of the datasets.

Flow model architecture: We use affine coupling layers as introduced by Dinh et al. (2016); a reference sketch follows.
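The following is a minimal 1-D numpy sketch of an affine coupling transform in the spirit of Dinh et al. (2016); the toy conditioners stand in for the convolutional residual blocks used here, and all names and sizes are our assumptions.

```python
import numpy as np

def affine_coupling(x, mask, s_net, t_net):
    """RealNVP-style affine coupling: the masked part passes through
    unchanged and parameterizes an element-wise affine transform of the
    rest; log|det| is the sum of the active log-scales."""
    x_fixed = x * mask
    s, t = s_net(x_fixed), t_net(x_fixed)
    y = x_fixed + (1 - mask) * (x * np.exp(s) + t)
    return y, float(np.sum((1 - mask) * s))

rng = np.random.default_rng(0)
dim = 8
Ws = rng.normal(size=(dim, dim)) * 0.1     # toy conditioner weights
Wt = rng.normal(size=(dim, dim)) * 0.1
mask = (np.arange(dim) % 2).astype(float)  # 1-D checkerboard-like mask
y, log_det = affine_coupling(rng.normal(size=dim), mask,
                             lambda h: np.tanh(h @ Ws),
                             lambda h: h @ Wt)
print(y.shape, log_det)
```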
A layer of flow is defined as 3 coupling layers with checkerboard splits at $s \times s$ resolution followed by 3 coupling layers with channel splits at $\frac{s}{2} \times \frac{s}{2}$ resolution, where $s$ is the resolution at the input of that layer. For datasets with resolution 32 we use 3 such layers, and for those with resolution 64 we use 4 layers. The cascade of layers is followed by 4 coupling layers with checkerboard splits at the final resolution, marking the end of the flow composition. For CIFAR-10, each coupling layer uses 8 residual blocks. The other datasets with $32 \times 32$ images use 4 residual blocks, whereas the $64 \times 64$ ones use 2 residual blocks. More details on the architectures will be given in a source code release.

Optimization parameters: We optimize with the ADAM optimizer (Kingma & Ba, 2014) with default hyperparameters and use an L2 regularization on the weight-scale parameters with coefficient $5 \cdot 10^{-5}$. A batch size of 64 is used. The computations were performed on NVIDIA Tesla V100 GPUs.

Multi-scale architecture: Scaling is done once for CIFAR-10, three times for ImageNet $32 \times 32$, and four times for ImageNet $64 \times 64$ and CelebA." }, { "heading": "B INTERPOLATIONS AMONG TWO IMAGES FROM CELEBA DATASET", "text": "Figure 4 presents more interpolation examples obtained using our model between two images from the CelebA dataset." }, { "heading": "C ADDITIONAL SAMPLES", "text": "In this section, we present more samples from the RealNVP model with the likelihood contribution based multi-scale architecture, trained on the different datasets." } ]
2019
null
SP:fb717cacd65d17e3d1971170a82b902ee94d4dfc
[ "In this paper, the authors study the adversarial example generation problem, in the difficult case where the attacked model is a black box. Since the model is unknown, the approaches based on the minimization of a loss function with a gradient based optimizer do not apply. The current alternatives, known as decision-based attack, use iterative local updates from a starting point to a local minimum, where the class of the adversarial example is different from the initial example while its distance stays close to the initial one.", "This paper proposes a meta-algorithm for the so-called \"decision-based attack\" problem, where a model that can be accessed only via label queries for a given input is attacked by a minimal perturbation to the input that changes the predicted label. The algorithm, BOSH, augments any iterative algorithm for this problem with a diversification strategy based on bayesian optimization and throwing away bad solutions. Empirically, it is shown that BOSH can improve the performance of recently developed algorithms for this problem, by exploring more solutions and refining them intelligently." ]
Adversarial example generation has become a viable method for evaluating the robustness of a machine learning model. In this paper, we consider hard-label black-box attacks (a.k.a. decision-based attacks), a challenging setting that generates adversarial examples based only on a series of black-box hard-label queries. This type of attack can be used against discrete and complex models, such as Gradient Boosting Decision Trees (GBDT) and detection-based defense models. Existing decision-based attacks based on iterative local updates often get stuck in a local minimum and fail to generate the optimal adversarial example with the smallest perturbation. To remedy this issue, we propose an efficient meta algorithm called BOSH-attack, which tremendously improves existing algorithms through Bayesian Optimization (BO) and Successive Halving (SH). In particular, instead of traversing a single solution path when searching for an adversarial example, we maintain a pool of solution paths to explore important regions. We show empirically that the proposed algorithm converges to a better solution than existing approaches, while its query count is smaller than that of applying multiple random initializations by a factor of 10.
[]
[ { "authors": [ "Moustafa Alzantot", "Yash Sharma", "Supriyo Chakraborty", "Mani Srivastava" ], "title": "Genattack: Practical black-box attacks with gradient-free optimization", "venue": "arXiv preprint arXiv:1805.11090,", "year": 2018 }, { "authors": [ "James S Bergstra", "Rémi Bardenet", "Yoshua Bengio", "Balázs Kégl" ], "title": "Algorithms for hyper-parameter optimization", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Wieland Brendel", "Jonas Rauber", "Matthias Bethge" ], "title": "Decision-based adversarial attacks: Reliable attacks against black-box machine learning models", "venue": "arXiv preprint arXiv:1712.04248,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Adversarial examples are not easily detected: Bypassing ten detection methods", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Hongge Chen", "Huan Zhang", "Pin-Yu Chen", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "Attacking visual language grounding with adversarial examples: A case study on neural image captioning", "venue": "arXiv preprint arXiv:1712.02051,", "year": 2017 }, { "authors": [ "Hongge Chen", "Huan Zhang", "Duane Boning", "Cho-Jui Hsieh" ], "title": "Robust decision trees against adversarial examples", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Jianbo Chen", "Michael I Jordan", "Martin J. Wainwright" ], "title": "Hopskipjumpattack: A query-efficient decision-based attack", "venue": "arXiv preprint arXiv:1904.02144,", "year": 2019 }, { "authors": [ "Pin-Yu Chen", "Yash Sharma", "Huan Zhang", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "Ead: elastic-net attacks to deep neural networks via adversarial examples", "venue": "In Thirty-second AAAI conference on artificial intelligence,", "year": 2018 }, { "authors": [ "Minhao Cheng", "Thong Le", "Pin-Yu Chen", "Jinfeng Yi", "Huan Zhang", "Cho-Jui Hsieh" ], "title": "Queryefficient hard-label black-box attack: An optimization-based approach", "venue": "arXiv preprint arXiv:1807.04457,", "year": 2018 }, { "authors": [ "Minhao Cheng", "Simranjit Singh", "Patrick Chen", "Pin-Yu Chen", "Sijia Liu", "Cho-Jui Hsieh" ], "title": "Sign-opt: A query efficient hard-label adversarial attack", "venue": null, "year": 1909 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Stefan Falkner", "Aaron Klein", "Frank Hutter" ], "title": "Bohb: Robust and efficient hyperparameter optimization at scale", "venue": "arXiv preprint arXiv:1807.01774,", "year": 2018 }, { "authors": [ "Reuben Feinman", "Ryan R. Curtin", "Saurabh Shintre", "Andrew B. Gardner" ], "title": "Detecting adversarial samples from artifacts", "venue": "arXiv preprint arXiv:1703.00410,", "year": 2017 }, { "authors": [ "Thomas A Feo", "Mauricio GC Resende" ], "title": "Greedy randomized adaptive search procedures", "venue": "Journal of global optimization,", "year": 1995 }, { "authors": [ "Fred Glover", "Manuel Laguna" ], "title": "Tabu search. 
Handbook of combinatorial optimization", "venue": "Kluwer Academic Publishers,", "year": 1999 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Andrew Ilyas", "Logan Engstrom", "Anish Athalye", "Jessy Lin" ], "title": "Black-box adversarial attacks with limited queries and information", "venue": "arXiv preprint arXiv:1804.08598,", "year": 2018 }, { "authors": [ "Andrew Ilyas", "Logan Engstrom", "Aleksander Madry" ], "title": "Prior convictions: Black-box adversarial attacks with bandits and priors", "venue": "arXiv preprint arXiv:1807.07978,", "year": 2018 }, { "authors": [ "Kevin Jamieson", "Ameet Talwalkar" ], "title": "Non-stochastic best arm identification and hyperparameter optimization", "venue": "In Artificial Intelligence and Statistics,", "year": 2016 }, { "authors": [ "Alex Kantchelian", "J Doug Tygar", "Anthony Joseph" ], "title": "Evasion and hardening of tree ensemble classifiers", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "Cifar-10 (canadian institute for advanced research)", "venue": "URL http://www.cs.toronto.edu/~kriz/cifar.html,", "year": 2010 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "arXiv preprint arXiv:1611.01236,", "year": 2016 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "K. Lee", "H. Lee", "J. Shin" ], "title": "A simple unified framework for detecting out-of-distribution samples and adversarial attacks", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Lisha Li", "Kevin Jamieson", "Giulia DeSalvo", "Afshin Rostamizadeh", "Ameet Talwalkar" ], "title": "Hyperband: A novel bandit-based approach to hyperparameter optimization", "venue": "arXiv preprint arXiv:1603.06560,", "year": 2016 }, { "authors": [ "Xingjun Ma", "Bo Li", "Yisen Wang", "Sarah M. Erfani", "Sudanthi Wijewickrema", "Grant Schoenebeck", "Dawn Song", "Michael E. Houle", "James Bailey" ], "title": "Characterizing adversarial subspaces using local intrinsic dimensionality", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Sébastien Marcel", "Yann Rodriguez" ], "title": "Torchvision the machine-vision package of torch", "venue": "In Proceedings of the 18th ACM international conference on Multimedia,", "year": 2010 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Riccardo Moriconi", "Marc Peter Deisenroth", "K.S.
Sesh Kumar" ], "title": "High-dimensional bayesian optimization using low-dimensional feature spaces", "venue": null, "year": 2019 }, { "authors": [ "Martin Pelikan", "David E Goldberg", "Erick Cantú-Paz" ], "title": "Boa: The bayesian optimization algorithm", "venue": "In Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation-Volume", "year": 1999 }, { "authors": [ "Jasper Snoek", "Hugo Larochelle", "Ryan P Adams" ], "title": "Practical bayesian optimization of machine learning algorithms", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Chun-Chen Tu", "Paishun Ting", "Pin-Yu Chen", "Sijia Liu", "Huan Zhang", "Jinfeng Yi", "Cho-Jui Hsieh", "Shin-Ming Cheng" ], "title": "Autozoom: Autoencoder-based zeroth order optimization method for attacking black-box neural networks", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Ziyu Wang", "Masrour Zoghi", "Frank Hutter", "David Matheson", "Nando De Freitas" ], "title": "Bayesian optimization in high dimensions via random embeddings", "venue": "In Twenty-Third International Joint Conference on Artificial Intelligence,", "year": 2013 }, { "authors": [ "Puyudi Yang", "Jianbo Chen", "Cho-Jui Hsieh", "Jane-Ling Wang", "Michael I Jordan" ], "title": "Ml-loo: Detecting adversarial examples with feature attribution", "venue": null, "year": 1906 } ]
[ { "heading": "1 INTRODUCTION", "text": "It has been shown that machine learning models, including deep neural networks, are vulnerable to adversarial examples (Goodfellow et al., 2014; Szegedy et al., 2013; Chen et al., 2017a). Therefore, evaluating the robustness of a given model becomes crucial for security sensitive applications. In order to evaluate the robustness of deep neural networks, researchers have developed “attack algorithms” to generate adversarial examples that can mislead a given neural network while being as close as possible to the original example (Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2016; Carlini & Wagner, 2017b; Chen et al., 2017b). Most of these attack methods are based on maximizing a loss function with a gradient-based optimizer, where the gradient is either computed by back-propagation (in the white-box setting) or finite-difference estimation (in the soft-label blackbox setting). Although these methods work well on standard neural networks, when it comes to complex or even discontinuous models, such as decision trees and detection-based defense models, they cannot be directly applied because the gradient is not available.\nHard-label black-box attacks, also known as decision-based attacks, consider the most difficult but realistic setting where the attacker has no information about the model structure and parameters, and the only valid operation is to query the model to get the corresponding decision-based (hard-label) output (Brendel et al., 2017). This type of attacks can be used as a “universal” way to evaluate robustness of any given models, no matter continuous or discrete. For instance, Cheng et al. (2018); Chen et al. (2019a) have applied decision-based attacks for evaluating robustness of Gradient Boosting Decision Trees (GBDT) and random forest. Current decision-based attacks, including Brendel et al. (2017); Cheng et al. (2018); Chen et al. (2019b); Cheng et al. (2019), are based on iterative local updates – starting from an initial point on the decision surface, they iteratively move the points along the surface until reaching a local minimum (in terms of distance to the original example). The update is often based on gradient estimation or some other heuristics. However, the local update nature makes these methods sensitive to the starting point. As we demonstrate in Figure 1(a), the perturbation of converged adversarial examples for a neural network are quite different for different initialization configurations, and this phenomenon becomes more severe when it comes to discrete models such as GBDTs (see Figure 1(b)). This makes decision-based attacks converge to a sub-\noptimal perturbation. As a result, the solution cannot really reflect the robustness of the targeted model.\nTo overcome these difficulties and make decision-based attacks better reflect the robustness of models, we propose a meta algorithm called BOSH-attack that consistently boosts the solution quality of existing iterative local update based attacks. Our main idea is to combine Bayesian optimization, which finds solution closer to global optimum but suffers from high computation cost, with iterative local updates, which converges fast but often get stuck in local minimum. Specifically, given a decision based attack A, our algorithm maintains a pool of solutions and at each iteration we run A for m steps on each solution. 
The proposed Bayesian Optimization resampling (BO) and Successive Halving (SH) are then used to explore important regions of the solution space based on the current information and to cut unnecessary solution paths.

Our contributions are summarized below:

1. We conduct thorough experiments showing that current decision-based attacks often converge to a local optimum, so further improvements are required. 2. Based on the ideas of Bayesian optimization and successive halving, we design a meta algorithm that boosts the performance of current decision-based attack algorithms and encourages them to find a much smaller adversarial perturbation efficiently. 3. Comprehensive experiments demonstrate that BOSH-attack consistently boosts existing decision-based attacks to find better examples with much smaller perturbations. In addition to standard neural network models, we also test our algorithm on attacking discrete GBDT models and detector-based defense models. Moreover, our algorithm reduces the computation cost by 10x compared to the naive approach." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "Given a classification model $F: \mathbb{R}^d \to \{1, \ldots, C\}$ and an example $x_0$, adversarial attacks aim to find the adversarial example that is closest to $x_0$. For example, an untargeted attack aims to find the minimum perturbation that changes the predicted class, which corresponds to the following optimization problem:

$$\min_{\delta} \|\delta\| \quad \text{s.t.} \quad F(x_0 + \delta) \neq F(x_0). \tag{1}$$

Exactly minimizing (1) is usually intractable; therefore, we can only expect to obtain a feasible solution of (1) while hoping that $\|\delta\|$ is as small as possible.

White-box Attack. For neural networks, the constraint in (1) can be replaced by a loss function defined on the logit-layer output, leading to a continuous optimization problem that can be solved by gradient-based optimizers. This approach has been used in popular methods such as FGSM (Goodfellow et al., 2014), the C&W attack (Carlini & Wagner, 2017b) and the PGD attack (Madry et al., 2017). All these white-box attacks developed for neural networks assume the existence of the gradient. However, for models with discrete components such as GBDT, the objective cannot be easily defined and gradient-based white-box attacks are not applicable. A few white-box attacks have been developed for specific discrete models, such as the Mixed Integer Linear Programming (MILP) approach for attacking tree ensembles (Kantchelian et al., 2016). However, those algorithms are time consuming and require a significant development effort for each model.

Soft-label Black-box Attack. The black-box setting considers the case where an attacker has no direct access to the model's parameters and architecture, and the only valid operation is to query input examples and get the corresponding model output. In the soft-label black-box setting, it is assumed that the model outputs the probability of each label for an input query. Chen et al. (2017b) showed that the attack can still be formulated as an optimization problem in which the objective function value can be computed while the gradient is unavailable. Based on this, various zeroth-order optimization algorithms have been proposed, including the NES attack (Ilyas et al., 2018a), the EAD attack (Chen et al., 2018), the bandit attack (Ilyas et al., 2018b), AutoZOOM (Tu et al., 2019), and a genetic algorithm (Alzantot et al., 2018).

Hard-label Black-box Attack (Decision-based Attack). In this paper, we focus on the hard-label black-box attack (also known as decision-based attack).
In contrast to the soft-label setting, the attacker can only query the model and obtain the top-1 predicted label without any probability information. To minimize the perturbation in the decision-based setting, Brendel et al. (2017) first proposed the Boundary Attack, based on a random walk on the decision surface. Later, Cheng et al. (2018) showed that the hard-label attack can be reformulated as another continuous optimization problem, and zeroth-order optimization algorithms such as NES can be used to solve it. Cheng et al. (2019) further reduce the number of queries by computing only the sign of the zeroth-order update, and Chen et al. (2019b) proposed another algorithm improving over the Boundary Attack. Such methods converge quickly but suffer from local-optimum problems, and thus require a more careful and thorough search of the solution space.

Probability-based Black-box Optimization. There are two commonly used families of methods for solving a black-box or non-differentiable optimization problem: gradient-based and probabilistic algorithms. Gradient-based methods rely on iterative local updates until convergence, whereas probabilistic algorithms such as Bayesian optimization (BO) (Pelikan et al., 1999; Snoek et al., 2012) approximate the objective function with a probabilistic model. Generally speaking, gradient-based methods are commonly used in black-box attacks because they converge fast. However, they often get stuck in locally optimal directions, especially when the search space is high-dimensional and non-convex. Probabilistic algorithms are frequently used in low-dimensional problems such as hyperparameter tuning and have a better chance of finding globally better values (Snoek et al., 2012; Bergstra et al., 2011), but their computation cost grows rapidly with the dimension and quickly becomes unacceptable, so they cannot be directly applied to generating adversarial examples. In this paper, we combine Bayesian optimization and iterative local updates to improve the solution quality of current attack algorithms while still scaling to high-dimensional problems.

Combinatorial Heuristics and Genetic Algorithms. Various heuristic algorithms are commonly applied to combinatorial optimization problems. These algorithms balance greediness and randomness: they typically search in different directions, drop the bad candidates, and then focus on the relatively good ones. Greedy randomized adaptive search (Feo & Resende, 1995) finds good solutions iteratively: it first generates a set of solutions and ranks them with a greedy function; good candidates are then placed in a restricted candidate list and chosen randomly when forming the solution. Tabu search (Glover & Laguna, 1999) selects a candidate and checks its immediate neighbors, trying to find an improved solution; to avoid getting stuck in locally optimal areas, it maintains a tabu list of past (often locally optimal) solutions and prevents later searches from revisiting those areas. Other approaches, such as Genetic Algorithms (GA) and Simulated Annealing (SA), also incorporate randomness into the search process. In this paper, we simply use Successive Halving (SH) (Jamieson & Talwalkar, 2016) to iteratively remove unimportant candidate configurations; the details are given in Section 3.1."
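To make the hard-label setting concrete, the sketch below wraps a toy model behind a query-counting oracle and checks the feasibility constraint of problem (1). This is a minimal sketch: the linear model and every name here are illustrative assumptions, not part of the paper.

```python
import numpy as np

def make_oracle(model_predict, counter):
    """The only operation available in the decision-based setting:
    query an input, observe the top-1 label, and pay one query."""
    def oracle(x):
        counter[0] += 1
        return model_predict(x)
    return oracle

def is_adversarial(oracle, x0, y0, delta):
    # Feasibility check of problem (1): F(x0 + delta) != F(x0).
    return oracle(x0 + delta) != y0

# Toy linear "black box" used only to exercise the query interface.
w = np.array([1.0, -2.0])
counter = [0]
oracle = make_oracle(lambda x: int(x @ w > 0), counter)
x0 = np.array([1.0, 0.2])
y0 = oracle(x0)
print(is_adversarial(oracle, x0, y0, np.array([-2.0, 0.0])), counter[0])
```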
}, { "heading": "3 THE PROPOSED ALGORITHM", "text": "Observation: Decision based attacks are easily stuck in local optimum. Most existing adversarial attacks adopt iterative local updates to find adversarial examples – starting from an initial point, they iteratively update the solution until convergence. For example, in white-box attacks such as C&W and PGD methods, they aim at optimizing a non-convex loss function by iterative gradient updates. In Figure 1(a), we plot the 2-dimensional projection of the decision surface of a neural network. We can observe that there are two local minimums1. Results show that there are two local minimums and the attack algorithm converges to one of them according the the initialization region. Similarly, in decision-based attacks, existing methods start from some point on the decision surface and then iteratively update the point locally on the surface either by gradient update (Cheng et al., 2018; Chen et al., 2019b) or random walk (Brendel et al., 2017). In Figure 1(b) we plot the decision surface of a GBDT. We observe a similar issue that there are many local minima in the GBDT decision boundary.\n1Here local minimum indicates a point on the decision boundary that has shortest distance to the original example, compared to other nearby points on the decision boundary. Those local minimums are the points where a decision-based attack can converge to.\nWe further quantify how serious the problem is. On an MNIST network, Figure 2(a) shows the distribution of converged adversarial perturbations of C&W attack (white-box attack) and SignOPT attack (decision-based attack) under approximately 400 random initial points. We observe that the converged solutions of C&W attack are quite concentrated between [1.41, 1.47]. However, when considering decision-based attack such as Sign-OPT, the converged solutions are widely spread from 1.36 to 1.55. In general, our experiments suggested that decision based attacks are much more sensitive to initialization. This is because they only update solutions on the decision boundary while C&W and PGD attack can update solution inside/outside the boundary.\nFurthermore, such phenomenon is obvious when the victim model is GBDT. For example, in Figure 2(b) we can see the converged solution spread from 0.5 to 1.5 when applying Sign-OPT attack. Therefore, the solution of any single run of Sign-OPT on GBDT cannot really reflect the minimum adversarial perturbation of the given model, and thus it is crucial to design an algorithm that converges to a better solution. Since the phenomenon is more severe for decision based attacks, we will mainly focus on improving the quality of decision based attacks in this paper, while in general our method can also be used to boost the performance of white-box attacks marginally, as illustrated in Appendix A." }, { "heading": "3.1 A GENERAL MECHANISM FOR IMPROVED DECISION BASED ATTACK", "text": "Given a local update based attackA, our goal is to find a solution with improved quality. To this end, we propose a meta algorithm to address this issue by integrating probability-based (Bayesian) blackbox optimization with iterative local updates. As shown in Algorithm 1, our algorithm maintains a candidate pool Pa that stores all the active configurations, where each configuration u ∈ Pa is an intermediate iterate of algorithm A. Also, we assume that there is an attack objective C such that C(u) measures the quality of the solution. 
For decision based attacks, the goal is to find the optimal direction to minimize the distance to the boundary along that direction (Cheng et al., 2018).\nTherefore, u is the direction of adversarial perturbation and\nC(u) = min λ>0\nλ s.t. f ( x0 + λ u\n‖u‖\n) 6= y0,\nwhere y0 is the correct label. This can be computed by a fine-grained plus binary search procedure (see Cheng et al. (2018)), and in fact, in most of the algorithms C(u) is directly maintained during the optimization procedure (Brendel et al., 2017; Cheng et al., 2019).2 At each iteration, we run m iterations of A on each active configuration u ∈ Pa to get the improved configurations. Then we conduct the following two operations to reduce the candidate pool size and to resample new configurations to explore important subspace based on Bayesian optimization. We discuss each step in details as below.\nSuccessive Halving (SH) to cut unimportant candidate configuration. After updating each candidate by m iterations, we compute the objective function value of each candidate and discard the worst half of them. Iteratively reducing the candidate set into half accelerates the algorithm, while still maintaining an accurate solution pool. This idea has been used in hyperparameter search (Jamieson & Talwalkar, 2016) but has not been used in adversarial attack.\nBayesian Optimization (BO) for Guided Resampling. To introduce variance in the intermediate steps and explore other important region, we propose a guided resampling strategy to refine the candidate pool. The general idea is to resample from the solution space in the middle step based on the knowledge acquired before and focus on promising subareas. Specifically, we use a Bayesian optimization method called Tree Parzen Estimator (TPE) (Bergstra et al., 2011) to resample new configurations.\nIn order to do resampling, we maintain another pool Ps that stores all the previous iterations performed including the cutted ones, since all the information will be useful for resampling. As shown in Algorithm 2, we first divide the observed data in Ps into worse and better parts based on the associated objective function value. We then train two separate Kernel Density Estimators (KDE) denoted as l(·) and g(·) on these two subsets.{\nl(u) = p(C(u) ≤ α|u,Ps), g(u) = p(C(u) > α|u,Ps).\n(2)\nThe parameter α is set to 20%, which ensures the better part l(u) has 20% of configurations in Ps and the worse part g(u) has the remaining 80%, relatively. Later, we sample new data with the minimum value of l(·)/g(·), which can be proved to have maximum relative improvement in Equation 4 (see more information in Appendix B). Since we can not directly find such points, we sample for a few times (the number is set to 100 during the experiment) from l(·) and keep the one with the minimal l(·)/g(·).\n2When combining our method with white-box attacks, u will be a d-dimensional vector in the input space, and C(u) will be the objective defined in C&W or PGD attack.\nAlgorithm 1 The proposed BOSH attack framework. Input: Model f , original example x0, attack objective C, gradient-based attack algorithmA, cutting\ninterval M , cutting rate s, cutting interval increase rate m. 1: Randomly sample k initial configurations to form Pa (Gaussian or uniform random). 2: Ps ← Pa. 3: for t = 1, 2, . . . do 4: for each ut ∈ Pa do // perform attack on all configurations 5: for j = 1, . . . ,M do // conduct M iterations before cutting 6: u′t ← A(ut) 7: Ps ← Ps ∪ {(u′t, C(u′t))}. 
Algorithm 2 Tree Parzen Estimator resampling.
Input: Observed data Ps, resample times T;
1: Initialize Pl as an empty list;
2: Divide u ∈ Ps into two subsets L (better) and H (worse) based on the objective function C;
3: Build two separate KDEs on L and H, denoted l(·) and g(·) respectively;
4: Use grid search to find the best KDE bandwidths bl, bg for l(·) and g(·);
5: for each t ∈ [0, T] do
6:   initialization: k = 0, min_score = ∞;
7:   while k < max_sample_times do
8:     Sample utk from l(·);
9:     if min_score > g(utk)/l(utk) then
10:      ut ← utk;
11:      min_score = g(utk)/l(utk);
12:    k ← k + 1
13:  Pl ← Pl ∪ {(ut, C(ut))};
14: return Pl;
The reason we use TPE for resampling is that its computational cost grows linearly with the number of data points in Ps. In comparison, a traditional Bayesian optimization method like a Gaussian Process (GP) requires cubic time to generate new points. Therefore, TPE is more suitable for high-dimensional problems.
In the experiments we find that the final best configuration mostly comes from resampling, instead of the set of starting configurations. This demonstrates the effectiveness of resampling during the search; the quantitative results will be shown in Section 4.2.
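For concreteness, below is a minimal Python sketch of the TPE resampling step in Algorithm 2, using scipy's Gaussian KDE. The 20%/80% split, the sample budget, and the use of gaussian_kde (which assumes low-dimensional u with more observations than dimensions, unlike a tree-structured estimator) are simplifying assumptions.

import numpy as np
from scipy.stats import gaussian_kde

def tpe_resample(P_s, objective, n_new, quantile=0.2, max_sample_times=100):
    """Resample n_new configurations from the 'better' KDE l(.),
    keeping, for each draw, the candidate with the smallest g(u)/l(u).

    P_s: list of (u, C(u)) pairs observed so far, u being a 1-D numpy array.
    objective: callable evaluating C(u) for a new configuration.
    """
    us = np.stack([u for u, _ in P_s], axis=1)        # shape (d, n), as gaussian_kde expects
    cs = np.array([c for _, c in P_s])
    alpha = np.quantile(cs, quantile)                 # split threshold
    better, worse = us[:, cs <= alpha], us[:, cs > alpha]
    l, g = gaussian_kde(better), gaussian_kde(worse)  # KDEs on the two subsets
    new_pool = []
    for _ in range(n_new):
        cands = l.resample(max_sample_times)          # draw candidates from l(.)
        scores = g(cands) / np.maximum(l(cands), 1e-12)
        u_best = cands[:, np.argmin(scores)]          # min g/l == max l/g (max EI)
        new_pool.append((u_best, objective(u_best)))
    return new_pool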
" }, { "heading": "4 EXPERIMENTS", "text": "We conduct experiments on various models and datasets to verify the efficiency and effectiveness of the proposed approach. We enhance the performance of decision-based attacks on image classification tasks (MNIST, CIFAR-10 and ImageNet), and also conduct experiments on tree models (GBDT) and detection models (LID). Furthermore, we demonstrate that our meta algorithm is also able to improve existing white-box attacks." }, { "heading": "4.1 DECISION-BASED ATTACK ON NEURAL NETWORKS", "text": "We conduct experiments on three standard datasets: MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky et al., 2010) and ImageNet-1000 (Deng et al., 2009). The neural network architectures are the same as the ones reported in Cheng et al. (2018): for both MNIST and CIFAR we use a network with four convolution layers, two max-pooling layers and two fully-connected layers, which achieves 99.5% accuracy on MNIST and 82.5% accuracy on CIFAR-10 as reported in (Carlini & Wagner, 2017b; Cheng et al., 2018). For ImageNet, we use the pretrained ResNet-50 (He et al., 2016) network provided by torchvision (Marcel & Rodriguez, 2010), which achieves a Top-1 accuracy of 76.15%. We randomly select 100 examples from the test sets for evaluation. The parameters of the proposed algorithms can be found in Table 6 in Appendix D.
Improved solution quality of existing methods. We compare the solution quality of the proposed algorithm with three existing decision-based attack methods: Boundary attack (Brendel et al., 2017), OPT attack (Cheng et al., 2018) and Sign-OPT attack (Cheng et al., 2019) on the MNIST, CIFAR-10 and ImageNet datasets. For our algorithm, we use the Sign-OPT attack as the base algorithm and set k = 30 for the initial candidate pool. The average L2 perturbations of our method and the baselines are presented in Table 8. Note that all the decision-based attacks maintain intermediate iterates on the decision boundary, so they always output a successful attack. The main comparison is therefore the average L2 perturbation needed to alter the predictions. We also follow Cheng et al. (2018) and report the Attack Success Rate (ASR), computed as the ratio of adversarial examples with perturbation < ε (ε is chosen based on the task). The results show that our method helps decision-based attacks achieve lower L2 perturbation and higher attack success rate. The detailed analysis is given in the next section.
The proposed algorithm can also be used to boost the performance of other decision-based attacks. Table 7 in the Appendix demonstrates that the proposed algorithm consistently improves the L2 perturbation and success rate of the Boundary attack and the OPT attack." }, { "heading": "4.2 ANALYSIS", "text": "We then conduct a study to test each component of our algorithm and compare with the baselines. The experiment is done on MNIST using the Sign-OPT attack as the base attack method. The results are summarized in Table 2.
Comparison with the naive multiple-initialization approach. A naive way to improve the solution quality of an existing attack is to run the attack from multiple random initialization points. This strategy has been used in white-box attacks3 and is also applicable to decision-based attacks. We compare Sign-OPT with 30, 50, 100 initial points and the proposed BOSH-boosted Sign-OPT approach in Table 2. The results demonstrate that successive halving requires far fewer queries than naively running multiple initial configurations. Due to resampling, the proposed approach converges to a better solution under the same initial pool size. For example, to achieve an average L2 perturbation of 0.91, BOSH-boosted Sign-OPT requires 10 times fewer queries than multi-initial Sign-OPT.
Size of the initial pool. The size of the initial pool (denoted by k in our algorithm) is an important parameter. Table 2 shows that increasing k only has a marginal effect after k ≈ 30. When the cutting and resampling mechanisms are introduced into the Sign-OPT attack, the final best perturbation is less sensitive to the number of starting directions, which means that resampling tends to make the search less dependent on the starting directions. A detailed discussion is in Appendix C.
3See the leaderboard at https://github.com/MadryLab/mnist_challenge
Effect of successive halving and TPE resampling. We study the effect of these two components separately. As shown in Figure 3(a), successive halving keeps discarding the worst s percent of configurations at each interval until there is only one configuration left. When combining this with resampling, as in Figure 3(b), our algorithm finds directions that are better than the original ones. We observed empirically that the final best direction often comes from resampling instead of the original starting directions, which demonstrates the importance of resampling in the intermediate steps. Furthermore, Table 2 shows that combining Sign-OPT with successive halving alone (second column) yields worse solutions than BOSH Sign-OPT. This indicates that resampling is important for obtaining a better solution.
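As a back-of-the-envelope illustration of the query savings discussed above, the sketch below compares the budget of naively running k independent searches against a halving schedule that discards half of the pool every cutting interval; the concrete numbers are illustrative assumptions, not the paper's reported budgets.

def naive_queries(k, T):
    # k independent restarts, each run for T iterations.
    return k * T

def halving_queries(k, M, m=0.0):
    # The pool halves after every cutting interval; the interval grows by (1 + m).
    total, pool, interval = 0, k, M
    while pool >= 1:
        total += pool * interval
        if pool == 1:
            break
        pool //= 2
        interval = int(interval * (1 + m))
    return total

# Hypothetical example with 30 starting directions.
print(naive_queries(k=30, T=8000))           # 240000 queries
print(halving_queries(k=30, M=1000, m=0.5))  # far fewer queries for the same pool size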
What is the best cutting interval? The parameter M decides how many iterations of the base attacker are applied before the next cutting/resampling stage, and it is an important parameter to tune. If M is too small, some solution paths will be wrongly thrown away; if M is too large, the whole procedure requires a large number of queries. In our experiments, we use a subset of images to tune this parameter and find that images in the same dataset often share a similar best cutting interval. This avoids a lot of unnecessary computation. The parameters for different datasets are shown in Appendix D." }, { "heading": "4.3 DECISION-BASED ATTACK ON OTHER MODELS", "text": "We conduct untargeted attacks on gradient boosting decision trees (GBDT). Since Sign-OPT does not include experiments on GBDT, we use the OPT-based attack (Cheng et al., 2018) and apply our meta algorithm on top of it. We consider two datasets, MNIST and HIGGS, and use the same models provided by Cheng et al. (2018).4
We compare the average L2 perturbation and the attack success rate in Table 3. The results show that the proposed method significantly boosts the performance of the OPT attack. The overall improvement is more significant than when attacking neural networks. This is mainly because the decision boundary of a GBDT contains more local minima than that of a neural network, as plotted in Figure 1.
4The MNIST model is downloaded from LightGBM with the parameters in https://github.com/Koziev/MNIST_Boosting, which achieves 98.09% accuracy. The HIGGS model achieves 0.8457 accuracy." }, { "heading": "4.3.1 DECISION-BASED ATTACK ON DETECTION MODELS", "text": "To improve the robustness of neural networks, a line of research, including KD+BU (Feinman et al., 2017), LID (Ma et al., 2018), Mahalanobis (Lee et al., 2018) and ML-LOO (Yang et al., 2019), focuses on screening out adversarial examples at test time without touching the training of the original model. Besides the comprehensive evaluation of our attack on various classification models and datasets, we carry out an experimental analysis of our untargeted attack on one state-of-the-art detection model, LID (Ma et al., 2018), on MNIST. To train a detection model on MNIST, we first train a simple classification network composed of two convolutional layers followed by a hidden dense layer with 1024 units. We then apply the C&W attack to this model to generate adversarial examples from the original test samples. Finally, we train LID detectors on the original test samples and the generated adversarial examples with the standard train/test split. The LID detection model achieves 0.99 test accuracy.
The C&W high-confidence attack (Carlini & Wagner, 2017a) has been shown to perform well in attacking various detection models. We therefore compare the average L2 perturbation and attack success rate of three attacking methods, the C&W high-confidence attack, the Sign-OPT attack and the BOSH Sign-OPT attack, in Table 4. At each query, we define the attack to be successful if it fools both the detector model and the original model. The results show that the proposed method significantly boosts the performance of the Sign-OPT attack and achieves much better performance than the C&W high-confidence attack." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose a meta algorithm to boost the performance of existing decision-based attacks. In particular, instead of traversing a single solution path when searching for an adversarial example, we maintain a pool of solution paths to explore important regions.
We show empirically that the proposed algorithm consistently improves the solution quality of many existing decision-based attacks, and can obtain adversarial examples with improved quality not only on neural networks, but also on other decision-based models, such as GBDT and detection-based models." }, { "heading": "B BAYESIAN OPTIMIZATION AND SUCCESSIVE HALVING", "text": "B.1 BAYESIAN OPTIMIZATION (BO)
Bayesian Optimization (BO) has been successfully applied to optimize functions that are non-differentiable or black-box, such as the hyper-parameters of neural networks in AutoML. Its main idea is to sample new points based on past knowledge. Basically, Bayesian optimization finds the optimal value of a given function f : X → R in an iterative manner: at each iteration t, BO uses a probabilistic model p(f | D) to estimate and approach the unknown function f based on the data points observed in previous iterations. Specifically, it samples new data points xt = argmax_x u(x | D1:t−1), where u is the acquisition function and D1:t−1 = {(x1, y1), . . . , (xt−1, yt−1)} are the t − 1 samples queried from f so far. The most widely used acquisition function is the expected improvement (EI):
EI(x) = E_{x∼p}[max(f(x) − f(x+), 0)], (4)
where f(x+) is the value of the best sample generated so far and x+ is the location of that sample, i.e., x+ = argmax_{xi∈D} f(xi).
B.2 THE TREE PARZEN ESTIMATOR (TPE)
TPE (Bergstra et al., 2011) is a Bayesian optimization method proposed for hyper-parameter tuning that uses kernel density estimators (KDE) to approximate the distribution of D instead of modeling the objective function f directly. Specifically, it models p(x|y) and p(y) instead of p(y|x), and defines p(x|y) using two separate KDEs l(x) and g(x):
p(x|y) = l(x) if y ≤ α, and p(x|y) = g(x) if y > α, (5)
where α is a constant between the lowest and largest values of y in D. Bergstra et al. (2011) show that maximizing the ratio l(x)/g(x) is equivalent to optimizing the EI function described in Equation 4 (see Theorem 1 for more detail). In this setting, the computational cost of generating a new data point by KDE grows linearly with the number of data points already generated, while a traditional Gaussian Process (GP) requires cubic time.
Theorem 1 In Equation 5, maximizing the ratio l(x)/g(x) is equivalent to optimizing the Expected Improvement (EI) in Equation 4.
Proof:
The Expected Improvement can also be written as:
EI(x) = E_{x∼p}[max(f(x) − f(x+), 0)] = ∫_{−∞}^{α} (α − y) p(y|x) dy = ∫_{−∞}^{α} (α − y) p(x|y)p(y)/p(x) dy. (6)
Assume that γ = p(y < α). Then:
p(x) = ∫_R p(x|y)p(y) dy = γ l(x) + (1 − γ) g(x). (7)
Therefore,
∫_{−∞}^{α} (α − y) p(x|y)p(y) dy = l(x) ∫_{−∞}^{α} (α − y) p(y) dy = γα l(x) − l(x) ∫_{−∞}^{α} y p(y) dy. (8)
So finally,
EI_α(x) = [γα l(x) − l(x) ∫_{−∞}^{α} y p(y) dy] / [γ l(x) + (1 − γ) g(x)] ∝ (γ + (g(x)/l(x))(1 − γ))^{−1}, (9)
which means that maximizing l(x)/g(x) is equivalent to maximizing the EI function.
B.3 SUCCESSIVE HALVING
The idea behind Successive Halving (Jamieson & Talwalkar, 2016) is easily illustrated by its name: first initialize a set of configurations and perform some computation on each of them, then evaluate the performance of all configurations and discard the worst half; this process continues until there is only one configuration left.
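A minimal Python sketch of successive halving as described above; the per-round budget and the run/evaluate interfaces are illustrative assumptions.

import numpy as np

def successive_halving(configs, run, evaluate, budget_per_round=100):
    """Repeatedly advance every active configuration, then discard the worst half.

    configs: list of configuration states.
    run: callable advancing one configuration by budget_per_round iterations.
    evaluate: callable returning the objective value (lower is better).
    """
    pool = list(configs)
    while len(pool) > 1:
        pool = [run(c, budget_per_round) for c in pool]      # advance all candidates
        scores = np.array([evaluate(c) for c in pool])
        keep = np.argsort(scores)[: max(1, len(pool) // 2)]  # keep the better half
        pool = [pool[i] for i in keep]
    return pool[0]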
BOHB (Falkner et al., 2018) combines HyperBand (derived from Successive Halving) (Li et al., 2016) and TPE to solve the AutoML problem and achieves great success.
However, these methods were originally applied to hyper-parameter tuning problems where the number of parameters to search is small (approximately 10–20); they suffer from the curse of dimensionality when the number of parameters grows larger, and the required computational cost becomes unacceptable. Some works (Moriconi et al., 2019; Wang et al., 2013) try to use BO in high dimensions, but we still found in our experiments that using BO alone cannot converge as well as gradient-based methods." }, { "heading": "C DISCUSSION ABOUT THE NUMBER OF STARTING DIRECTIONS", "text": "We discuss how many starting points are enough for a successful attack. In order to find the best number of starting points, we attack an image with different numbers of starting directions; for each number of starting directions, we also run several trials and average the results to reduce variance. Figure 5 shows the attack on an MNIST image using the Sign-OPT method; we can see the effect that the number of starting directions has on the final converged perturbation. We can also see that the standard deviation is smaller and the final perturbation is lower when resampling by TPE is introduced. This is probably because TPE resampling introduces variance in the intermediate steps, making the algorithm not completely dependent on the starting directions, which also helps increase the probability of finding a better optimum." }, { "heading": "D PARAMETERS FOR DIFFERENT DATASETS", "text": "" }, { "heading": "E BOOSTING DECISION-BASED ATTACK ALGORITHMS", "text": "To demonstrate that our algorithm can consistently boost existing hard-label attack algorithms, we enhance the performance of the three decision-based algorithms described in Section 4.1. All the parameters are the same as in Section 4.1 and we use 30 starting directions for our boosting algorithm. The results are shown in Table 7." }, { "heading": "F RUN-TIME COMPARISON", "text": "We evaluate the efficiency of various algorithms based on the number of queries, which is common practice in this area. In this section we also include a run-time comparison of these methods. We use one Nvidia GTX 1080 Ti to conduct the experiments; the run-time would decrease with multiple GPUs, since our boosting algorithm is easily parallelized (searches with different directions do not depend on each other)." }, { "heading": "G ATTACK SUCCESS RATE UNDER DIFFERENT PERTURBATION", "text": "In this section, we show the results of the Sign-OPT and boosted Sign-OPT attacks on the MNIST and CIFAR-10 datasets. We mainly show how the Attack Success Rate (ASR) changes under different perturbation budgets." }, { "heading": "H TIME COMPLEXITY ANALYSIS", "text": "We briefly analyze the number of queries our algorithm needs in terms of the parameters in Algorithm 1. Generally speaking, in the first few cutting intervals (we consistently resample 3 times in the experiments), our boosting algorithm requires about k times more queries than a single search. This is because we resample new configurations while cutting unpromising ones.
After this, we only cut and do not resample, and run until there is only one configuration left and it converges.
As in Algorithm 1, assume that the cutting interval is M, the cutting rate is s, the cutting interval increase rate is m, and the initial number of starting configurations is k.
In the cutting-and-resampling phase, since we only resample 3 times, we need
k · (M + M(1 + m) + M(1 + m)²) (10)
queries. After that, we only cut the unpromising configurations, so we need
k · M(1 + m)² · (1 + s% + (s%)² + . . .) (11)
queries.
We can see that the main difference between the original algorithm and ours is the initial number of starting configurations k; we also discuss how k influences the results in Section 4.2 and Appendix C." } ]
2019
BOSH: AN EFFICIENT META ALGORITHM FOR DECISION-BASED ATTACKS
SP:cefb35a0bba2e8d3b11b1c81bde283b4e0699da6
[ "This work examines the recently proposed randomized smoothing method for certifying the robustness of neural networks. The authors explain a theoretical framework for analyzing randomized smoothing as a certification method, propose two alternative definitions of robustness (D_MR and D_inf), and prove that using Gaussian noise for smoothing is near “optimal” for L2 robustness, while using exponential noise for smoothing is optimal for L_inf robustness (the authors do this by establishing a lower bound on the noise necessary for smoothing to work). This also leads the authors to the interesting conclusion that randomized smoothing may not be scalable to high dimensional data for L_inf robustness.", "The authors propose a new definition for robustness of random functions. This definition is ideal for analyzing the certified robustness under randomized smoothing techniques. They analyze and show that the Gaussian smoothing is near optimal for \\ell_2 smoothing as the mean maximum error is only off by a factor of log d where d is the dimension from the optimal mean maximum energy. This is the case even under a more strict definition of robustness defined as D_\\infty. Moreover, the authors show that indeed smoothing with an exponential family is optimal under D_\\infty robustness metric with radius measured in \\ell_\\infty." ]
Randomized smoothing, which was recently proved to be a certified defensive technique, has received considerable attention due to its scalability to large datasets and neural networks. However, several important questions still remain unanswered in the existing frameworks, such as (i) whether Gaussian mechanism is an optimal choice for certifying ℓ2-normed robustness, and (ii) whether randomized smoothing can certify ℓ∞-normed robustness (on high-dimensional datasets like ImageNet). To answer these questions, we introduce a unified and self-contained framework to study randomized smoothing-based certified defenses, where we mainly focus on the two most popular norms in adversarial machine learning, i.e., ℓ2 and ℓ∞ norm. We answer the above two questions by first demonstrating that Gaussian mechanism and Exponential mechanism are the (near) optimal options to certify the ℓ2 and ℓ∞-normed robustness. We further show that the largest ℓ∞ radius certified by randomized smoothing is upper bounded by O(1/√d), where d is the dimensionality of the data. This theoretical finding suggests that certifying ℓ∞-normed robustness by randomized smoothing may not be scalable to high-dimensional data. The veracity of our framework and analysis is verified by extensive evaluations on CIFAR10 and ImageNet.
[]
[ { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "arXiv preprint arXiv:1802.00420,", "year": 2018 }, { "authors": [ "Mark Bun", "Thomas Steinke" ], "title": "Concentrated differential privacy: Simplifications, extensions, and lower bounds", "venue": "In Theory of Cryptography Conference,", "year": 2016 }, { "authors": [ "Mark Bun", "Jonathan Ullman", "Salil Vadhan" ], "title": "Fingerprinting codes and the price of approximate differential privacy", "venue": "SIAM Journal on Computing,", "year": 2018 }, { "authors": [ "Kyunghyun Cho", "Bart Van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "venue": "arXiv preprint arXiv:1406.1078,", "year": 2014 }, { "authors": [ "Jeremy M Cohen", "Elan Rosenfeld", "J Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "arXiv preprint arXiv:1902.02918,", "year": 2019 }, { "authors": [ "Dimitrios Diochnos", "Saeed Mahloujifar", "Mohammad Mahmoody" ], "title": "Adversarial risk and robustness: General definitions and implications for the uniform distribution", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Krishnamurthy Dvijotham", "Robert Stanforth", "Sven Gowal", "Timothy A Mann", "Pushmeet Kohli" ], "title": "A dual approach to scalable verification of deep networks", "venue": "In UAI,", "year": 2018 }, { "authors": [ "Cynthia Dwork", "Frank McSherry", "Kobbi Nissim", "Adam Smith" ], "title": "Calibrating noise to sensitivity in private data analysis", "venue": "In Theory of cryptography conference,", "year": 2006 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Sven Gowal", "Krishnamurthy Dvijotham", "Robert Stanforth", "Rudy Bunel", "Chongli Qin", "Jonathan Uesato", "Timothy Mann", "Pushmeet Kohli" ], "title": "On the effectiveness of interval bound propagation for training verifiably robust models", "venue": "arXiv preprint arXiv:1810.12715,", "year": 2018 }, { "authors": [ "Moritz Hardt", "Kunal Talwar" ], "title": "On the geometry of differential privacy", "venue": "In Proceedings of the forty-second ACM symposium on Theory of computing,", "year": 2010 }, { "authors": [ "Warren He", "James Wei", "Xinyun Chen", "Nicholas Carlini", "Dawn Song" ], "title": "Adversarial example defense: Ensembles of weak defenses are not strong", "venue": "In 11th USENIX Workshop on Offensive Technologies (WOOT", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Mathias Lecuyer", "Vaggelis Atlidakis", "Roxana Geambasu", "Daniel Hsu", "Suman Jana" ], "title": "Certified robustness to adversarial examples with differential privacy", "venue": "arXiv preprint arXiv:1802.03471,", "year": 2018 }, { "authors": [ "Bai Li", "Changyou Chen", "Wenlin Wang", "Lawrence Carin" ], "title": "Second-order adversarial attack and certifiable robustness", "venue": "arXiv preprint arXiv:1809.03113,", "year": 2018 }, { "authors": [ "Matthew Mirman", "Timon Gehr", "Martin Vechev" ], "title": "Differentiable abstract interpretation for provably robust neural networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Francesco Orabona", "Dávid Pál" ], "title": "Optimal non-asymptotic lower bound on the minimax regret of learning with expert advice", "venue": "arXiv preprint arXiv:1511.02176,", "year": 2015 }, { "authors": [ "Aditi Raghunathan", "Jacob Steinhardt", "Percy Liang" ], "title": "Certified defenses against adversarial examples", "venue": "arXiv preprint arXiv:1801.09344,", "year": 2018 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree search", "venue": "Nature,", "year": 2016 }, { "authors": [ "Thomas Steinke", "Jonathan Ullman" ], "title": "Between pure and approximate differential privacy", "venue": "arXiv preprint arXiv:1501.06095,", "year": 2015 }, { "authors": [ "Thomas Steinke", "Jonathan Ullman" ], "title": "Between pure and approximate differential privacy", "venue": "Journal of Privacy and Confidentiality,", "year": 2016 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Jonathan Uesato", "Brendan O'Donoghue", "Pushmeet Kohli", "Aaron Oord" ], "title": "Adversarial risk and the dangers of evaluating against weak attacks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Roman Vershynin" ], "title": "High-dimensional probability: An introduction with applications in data science, volume 47", "venue": null, "year": 2018 }, { "authors": [ "Shiqi Wang", "Kexin Pei", "Justin Whitehouse", "Junfeng Yang", "Suman Jana" ], "title": "Efficient formal safety analysis of neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Eric Wong", "Zico Kolter" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope", "venue": "In International Conference on Machine Learning,", "year": 2018 } ]
[ { "heading": null, "text": "√ d), where d is the dimensionality of the data. This theoretical finding sug-\ngests that certifying `∞-normed robustness by randomized smoothing may not be scalable to high-dimensional data. The veracity of our framework and analysis is verified by extensive evaluations on CIFAR10 and ImageNet." }, { "heading": "1 INTRODUCTION", "text": "The past decade has witnessed tremendous success of deep learning in handling various learning tasks like image classification (Krizhevsky et al., 2012), natural language processing (Cho et al., 2014), and game playing (Silver et al., 2016). Nevertheless, a major unresolved issue of deep learning is its vulnerability to adversarial samples that are almost indistinguishable from natural samples to humans but can mislead deep neural networks (DNNs) to make wrong predictions with high confidence (Szegedy et al., 2013; Goodfellow et al., 2014). This phenomenon, referred to as adversarial attack, is considered to be one of the biggest threats to the deployment of many deep learning systems. Thus, a great deal of effort has been devoted to developing defensive techniques for it. However, the majority of the existing defenses are of heuristic nature (i.e., without any theoretical guarantees), implying that they may be ineffective against stronger attacks. Recent works (He et al., 2017; Athalye et al., 2018; Uesato et al., 2018) have confirmed this concern, and showed that most of those heuristic defenses actually fail to defend stronger adaptive attacks. This forces us to shift our attentions to certifiable defenses as they can classify all the samples in a predefined neighborhood of the natural samples with a theoretically-guaranteed error bound. Among all existing certifiable defensive techniques, randomized smoothing emerges as the most popular one due to its scalability to large datasets and arbitrary networks. Remarkably, using the Gaussian mechanism for randomized smoothing, Cohen et al. (2019) successfully certify 49% accuracy on the original ImageNet dataset under adversarial perturbations with `2 norm less than 0.5. Despite these successes, there are still several unanswered questions regarding randomized smoothing based certified defenses. One of such questions is, why should Gaussian noise be used for randomized smoothing to certify `2-normed robustness, and is Gaussian mechanism the best option? Another important question is regarding the generalizability of this method to other norms, especially the `∞ norm. If randomized smoothing can be used to certify `∞-normed robustness, what mechanism is the optimal choice?\nTo shed light on the above questions, we propose in this paper a unified and self-contained framework for randomized smoothing-based certified defenses. We look at the problem from a differential privacy’s point of view and present two types of robustness in this framework. One is motivated by\n-differential privacy ( -DP), which uses∞-divergence to measure the distance between the probabilities of predictions on randomized natural samples and randomized adversarial samples and is therefore called D∞ robustness. The other is inspired by -zero concentrated differential privacy ( -zCDP) that uses the Maximal Relative Rényi (MR) divergence as the probability distance measurement and is called DMR robustness. For both of them, we focus on certifying robustness in either `2 or `∞ norm by randomized smoothing. Specifically, our contributions are five-fold:\n1. 
We propose a unified and self-contained framework for certifying D∞ and/or DMR robustness in ℓ2 and ℓ∞ norms by randomized smoothing.
2. In our framework, we demonstrate that the Gaussian mechanism is a near optimal choice for certifying DMR robustness in ℓ2 norm, and the robust radius is O(1).
3. We also prove that an exponential mechanism is the optimal choice for certifying D∞ robustness in ℓ∞ norm, but the robust radius is only O(1/d), making it unscalable to high-dimensional data.
4. We show that the Gaussian mechanism is also a near optimal choice for certifying DMR robustness in ℓ∞ norm, but the robust radius is O(1/√(d log d)), making it also hardly scalable to high-dimensional data.
5. The largest robust ℓ∞ radius that can be certified by randomized smoothing to achieve DMR robustness is upper bounded by O(1/√d).
Table 1 summarizes the (near) optimal mechanisms of our framework for certifying the ℓ2 and ℓ∞-normed robustness." }, { "heading": "2 RELATED WORK", "text": "There are three main approaches for certified defenses. The first approach formulates the task of adversarial verification as an optimization problem and solves it by relaxations (Dvijotham et al., 2018; Raghunathan et al., 2018; Wong & Kolter, 2018). The second approach uses different techniques, such as interval analysis and abstract interpretations, to maintain an outer approximation of the output at each layer through the network (Mirman et al., 2018; Wang et al., 2018; Gowal et al., 2018). The third approach uses randomized smoothing to certify robustness, and is gaining popularity recently due to its strong scalability (Lecuyer et al., 2018; Li et al., 2018; Cohen et al., 2019) to large datasets and arbitrary networks. For this approach, Lecuyer et al. (2018) showed that randomized smoothing can certify the ℓ2 and ℓ1-normed robustness by using inequalities from differential privacy. Li et al. (2018) achieved a stronger guarantee on the ℓ2-normed robustness using tools from information theory. Cohen et al. (2019) further obtained a tight guarantee on the ℓ2-normed robustness using Gaussian noise. A remaining issue in all of these works is that they did not answer questions such as why Gaussian noise is used to certify the ℓ2-normed robustness and what the best mechanism is to certify the ℓ∞-normed robustness. To answer these questions, we present in this paper a new general framework to study randomized smoothing-based certified defenses." }, { "heading": "3 ROBUSTNESS MOTIVATED BY DIFFERENTIAL PRIVACY", "text": "In this section, we introduce our framework. Let x be a data sample and y ∈ Y be its label, where Y is the label set. We denote by f(·) a deterministic classifier with prediction f(x) for any data sample x. If there exists an x′ in a small ℓp ball centered at x with f(x′) ≠ f(x), x′ is viewed as an adversarial sample.
Definition 1 (Randomized Classifier (Cohen et al., 2019)). Given an input x, the prediction of a randomized classifier g(·) is defined as
argmax_{c∈Y} P(g(x) = c).
Specifically, for a randomized smoothing classifier g(x) = f(x + Z), where Z is a random vector and f(·) is a deterministic classifier, the prediction on x is the class c whose region S ≜ {x̃ ∈ R^d : f(x̃) = c} has the largest probability measure under the distribution of x + Z (x̃ ∼ p(x + Z)).
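As an illustration of Definition 1, the following is a minimal Monte-Carlo sketch of a smoothed prediction: it approximates argmax_c P(g(x) = c) by a majority vote over noisy copies of x. The base classifier f, the noise scale, and the number of samples are illustrative assumptions.

import numpy as np

def smoothed_predict(f, x, sigma=0.5, n_samples=1000, rng=None):
    """Monte-Carlo estimate of argmax_c P(f(x + Z) = c) with Z ~ N(0, sigma^2 I).

    f: deterministic base classifier mapping a batch of inputs to integer labels.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    labels = f(x[None, ...] + noise)   # labels of the randomized inputs
    counts = np.bincount(labels)       # votes per class
    return counts.argmax()             # the most probable class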
Before introducing our framework, we first recall the definition of robustness for a deterministic classifier from (Diochnos et al., 2018).
Definition 2 (Robustness (Diochnos et al., 2018)). For a given classifier f, a sample x, and some norm ‖·‖, f is (r, ‖·‖)-(error-region) robust on the sample x if
∀x′ ∈ B(x, r), f(x) = f(x′), (1)
where B(x, r) is the ball centered at x with norm ‖·‖ and radius r.
Note that in Definition 2, the classifier is assumed to be deterministic. To generalize the concept of robustness to randomized classifiers (see Definition 1), we define a relaxed version of the (error-region) robustness. Since g(x) is a random value, instead of using equality, we measure the difference between g(x) and g(x′) by a certain divergence. This leads us to the following definition, which is a basic concept in our framework that will be used throughout the paper.
Definition 3 (Relaxed Robustness). For a given (randomized) classifier g(·), a sample x, and some norm ‖·‖, the classifier g is (r, D, ‖·‖, ε)-(error-region) robust on x if
∀x′ ∈ B(x, r), max{D(g(x), g(x′)), D(g(x′), g(x))} ≤ ε, (2)
where D is some divergence metric between two probability distributions. The max function is used to ensure that the measurement is symmetric.
Compared with Definition 2, there are two additional terms in Definition 3: ε represents the “distance” or difference between the distributions of g(x) and g(x′). When ε is small, we expect the distributions of the predictions on x and x′, i.e., g(x) and g(x′), to be almost the same, which is just a generalization of the equality in Definition 2. D is some divergence measure between two probability distributions. In this paper, we use two types of divergence, the ∞-Divergence and the Maximal Relative Rényi Divergence, to measure the distance between two probability distributions. Correspondingly, we have two types of robustness, called D∞ and DMR robustness.
Definition 4 (∞-Divergence). The ∞-Divergence D∞ of distributions P and Q is defined as
D∞(P‖Q) = sup_{x∈supp(Q)} log(P(x)/Q(x)),
where supp(Q) is the support of the distribution Q.
Definition 5 (Maximal Relative Rényi Divergence). The Maximal Relative Rényi Divergence DMR(P‖Q) of distributions P and Q is defined as
DMR(P‖Q) = max_{α∈(1,∞)} Dα(P‖Q)/α,
where Dα(P‖Q) is the Rényi divergence between P and Q, defined as
Dα(P‖Q) = (1/(α − 1)) log E_{x∼Q}[(P(x)/Q(x))^α].
Definition 6 (D∞ Robustness). A randomized smoothing mechanism A(·) (including classifiers) is a (r, D∞, ‖·‖, ε)-robust mechanism if
∀x′ ∈ B(x, r), max{D∞(A(x), A(x′)), D∞(A(x′), A(x))} ≤ ε, (3)
where ‖·‖ is the norm of the ball B(x, r). If a randomized smoothing classifier g(·) satisfies Eq. (3), it is a (r, D∞, ‖·‖, ε)-robust classifier, or it certifies D∞ Robustness.
D∞ Robustness is motivated by the notion of ε-differential privacy (ε-DP) (Dwork et al., 2006). To achieve ε-DP for a randomized algorithm, we can use several mechanisms such as the Laplacian mechanism or the Exponential mechanism (see (Dwork et al., 2014) for details). However, it is known that adding Gaussian noise often does not lead to ε-DP, but rather to (ε, δ)-DP (Dwork et al., 2014), which has an additional parameter δ and is thus harder to incorporate in our framework. To alleviate this issue, we employ the Maximal Relative Rényi Divergence as the probability distance measure to define another type of robustness, namely DMR robustness.
Definition 7 (DMR Robustness). A randomized smoothing mechanism A(·) is a (r, DMR, ‖·‖, ε)-robust mechanism if
∀x′ ∈ B(x, r), max{DMR(A(x), A(x′)), DMR(A(x′), A(x))} ≤ ε. (4)
If a randomized smoothing classifier g(·) satisfies Eq. (4), it is a (r, DMR, ‖·‖, ε)-robust classifier, or it certifies DMR Robustness.
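To make Definitions 4 and 5 concrete, here is a small numerical sketch computing D∞ and a grid approximation of DMR for two discrete distributions. The grid over α is an illustrative assumption, since the exact maximum over α ∈ (1, ∞) need not be attained on a finite grid.

import numpy as np

def d_inf(p, q):
    """Infinity-divergence D_inf(P||Q) = sup_{x in supp(Q)} log(P(x)/Q(x))."""
    supp = q > 0
    return np.max(np.log(p[supp] / q[supp]))

def renyi(p, q, alpha):
    """Renyi divergence D_alpha(P||Q) for discrete P, Q (supp(P) within supp(Q))."""
    supp = q > 0
    return np.log(np.sum(q[supp] * (p[supp] / q[supp]) ** alpha)) / (alpha - 1.0)

def d_mr(p, q, alphas=np.linspace(1.01, 50.0, 500)):
    """Grid approximation of D_MR(P||Q) = max_alpha D_alpha(P||Q) / alpha."""
    return max(renyi(p, q, a) / a for a in alphas)

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
print(d_inf(p, q), d_mr(p, q))   # here D_MR is well below D_inf, reflecting its relaxed nature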
DMR Robustness is inspired by the notion of zero-Concentrated Differential Privacy (zCDP) (Bun & Steinke, 2016), whose connection to DP is shown in the following theorem.
Theorem 8 ((Bun & Steinke, 2016)). Let P and Q be two probability distributions satisfying D∞(P‖Q) ≤ ε and D∞(Q‖P) ≤ ε. Then DMR(P‖Q) ≤ ε²/2.
Theorem 8 indicates that DMR-robustness is a relaxed version of D∞-robustness.
Remark (Connections between D∞ & DMR Robustness and Standard Definitions). Although D∞ & DMR Robustness are seemingly new concepts defined in this paper, they have several connections with the existing frameworks of Lecuyer et al. (2018) and Cohen et al. (2019). Specifically, as long as D∞ robustness is certified, the expected output stability bound in Lecuyer et al. (2018) is guaranteed with δ′ = 0. And if DMR robustness is certified, the expected output stability bound in Lecuyer et al. (2018) is guaranteed with ε′ = (c + 1)√ε and δ′ = exp(−c²/4), according to Theorem 10. Besides, the “scale” of the robust radius certified by our framework is similar to the “scale” of the robust radius in Cohen et al. (2019), according to Corollary 11.
Theorem 9 (Postprocessing Property). Let g(x) = f(A(x)) be a randomized classifier, where f(·) is any deterministic function (classifier). g(·) is (r, D, ‖·‖, ε)-robust if A(·) is (r, D, ‖·‖, ε)-robust (where D includes D∞ and DMR).
The above theorem is derived from the post-processing properties of DP and zCDP. A detailed proof (explanation) is given in Appendix B. This property allows us to concentrate only on the randomized smoothing mechanism A without considering the specific form of the deterministic function (classifier) f(·). Next, we consider the cases of certifying D∞ or DMR robustness in the ℓ2 and ℓ∞ norms.
3.1 CERTIFYING ℓ2-NORMED ROBUSTNESS
The following theorem shows that randomized smoothing by the Gaussian mechanism is (r, DMR, ‖·‖2, ·)-robust.
Theorem 10. Let f be any classifier and g(x) = f(x + z) be its corresponding randomized classifier for samples x ∈ R^d, where z ∼ N(0, σ²I_d). Then g(·) is (r, DMR, ‖·‖2, r²/(2σ²))-robust on any x. Moreover, let ε denote r²/(2σ²). Then, for any λ > 0 and any measurable set S ≠ ∅, the following holds with probability at least 1 − exp(−λ²/(4ε)):
log[P(g(x) ∈ S)/P(g(x′) ∈ S)] ≤ λ + √ε. (5)
That is, when λ = c√ε, we have log[P(g(x) ∈ S)/P(g(x′) ∈ S)] ≤ (c + 1)√ε with probability 1 − exp(−c²/4). In practice, c = 3 is enough to achieve a high probability.
Corollary 11. Adding Gaussian noise z ∼ N(0, σ²I_d) can defend against any x′ ∈ B(x, r = √(2ε)·σ) satisfying DMR(g(x)‖g(x′)) ≤ ε with probability at least 1 − exp(−c²/4). Furthermore, √ε can be calculated (bounded) by (log pa − log pb)/(2(1 + c)), or (log(pa/(1 − pa)))/(2(1 + c)) in the binary case, where pa and pb are respectively the probabilities of the randomized classifier g(·) returning the most probable class ca and the runner-up class cb on input x.
Detailed proofs for Theorem 10, Corollary 11, and all the following theorems are provided in Appendix B. From Theorem 9, we can see that for classifiers of the form g(x) = f(x + z), we only need to prove that the randomized mechanism A(x) = x + z (z ∼ N(0, σ²I_d)) is (r, DMR, ‖·‖2, r²/(2σ²))-robust. Also, the connection between ε and pa, pb can be derived for all ε or √ε (in the certified radii) as in Corollary 11.
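A small numerical sketch of the certified ℓ2 radius given by Corollary 11: bound √ε from the class probabilities pa and pb, then report r = √(2ε)·σ. The probability values and the choice c = 3 are illustrative assumptions (a rigorous implementation would use confidence intervals on pa and pb).

import numpy as np

def certified_l2_radius(p_a, p_b, sigma, c=3.0):
    """Radius r = sqrt(2*eps)*sigma with sqrt(eps) = (log p_a - log p_b)/(2*(1 + c))."""
    if p_a <= p_b:
        return 0.0                        # abstain: no margin between the top two classes
    sqrt_eps = (np.log(p_a) - np.log(p_b)) / (2.0 * (1.0 + c))
    return np.sqrt(2.0) * sqrt_eps * sigma

# Example with hypothetical class probabilities from a smoothed classifier.
print(certified_l2_radius(p_a=0.90, p_b=0.05, sigma=0.5))   # about 0.26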
Note that a similar theorem has also been proved by Cohen et al. (2019), but there are some major differences between our framework and theirs. Specifically, our framework certifies the robustness with a probability of failure, and the certified radius r depends on the c that controls this failure probability: a smaller c yields a larger r compared to those in Cohen et al. (2019), and vice versa. Moreover, in our framework, we show that the Gaussian mechanism is a near optimal option by providing the lower bound below for all possible noises that can certify the ℓ2-normed DMR robustness.
Next, we consider the following unanswered question (i.e., the first question). Since there are infinitely many ways of sampling z, a natural problem is to determine whether the Gaussian mechanism is the optimal option to certify the ℓ2-normed DMR robustness. To answer this question, we first give a lower bound on the magnitude of the noise z added in the randomized smoothing mechanism A(x) = x + z to ensure that A(x), as well as f(A(x)), is (r, DMR, ‖·‖2, ε)-robust. If the magnitude of Gaussian noise is close to this lower bound, then the Gaussian mechanism is considered “near optimal”.
Theorem 12 (Lower Bound of the Noise). For any ε ≤ O(1), if there is a (2r, DMR, ‖·‖2, ε/2)-robust randomized smoothing mechanism A(x) = x + z : [0, r/√d]^d → [0, r/√d]^d such that for all x ∈ [0, r/√d]^d,
E[‖z‖∞] = E_A‖A(x) − x‖∞ ≤ α,
for some α ≤ O(1), then it must be true that α ≥ Ω(r/√ε). In other words, Ω(r/√ε) is a lower bound on the expected ℓ∞ norm of the random noise.
Theorem 12 indicates that the expected ℓ∞ norm of the added random noise must be at least Ω(r/√ε) to guarantee (r, DMR, ‖·‖2, ε)-robustness. For the Gaussian mechanism, the expected ℓ∞ norm is O(σ√(log d)) (Orabona & Pál, 2015), which is O((r/√ε)·√(log d)) according to Corollary 11. This means that the Gaussian mechanism is near optimal here (i.e., up to an O(√(log d)) factor). Equivalently, if we fix the magnitude of the expected ℓ∞-norm of the added noise as α, the largest radius r that can be certified by any (r, DMR, ‖·‖2, ε)-robust randomized smoothing mechanism is upper bounded by O(α√ε), which is also close to the robust radius guaranteed by the Gaussian mechanism (up to an O(√(log d)) factor)." }, { "heading": "3.2 CERTIFYING ℓ∞-NORMED ROBUSTNESS", "text": "Previous work on randomized smoothing-based certified defenses (Cohen et al., 2019; Li et al., 2018) mainly uses Gaussian noise to certify the ℓ2-normed robustness. Thus, another natural question (i.e., the second question) is to determine whether randomized smoothing can use some mechanism to certify the ℓ∞-normed robustness. In this section, we consider this question using our general framework.
Before extending our result to the ℓ∞-normed case, we first recall the ℓ2-normed case and note the form of the density function of Gaussian noise: p(z) ∝ exp(−‖z‖2²/σ²). Based on this, we conjecture that, to certify ℓ∞-normed robustness, we can sample the noise using an exponential mechanism:
p(z) ∝ exp(−‖z‖∞/σ). (6)
We show in the following theorem that randomized smoothing via (6) certifies (r, DMR, ‖·‖∞, ·)-robustness, which can be considered an extension of the ℓ2-normed case. Moreover, we can prove that it is (r, D∞, ‖·‖∞, ·)-robust. However, the certified radius r is O(1/d), which implies that it is unscalable to high-dimensional data.
Theorem 13. Let f be any classifier and g(x) = f(x + z) be its corresponding randomized classifier for samples x ∈ R^d, where the noise z ∼ p(z) in (6). Then g(·) is (r, DMR, ‖·‖∞, r²/(2σ²))-robust. Moreover, it is (r, D∞, ‖·‖∞, r/σ)-robust.
Remark 14. Due to the high dimensionality of samples in real-world applications, directly sampling z ∼ p(z) by the Markov Chain Monte Carlo (MCMC) algorithm requires a large number of random walks, which can incur high computational cost. To alleviate this issue, we adopt an efficient sampling method from (Steinke & Ullman, 2015) that first samples R from Gamma(d + 1, σ) and then samples z from [−R, R]^d uniformly. The complexity of this sampling algorithm is only O(d).
Comparing Theorems 10 and 13, we can see that randomized smoothing via (6) can certify a region with (almost) the same radius as that of the Gaussian distribution in the ℓ2-normed case, due to the similarity of their density functions and robustness guarantees. In the following theorem we show, however, that the magnitude of the noise added by (6) is much larger than that of the Gaussian distribution in the ℓ2-normed case.
Theorem 15. For the distribution in (6) that guarantees Theorem 13, the following holds:
E_z[‖z‖∞] = dσ. (7)
Note that compared with the Gaussian noise added in Theorem 10, which satisfies E_z[‖z‖∞] = O(σ√(log d)), the expected ℓ∞-norm of the distribution in (6) is proportional to the dimensionality d of the data, which is quite large. This means that for any image data, at least one pixel will be perturbed by a magnitude of dσ, which will completely ruin the accuracy of the classification network. Conversely, if we want the noise to have magnitude O(1), σ needs to be O(1/d), and so does the robust radius.
Theorem 15 is a somewhat negative result for randomized smoothing using distribution (6) to certify the ℓ∞-normed robustness. Thus, an immediate question is whether the exponential mechanism is the right choice to certify the ℓ∞-normed robustness. The following theorem shows that for any (r, D∞, ‖·‖∞, r/σ)-robust randomized smoothing mechanism, the expected ℓ∞-norm of the added noise is lower bounded by Ω(dσ). Thus, combining the following theorem with Theorem 15, we can conclude that the exponential mechanism is actually an optimal choice to certify D∞ robustness.
Theorem 16. For any (2r, D∞, ‖·‖∞, ε/2)-robust mechanism A(x) = x + z : [0, r]^d → [0, r]^d such that E[‖z‖∞] = E_A‖A(x) − x‖∞ ≤ α, ∀x ∈ [0, r]^d, it must be true that α ≥ Ω(rd/ε).
From Theorem 16 we can see that, for any (r, D∞, ‖·‖∞, ε)-robust randomized smoothing mechanism, if we fix the expectation of the ℓ∞-norm of the added noise as α, the largest ℓ∞ radius that can be certified is upper bounded by O(αε/d). Compared with the ℓ2-normed case (Theorem 12), there is an additional factor of O(1/d), which makes it unscalable to high-dimensional data. Equivalently, if we want the same radius to be certified as in Theorem 10, the expected ℓ∞-norm of the added noise needs to be at least Ω(rd/ε), which will be too large for any image data.
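To see this magnitude gap numerically, the following sketch compares the average ℓ∞ norm of Gaussian noise with that of the exponential mechanism (6) at the CIFAR-10 dimensionality; the sample count and noise scale are illustrative assumptions.

import numpy as np

d, sigma, n = 3072, 0.01, 200
rng = np.random.default_rng(0)

# Gaussian mechanism: E||z||_inf grows like sigma * sqrt(log d).
gauss = rng.normal(0.0, sigma, size=(n, d))

# Exponential mechanism p(z) ~ exp(-||z||_inf / sigma): E||z||_inf = d * sigma.
R = rng.gamma(shape=d + 1, scale=sigma, size=(n, 1))
expo = rng.uniform(-1.0, 1.0, size=(n, d)) * R

print(np.abs(gauss).max(axis=1).mean())  # roughly sigma * sqrt(2 log d), about 0.04
print(np.abs(expo).max(axis=1).mean())   # roughly d * sigma = 30.72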
The less-than-ideal lower bound in Theorem 16 is for D∞-robustness. Since DMR-robustness is more relaxed than D∞-robustness, a natural question is whether the lower bound can be improved by switching to DMR-robustness. Unfortunately, the following theorem shows that a similar phenomenon still holds for DMR-robustness.
Theorem 17. For any (2r, DMR, ‖·‖∞, ε/2)-robust mechanism A(x) = x + z : [0, r]^d → [0, r]^d such that
E[‖z‖∞] = E_A‖A(x) − x‖∞ ≤ α, ∀x ∈ [0, r]^d,
it must be true that α ≥ Ω(r√d/√ε).
From Theorems 17 and 15 we can see that, under the definition of (2r, DMR, ‖·‖∞)-robustness, adding noise according to (6) is not near optimal. The following theorem shows that in this case, the Gaussian mechanism is actually a near optimal choice.
Theorem 18. Let r, ε > 0 be some fixed numbers and A(x) = x + z with z ∼ N(0, (dr²/(2ε))·I_d). Then A(·) is (r, DMR, ‖·‖∞, ε)-robust, and E[‖z‖∞] = E_A‖A(x) − x‖∞ is upper bounded by O(r√(d log d)/√ε).
From Theorems 17 and 18, we can conclude that for all randomized smoothing mechanisms that are (·, DMR, ‖·‖∞, ε/2)-robust, if the expected ℓ∞-norm of the added noise is fixed to be α, the largest radius that can be certified is upper bounded by O(α√ε/√d), and the largest radius that can be certified by the Gaussian mechanism is O(1/√(d log d)) (with σ being Ω(α/√(log d))). If α and ε are both set to be O(1), the largest radius that can be certified using the Gaussian mechanism to achieve DMR-robustness is greater than the largest radius that can be certified to achieve D∞-robustness by at least a factor of O(√(d/log d)). This is reasonable since the definition of DMR-robustness is more relaxed. Obviously, there is a trade-off between the rigorousness of the notion of robustness and the largest certified robust radius: when the robustness notion is relaxed, the largest certified radius increases. We will investigate this trade-off further in future research." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASETS AND MODELS", "text": "The performance of our framework is verified on two widely used datasets, CIFAR10 and ImageNet∗. Following Cohen et al. (2019), we use a 110-layer residual network and the classical ResNet-50 as the base models for CIFAR10 and ImageNet, respectively. Note that it may be difficult for the models to classify noisy images without seeing any noisy samples during training; thus, we train all the models by adding appropriate Gaussian noise to the training images. The certified accuracy at radius R is defined as the fraction of the test set whose certified radii are larger than R†. The value of ε in all our derived certified radii can be calculated from pa (or pa and pb), as shown in the proof of Corollary 11. It is also worth noting that we do not compare our results with Cohen et al. (2019) in the experiments, because the two frameworks endow robustness with different definitions. Moreover, our work does not aim at improving the tightness of the guarantee on ℓ2-normed robustness, but at presenting a general and self-contained framework to study the remaining issues, such as the optimality of the Gaussian mechanism and the specific mechanisms needed to certify ℓ∞-normed robustness." }, { "heading": "4.2 EMPIRICAL RESULTS", "text": "Certifying the ℓ2-normed robustness. As explained in the previous section, the Gaussian mechanism is a near optimal option to certify the ℓ2-normed robustness, so we mainly evaluate its performance in our framework. We first fix the value of σ in the Gaussian mechanism and show the certified accuracy of classifiers trained with varied Gaussian noise in Figure 1. As shown in Figure 1, using σ = 0.50 Gaussian noise to train the classifier is a good setting here.
So in Figure 2, we evaluate the Gaussian mechanism with different σ values on the classifier trained with σ = 0.50 Gaussian noise. Overall, on CIFAR-10, our framework can certify approximately 20% accuracy under ℓ2 = 1.0 perturbation‡. We also show the results on ImageNet in Figures 4 and 5 in Appendix C.
∗Pixel value range is [0.0, 1.0]. †For more details, please refer to (Cohen et al., 2019). ‡On CIFAR-10, an ℓ2 = 1.0 perturbation allows roughly a 4/255 perturbation on every pixel.
Certifying the ℓ∞-normed robustness. To certify the ℓ∞-normed robustness, we evaluate the performance of the Exponential mechanism under the definition of D∞-robustness and the Gaussian mechanism under the definition of DMR-robustness. As shown in Figure 3, the ℓ∞ radii that can be certified by the Gaussian mechanism are about 10 ∼ 20 times (i.e., O(√(d/log d)) with d = 3072, as shown by our theory) larger than the ℓ∞ radii certified by the exponential mechanism. On ImageNet, as shown in Figure 6 in Appendix C, the robust radii are less than 1/255 (due to the scaling in O(1/d) or O(1/√(d log d))), indicating that certifying the ℓ∞-normed robustness by randomized smoothing may not be applicable to high-dimensional data." }, { "heading": "5 CONCLUSION", "text": "In this paper, we present a general framework for certifying two types of robustness (D∞ and DMR-robustness) in the ℓ2 and ℓ∞ norms by randomized smoothing. Within our framework, we answer the remaining questions in previous studies on randomized smoothing-based certifiable defenses, i.e., the optimality of the Gaussian mechanism and the possibility of certifying the ℓ∞-normed robustness. Specifically, we demonstrate that (i) the Gaussian mechanism is a near optimal option to certify DMR-robustness in ℓ2 norm, by giving a lower bound for all DMR-robust mechanisms, with certified radii scaling as O(1); (ii) an exponential mechanism is the optimal choice for certifying D∞-robustness in ℓ∞ norm, with certified radii scaling as O(1/d); (iii) the Gaussian mechanism is a near optimal option to certify DMR-robustness in ℓ∞ norm, with certified radii scaling as O(1/√(d log d)); (iv) the largest ℓ∞ radius that can be certified by randomized smoothing in our framework is upper bounded by O(1/√d), indicating that randomized smoothing may not be scalable to high-dimensional data in terms of certifying the ℓ∞-normed robustness." }, { "heading": "A DIFFERENTIAL PRIVACY BACKGROUND", "text": "In this section, we briefly introduce the concepts of differential privacy used in this paper.
Definition 19 (Differential Privacy (DP) (Dwork et al., 2006)). Given a data universe X, we say that two datasets D, D′ ⊆ X are neighbors if they differ in only one entry, which is denoted by D ∼ D′. A randomized algorithm A is ε-differentially private (DP) if for all neighboring datasets D, D′ the following holds:
D∞(A(D)‖A(D′)) ≤ ε.
Intuitively, DP ensures that an adversary cannot infer whether or not a participant (data sample) is present in dataset D, because the distribution of A(D) is almost the same as that of A(D′); in other words, DP mechanisms are robust to the change of one sample. Now consider the case where D is a dataset of size one (i.e., a single data sample). Then DP ensures that the distributions of A(D) and A(D′) are almost the same, where D′ is just any other data sample. Inspired by the notion of DP, we define D∞ robustness in Definition 6.
Definition 20 (Zero-Concentrated Differential Privacy (zCDP)).
A randomized mechanism A is called ε-zCDP if for all D ∼ D′,
max{DMR(A(D)‖A(D′)), DMR(A(D′)‖A(D))} ≤ ε. (8)
zCDP is a relaxed version of DP, according to Theorem 8. Motivated by zCDP, we define DMR robustness in Definition 7." }, { "heading": "B OMITTED PROOFS", "text": "Proof of Theorem 9. This theorem can be easily proved by the following lemma.
Lemma 21 ((Bun & Steinke, 2016)). Let P and Q be two distributions on Ω and let f : Ω → Θ be a deterministic function. Let f(P) and f(Q) denote the distributions on Θ induced by applying f to P and Q respectively. Then we have
Dα(f(P)‖f(Q)) ≤ Dα(P‖Q).
A similar post-processing property also holds when α = ∞ (Dwork et al., 2006). Therefore, if A(·) satisfies Definition 6 or 7, then f(A(·)) satisfies Definition 6 or 7 for any deterministic function (classifier) f(·).
Proof of Theorem 10. By Theorem 9, we only need to show that the randomized smoothing mechanism A(x) = x + z is (r, DMR, ‖·‖2, r²/(2σ²))-robust, which can be proved by the following lemma.
Lemma 22 ((Bun & Steinke, 2016)). Let x, x′ ∈ R^d and α ∈ [1, ∞). Then
Dα(N(x, σ²I_d)‖N(x′, σ²I_d)) = α‖x − x′‖2²/(2σ²).
Thus, for all x′ ∈ B(x, r), we have DMR(A(x)‖A(x′)) ≤ r²/(2σ²).
Next we prove (5). To prove this inequality, we first define the loss random variable.
Definition 23 ((Bun & Steinke, 2016)). Let Y and Y′ be random variables on Ω. We define the loss random variable between Y and Y′, denoted Z = Loss(Y‖Y′), as follows: define a function F : Ω → R by F(y) = log(P[Y = y]/P[Y′ = y]); then Z is distributed according to F(Y).
By this we can write Z = Loss(g(x)‖g(x′)) and rewrite DMR(g(x)‖g(x′)) ≤ ε as
∀α ∈ (1, ∞], E[e^{(α−1)Z}] ≤ e^{(α−1)·(r²/(2σ²))·α}.
This implies that Z is sub-Gaussian. Using the tail bound for sub-Gaussian variables (Vershynin, 2018), we have
P[Z > λ + ε] ≤ exp(−λ²/(4ε)), (9)
where ε = r²/(2σ²).
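As a quick sanity check on Lemma 22, the following sketch estimates the Rényi divergence between two shifted one-dimensional Gaussians by Monte Carlo and compares it with the closed form αμ²/(2σ²); the parameter values are illustrative assumptions.

import numpy as np
from scipy.stats import norm

mu, sigma, alpha, n = 0.5, 1.0, 2.0, 200000
rng = np.random.default_rng(0)

x = rng.normal(0.0, sigma, size=n)                     # samples from Q = N(0, sigma^2)
log_ratio = norm.logpdf(x, mu, sigma) - norm.logpdf(x, 0.0, sigma)
d_alpha = np.log(np.mean(np.exp(alpha * log_ratio))) / (alpha - 1.0)

print(d_alpha)                          # Monte-Carlo estimate
print(alpha * mu**2 / (2 * sigma**2))   # closed form from Lemma 22: 0.25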
Proof of Theorem 12. Let {x₁, x₂, · · · , x_{2^d}} = {0, r/√d}^d. For each x_i, we use the same adversarial example x′ = 0 to derive the lower bound. Since A is (2r, D_MR, ‖·‖₂, ε/2)-robust, we have for all x_i, x_j with i, j ∈ [2^d],
max{D_MR(A(x_i)‖A(x_j)), D_MR(A(x_j)‖A(x_i))} ≤ 2 · (ε/2) = ε.
That is, A is ε-zCDP on the dataset X = {0, r/√d}^d. Next we will prove the lower bound for all ε-zCDP mechanisms.
We first consider the case where r = √d, and then generalize it to any r. Before that we will first prove the lower bound of the one-way marginal (i.e., mean estimation) under ε-zCDP. For an n-size dataset X ∈ ℝ^{n×d}, the one-way marginal is just h(D) = (1/n)Σ_{i=1}^n X_i, where X_i is the i-th row of X. Specifically, when n = 1, the one-way marginal is just the data point itself. We show the following theorem.
Theorem 24. If there exists an ε-zCDP mechanism A : {0,1}^d ↦ [0,1]^d such that for all x ∈ {0,1}^d,
E‖A(x) − x‖_∞ ≤ α, (10)
then 1 ≥ Ω(√d/(√ε·α)).
Proof of Theorem 24. To prove this theorem, our idea is to first use the connection between ε-zCDP and (ε, δ)-DP.
Lemma 25 (Prop. 1.3 in Bun & Steinke (2016)). If A is ε-zCDP, then it is (ε + 2√(ε log(1/δ)), δ)-differentially private.
Bun et al. (2018) first give the optimal rate of one-way marginal estimation, which is improved by Steinke & Ullman (2016).
Lemma 26 (Theorem 1.1 in Steinke & Ullman (2016)). For every ε ≤ O(1), every 2^{−Ω(n)} ≤ δ ≤ 1/n^{1+Ω(1)} and every α ≤ 1/10, if A : ({0,1}^d)^n ↦ [0,1]^d is (ε, δ)-DP and E[‖A(D) − h(D)‖_∞] ≤ α, then
n ≥ Ω(√(d log(1/δ))/(εα)). (11)
Setting n = 1 and ε′ = ε + 2√(ε log(1/δ)) in Lemma 26, we can see that if E[‖A(x) − x‖_∞] ≤ α then 1 ≥ Ω(√(d log(1/δ))/((ε + 2√(ε log(1/δ)))·α)) ≥ Ω(√d/(√ε·α)), where the last inequality is due to the fact that √(log(1/δ))/(ε + 2√(ε log(1/δ))) ≥ Ω(1/√ε).
Now we come back to the proof for any r. If A : {0, r/√d}^d ↦ [0, r/√d]^d is ε-zCDP with E_A‖A(x_i) − x_i‖_∞ ≤ α, then we have E_A‖(√d/r)A(x_i) − (√d/r)x_i‖_∞ ≤ (√d/r)α. Thus, (√d/r)A is an ε-zCDP mechanism on {0,1}^d ↦ [0,1]^d. By Theorem 24 with α̃ = (√d/r)α ≤ O(1), we have
1 ≥ Ω(r/(√ε·α)), i.e., α ≥ Ω(r/√ε). (12)
Proof of Theorem 13. We will first prove that A(x) = x + z is (r, D_∞, ‖·‖_∞, r/σ)-robust. Then by Theorems 8 and 9, we can easily show that g(·) is (r, D_MR, ‖·‖_∞, r²/(2σ²))-robust. Consider x, x′ with ‖x′ − x‖_∞ ≤ r. Then, for any y we have
p(y − x)/p(y − x′) = exp(−‖y − x‖_∞/σ)/exp(−‖y − x′‖_∞/σ) = exp((‖y − x′‖_∞ − ‖y − x‖_∞)/σ) ≤ exp(‖x′ − x‖_∞/σ) ≤ exp(r/σ).
Thus, for any subset S we have
log(P[A(x) ∈ S]/P[A(x′) ∈ S]) = log(∫_S p(z − x)dz / ∫_S p(z − x′)dz) ≤ r/σ.
Proof of Theorem 15. Define the distribution D on [0, ∞) by Z ∼ D, meaning Z = ‖z‖_∞ for z ∼ p(z), where p(z) is given in (6). The probability density function of D is
p_D(z) ∝ z^{d−1} exp(−z/σ),
which is obtained by integrating the probability density function (6) over the surface of the infinity ball of radius z, whose area d·2^d·z^{d−1} ∝ z^{d−1}. Hence p_D is the Gamma distribution with shape d and scale σ, and thus E[Z] = dσ.
Proof of Theorem 16. Let X = {x₁, x₂, · · · , x_{2^d}} = {0, r}^d be the set of samples. Since A is (2r, D_∞, ‖·‖_∞, ε)-robust and ‖x_i − x_j‖_∞ ≤ 2r, we know that
max{D_∞(A(x_i)‖A(x_j)), D_∞(A(x_j)‖A(x_i))} ≤ ε.
Thus, A : ℝ^d ↦ ℝ^d is ε-DP on X. Similar to the proof of Theorem 12, we can reduce our problem to studying the lower bound of the one-way marginal for the 1-size data problem in the ε-DP model. We first consider the case of r = 1. We have the following lemma, given by Hardt & Talwar (2010).
Lemma 27 (Theorem 1.1 in (Hardt & Talwar, 2010)). If there exists an ε-DP mechanism A : {0,1}^d ↦ [0,1]^d satisfying the following inequality for all x ∈ {0,1}^d,
E‖A(x) − x‖_∞ ≤ α, (13)
then 1 ≥ Ω(d/(εα)).
Now we consider any ε-DP mechanism A : {0, r}^d ↦ [0, r]^d. If
E[‖A(x) − x‖_∞] ≤ α,
then E[‖(1/r)A(x) − (1/r)x‖_∞] ≤ α/r. That is, (1/r)A(x) : {0,1}^d ↦ [0,1]^d is ε-DP. Thus, by Lemma 27 we can see that 1 ≥ Ω(dr/(εα)).
Proof of Theorem 17. The proof is almost the same as that of Theorem 12. Assume that we have a set of data points X = {x₁, x₂, · · · , x_{2^d}} = {0, r}^d. A will also be ε-zCDP on X as in the proof of Theorem 12. Thus, if
E[‖A(x) − x‖_∞] ≤ α,
then
E[‖(1/r)A(x) − (1/r)x‖_∞] ≤ α/r.
This means that (1/r)A(x) : {0,1}^d ↦ [0,1]^d is ε-zCDP. Thus, by Theorem 24 we must have 1 ≥ Ω(r√d/(√ε·α)).
Proof of Theorem 18. The proof is almost the same as that of Theorem 10. By Lemma 22, we have
D_α(N(x, (dr²/(2ε))I_d)‖N(x′, (dr²/(2ε))I_d)) = α‖x − x′‖₂²·ε/(dr²) ≤ αd‖x − x′‖_∞²·ε/(dr²) ≤ αε.
Therefore, A(x) = x + z with z ∼ N(0, (dr²/(2ε))I_d) is (r, D_MR, ‖·‖_∞, ε)-robust. The bound on E[‖z‖_∞] can be easily proved by substituting σ in O(σ√(log d)) (Orabona & Pál, 2015) with √(dr²/(2ε))."
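The proof of Theorem 15 also suggests a direct way to sample the ℓ∞-tailored noise p(z) ∝ exp(−‖z‖_∞/σ): draw the radius ‖z‖_∞ from a Gamma distribution with shape d and scale σ, then place the point uniformly on the surface of the ℓ∞ ball of that radius. The NumPy sketch below is our own illustration of this decomposition, not code released with the paper.

import numpy as np

def sample_linf_noise(d, sigma, n=1, rng=None):
    # Radius ||z||_inf ~ Gamma(shape=d, scale=sigma), so E||z||_inf = d * sigma
    # (Theorem 15). Conditioned on the radius, z is uniform on the l_inf sphere:
    # pick one of the 2d cube faces uniformly, fix that coordinate to +-radius,
    # and fill the remaining coordinates uniformly at random.
    rng = np.random.default_rng() if rng is None else rng
    radius = rng.gamma(shape=d, scale=sigma, size=(n, 1))
    z = rng.uniform(-1.0, 1.0, size=(n, d)) * radius
    face = rng.integers(0, d, size=n)       # coordinate attaining the max
    sign = rng.choice([-1.0, 1.0], size=n)  # side of the ball
    z[np.arange(n), face] = sign * radius[:, 0]
    return z

z = sample_linf_noise(d=3072, sigma=0.25, n=4)
print(np.abs(z).max(axis=1).mean())  # close to d * sigma = 768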
}, { "heading": "C MORE EXPERIMENTAL RESULTS (IMAGENET)", "text": "C.1 CERTIFYING `2 ROBUSTNESS\nC.2 CERTIFYING `∞ ROBUSTNESS" } ]
2019
A UNIFIED FRAMEWORK FOR RANDOMIZED SMOOTHING BASED CERTIFIED DEFENSES
SP:8b572d1f037184bb002765442e1ab35f57a1f084
[ "This paper considers the L2 normalization of samples “z” from a given prior p(z) in Generative Adversarial Netowks (GAN) and autoencoders. The L2 normalization corresponds to projecting samples onto the surface of a unit-hypersphere. Hence, to attempt to justify this normalization, the authors rely on some already established results regarding high dimensional hyperspheres. In particular, the focus is on the fact that, the Euclidean distance between any given point on a hypersphere and another randomly sampled point on the hypersphere tends to a constant, when the number of dimensions goes to infinity. This result is then used to show that the Wasserstein distance between two arbitrary distributions on a hypersphere converges to a constant when the number of dimensions grows. Based on this result, the authors claim that projecting the latent samples onto the surface of a hypersphere would make GAN less sensitive to the choice of the prior distribution. Moreover, they claim that such normalization would also benefits inference, and that it addresses the issue of variational inference in VAE.", "This paper proposes a novel autoencoder algorithm, named Spherical AutoEncoder (SAE). In this paper, the authors argue that the sphere structure has good properties in high-dimensional. To leverage the properties, proposed algorithm centerizes latent variables and projects them onto unit sphere. To show the empirical performance of the proposed approach, the authors perform image reconstruction and generation using FFHQ dataset and MNIST dataset." ]
Variational inference is a fundamental problem in the Variational Auto-Encoder (VAE). By virtue of high-dimensional geometry, we propose a very simple algorithm, completely different from existing ones, to solve the inference problem in VAE. We analyze the unique characteristics of random variables on spheres in high dimensions and prove that the Wasserstein distances between arbitrary pairs of datasets randomly drawn from a sphere are nearly identical when the dimension is sufficiently large. Based on our theory, a novel algorithm for distribution-robust sampling is devised. Moreover, we reform the latent space of VAE by constraining latent random variables on the sphere, thus freeing VAE from the approximate optimization of the variational posterior probability. The new algorithm is named Spherical Auto-Encoder (SAE), which is in essence the vanilla autoencoder with a spherical constraint on the latent space. The associated inference is called spherical inference; it is geometrically deterministic but much more robust to various probabilistic priors for sampling than the variational inference in VAE. The experiments on sampling and inference validate our theoretical analysis and the superiority of SAE.
[]
[ { "authors": [ "Kevin Beyer", "Jonathan Goldstein", "Raghu Ramakrishnan", "Uri Shaft" ], "title": "When is “nearest neighbor", "venue": "In International Conference on Database Theory, pp", "year": 1999 }, { "authors": [ "Avrim Blum", "John Hopcroft", "Ravi Kannan" ], "title": "Foundations of Data Science", "venue": null, "year": 2020 }, { "authors": [ "Ali Borji" ], "title": "Pros and cons of GAN evaluation measures", "venue": null, "year": 2018 }, { "authors": [ "Tony Cai", "Jianqing Fan", "Tiefeng Jiangd" ], "title": "Distributions of angles in random packing on spheres", "venue": "Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "Tim R. Davidson", "Luca Falorsi", "Nicola De Cao", "Thomas Kipf", "Jakub M. Tomczak" ], "title": "Hyperspherical variational auto-encoders", "venue": "In Proceedings of the Thirty-Fourth Conference on Uncertainty in Artificial Intelligence (UAI),", "year": 2018 }, { "authors": [ "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale adversarial representation learning", "venue": null, "year": 2019 }, { "authors": [ "David L. Donoho" ], "title": "Neighborly polytopes and sparse solutions of underdetermined linear equations", "venue": "Technical report, Stanford University,", "year": 2005 }, { "authors": [ "Vincent Dumoulin", "Ishmael Belghazi", "Ben Poole", "Olivier Mastropietro", "Alex Lamb", "Martin Arjovsky", "Aaron Courville" ], "title": "Adversarially learned inference", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Ian J. Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2014 }, { "authors": [ "Ari Heljakka", "Arno Solin", "Juho Kannala" ], "title": "Pioneer networks: Progressively growing generative autoencoder", "venue": "In arXiv:1807.03026,", "year": 2018 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-VAE: Learning basic visual concepts with a constrained variational framework", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of GANs for improved quality, stability, and variation", "venue": "In Proceedings of the 6th International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": null, "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In Proceedings of the 2th International Conference on Learning Representations (ICLR),", "year": 2013 }, { "authors": [ "A1 Lehnen", "Gary Wesenberg" ], "title": "The sphere game in n dimensions", "venue": null, "year": 2002 }, { "authors": [ "Chunyuan Li", "Hao Liu", "Changyou Chen", "Yunchen Pu", "Liqun Chen", "Ricardo Henao", "Lawrence Carin" ], "title": "LICE: Towards understanding adversarial learning for joint distribution matching", "venue": null, "year": 2017 }, { "authors": [ "R.D. 
Lord" ], "title": "The distribution of distance in a hypersphere", "venue": "The Annals of Mathematical Statistics,", "year": 1954 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In International Conference on Machine Learning (ICML),", "year": 2014 }, { "authors": [ "Ilya Tolstikhin", "Olivier Bousquet", "Sylvain Gelly", "Bernhard Schoelkopf" ], "title": "Wasserstein auto-encoders", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "It takes (only) two: Adversarial generatorencoder networks", "venue": "In arXiv:1704.02304,", "year": 2017 }, { "authors": [ "Jiacheng Xu", "Greg Durrett" ], "title": "Spherical latent spaces for stable variational autoencoders", "venue": "In Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep generative models, such as Variational Auto-Encoder (VAE) (Kingma & Welling, 2013; Rezende et al., 2014) and Generative Adversarial Network (GAN) (Goodfellow et al., 2014), play more and more important role in machine learning and computer vision. However, the problem of variational inference in VAE is still challenging, especially for high-dimensional data like images.\nTo be formal, let X = {x1, . . . ,xn} denote the set of observable data points and Z = {z1, . . . ,zn} the set of desired latent vectors, where xi ∈ Rdx and zi ∈ Rdz . Let pg(x|z) denote the likelihood of generated sample conditioned on latent variable z and p(z) the prior, where g denotes the decoder . The encoder f in VAE parameterizes the variational posterior qf (z|x) in light of the lower bound of the marginal log-likelihood\nlog pg(x) = log ∫ pg(x|z)p(z)dz = log ∫ qf (z|x) qf (z|x) pg(x|z)p(z)dz (1)\n≥ −DKL[qf (z|x)||p(z)] + Eq[log pg(x|z)]. (2)\nThe first term DKL[qf (z|x)||p(z)] constrains the encoded latent codes to the prior via the KLdivergence, and the second term Eq[log pg(x|z)] serves to guarantee the reconstruction accuracy of inputs. For a Gaussian pg(x|z) of diagonal covariance matrix, log pg(x|z) reduces to the varianceweighted squared error (Doersch, 2016).\nThe lower-bound approximation of the log-likelihood provides a feasible solution for VAE. But it also causes new problems. For example, the generated sample g(z) deviates from the real distribution of X when sampling from the given prior due to that the learnt qf (z|x) is incapable of matching the prior distribution well. Besides, the reconstruction g(f(x)) is not satisfactory either. For imagery data, bluriness usually occurs.\nIn order to manipulate real images for GAN models, we usually need to formulate an encoder via the framework of VAE. The variational inference also applies in this scenario. The problems of VAE are the obstacles of putting the GAN encoder in the right way either. There are other methods of learning\nan encoder for GAN in the adversarial way such as (Dumoulin et al., 2017; Li et al., 2017; Ulyanov et al., 2017; Heljakka et al., 2018). However, the structural precision of reconstruction is generally inferior to the VAE framework, because the high-level semantics of objects outweigh low-level structures for such methods (Donahue & Simonyan, 2019). Besides, the concise architecture of VAE is more preferred in this scenario. Therefore, learning precise latent variables z = f(x) is critical to applications of VAE and GAN.\nUsing a different theory in this paper, we propose a simple method to circumvent the problem. Our contributions are summarized as follows. 1) We introduce the volume concentration of highdimensional spheres. Based on the concentration property, we point out that projecting on a sphere for data that are distributed according to the spherical mass produces little difference from the viewpoint of the volume in high-dimensional spaces. Thus, it is plausible to perform inference pertaining to VAE on the sphere. 2) We further analyze the probability distribution of distances between two arbitrary sets of random points on the sphere in high dimensions and illustrate the phenomenon of distance convergence. Furthermore, we prove that the Wasserstein distance between two arbitrary datasets randomly drawn from a high-dimensional sphere are nearly identical, meaning that the data on the sphere are distribution-robust for generative models with respect to Wasserstein distance. 
3) Based on our theoretical analysis, we propose a very simple algorithm for sampling generative models. The same principle is also harnessed to reformulate VAE. The spherical normalization is simply put on latent variables instead of variational inference, while the randomness of latent variables is preserved by centerization. In contrast to VAE and variational inference, we name such an autoencoder Spherical Auto-Encoder (SAE) and the associated inference spherical inference. 4) We perform extensive experiments to validate our theoretical analysis and claims with sampling and inference." }, { "heading": "2 LATENT VARIABLES ON SPHERE", "text": "For latent variables or data points sampled from some prior, the projection onto the unit sphere can be easily performed by
z ← z/‖z‖. (3)
This spherical normalization for priors fed into the generator is employed in StyleGAN, the phenomenal algorithm in GANs (Karras et al., 2018b). To test the robustness of StyleGAN against diverse distributions, we conduct two groups of experiments with the input z sphere-normalized and not sphere-normalized when training StyleGAN. As shown in Figure 1, the diversity of generated faces is good for two different distributions with normalized z, whereas the face modes become similar for the case of the uniform distribution when z is not normalized. This experiment indicates that StyleGAN with sphere-normalized z is much more robust to the variation of variable modes from different distributions.
Inspired by this comparison, we interpret the benefit of using random variables on spheres by virtue of high-dimensional geometry in this section. Based on these theories, a novel algorithm is proposed for random sampling and spherical inference for GAN and VAE." }, { "heading": "2.1 VOLUME CONCENTRATION", "text": "In high-dimensional spaces, there are many counter-intuitive phenomena that will not happen in low-dimensional spaces. For a convenient analysis, we assume that the center of the sphere S^d embedded in ℝ^{d+1} is at the origin. We first present the concentration property of sphere volume in ℝ^{d+1}. One can find the proof in (Blum et al., 2020).
Theorem 1. Let V(r) and V((1 − ε)r) denote the volumes of the two concentric spheres of radius r and (1 − ε)r, respectively, where 0 < ε < 1. Then
V((1 − ε)r)/V(r) = (1 − ε)^d. (4)
And if ε = t/d, V((1 − ε)r)/V(r) → e^{−t} when d → ∞, where t is a constant.
Theorem 1 says that the volume of the d-dimensional sphere of radius (1 − ε)r rapidly goes to zero when d goes large, meaning that the interior of the high-dimensional sphere is nearly empty. In other words, nearly all the volume of the sphere is contained in the thin annulus of width εr = rt/d. The width becomes very thin when d grows. For example, the annulus whose width is 0.9% of the radius contains 99% of all the volume for the sphere in ℝ^512. To help understand this counter-intuitive geometric property, we make a schematic illustration in Figure 2.
Probabilistic operations can benefit greatly from the volume concentration of spheres. Suppose that we perform probabilistic optimization pertaining to latent variables sampled according to the distribution of the sphere volume. The probability mass in the interior is then negligible due to the volume concentration. Therefore, the error is controllable if we perform the optimization on the sphere, provided that these latent variables lie in high-dimensional spaces.
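The closed form in equation (4) can be checked numerically in a couple of lines of Python (our own sanity check, not code from the paper):

# Fraction of a d-dimensional ball's volume lying outside radius (1 - eps) * r,
# i.e. inside the thin annulus, computed from equation (4).
for d in [2, 32, 512]:
    eps = 0.009  # an annulus whose width is 0.9% of the radius
    print(d, 1.0 - (1.0 - eps) ** d)
# d = 512 gives about 0.99: the thin annulus holds 99% of the volume,
# matching the example above.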
For VAE, therefore, we can write a feasible approximation
log p_g(x) = log ∫_{Int(S^d)} p_g(x|z)p(z)dz ≈ log ∫_{S^d} p_g(x|z)p(z)dz, (5)
where Int(S^d) denotes the interior of S^d, ‖z‖ ≤ r, and d is sufficiently large. The spherical approximation for log p_g(x) is an alternative scheme to the lower-bound approximation presented in equation (1). In fact, distributions defined on the sphere have already been exploited to reformulate VAE, such as the von Mises-Fisher distribution (Davidson et al., 2018; Xu & Durrett, 2018). But the algorithms proposed in (Davidson et al., 2018; Xu & Durrett, 2018) still fall into the category using the variational inference like the vanilla VAE. To eliminate this constraint, we need the further geometric analysis presented in the following section." }, { "heading": "2.2 DISTANCE CONVERGENCE", "text": "To dig deeper, we examine the pairwise distance between two arbitrary points randomly sampled on S^d. The following important lemma was proved by (Lord, 1954; Lehnen & Wesenberg, 2002).
Lemma 1. Let ξ denote the Euclidean distance between two points randomly sampled on the sphere S^d of radius r. Then the probability distribution of ξ is
ρ(ξ) = (ξ^{d−2}/(c(d)·r^{d−1})) · [1 − (ξ/(2r))²]^{(d−3)/2}, (6)
where the coefficient c(d) is given by c(d) = √π·Γ((d − 1)/2)/Γ(d/2). And the mean distance ξ_μ and the standard deviation ξ_σ are
ξ_μ = 2^{d−1}·r·[Γ(d/2)]²/(√π·Γ(d − 1/2)) and ξ_σ = √2·r·√(1 − ξ_μ²/(2r²)), (7)
respectively, where Γ is the Gamma function. Furthermore, ξ_μ → √2·r(1 − 1/(8d)) and ξ_σ → r/√(2d) when d goes large.
Lemma 1 tells that the pairwise distances between two arbitrary points randomly sampled on S^d approach to be mutually identical and converge to the mean ξ_μ = √2·r when d grows. The associated standard deviation ξ_σ → 0. This result is to some extent surprising compared to the intuition in low-dimensional spaces. We display the average distance and its standard deviation in Figure 3, showing that the convergence process is fast. Taking ℝ^512 for example, we calculate that ξ_μ = 1.4139 and ξ_σ = 0.0313. The standard deviation is only 2.21% of the average distance, meaning that the distance discrepancy between arbitrary z_i and z_j is rather small. This surprising phenomenon is also observed for neighborly polytopes when solving the sparse solution of underdetermined linear equations (Donoho, 2005) and for nearest neighbor search in high dimensions (Beyer et al., 1999).
With Lemma 1, we can study the property of two different random datasets on S^d, which serves distribution-robust sampling and spherical inference in generative models. To this end, we first introduce the computational definition of the Wasserstein distance. Let Z = {z₁, . . . , zₙ} and Z′ = {z′₁, . . . , z′ₙ} be datasets of random variables drawn from S^d at random. Then the 2-Wasserstein distance is defined as
W₂²(Z, Z′) = min_ω Σ_{i=1}^n Σ_{j=1}^n ω_{ij}‖z_i − z′_j‖² (8)
s.t. Σ_{i=1}^n ω_{ij} = Σ_{j=1}^n ω_{ij} = 1, (9)
where ω is a doubly stochastic matrix. By Lemma 1, it is not hard to derive the following theorem.
Theorem 2.¹ W₂(Z, Z′) → √(2n)·r with zero standard deviation when d → ∞.
Theorem 2 says that despite the diverse distributions, the 2-Wasserstein distance between two arbitrary sets of random variables on the sphere converges to a constant when the dimension is sufficiently large. For generative models, this unique characteristic of datasets randomly sampled from high-dimensional spheres brings great convenience for distribution-robust sampling and spherical inference.
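Lemma 1 and Theorem 2 are easy to verify empirically. The following NumPy snippet (our own check, not the authors' code) draws points from two very different priors, places them on the unit sphere, and confirms that the pairwise distances concentrate around √2 with the predicted standard deviation:

import numpy as np

rng = np.random.default_rng(0)
d = 512  # latent dimension used in the experiments
# Two sets of points on the unit sphere drawn from different priors.
z = rng.standard_normal((10000, d))
u = rng.uniform(-1.0, 1.0, size=(10000, d))
u -= u.mean(axis=1, keepdims=True)             # centerization
z /= np.linalg.norm(z, axis=1, keepdims=True)  # spherization
u /= np.linalg.norm(u, axis=1, keepdims=True)
dist = np.linalg.norm(z - u, axis=1)
print(dist.mean(), dist.std())  # about 1.414 and 0.031, as Lemma 1 predicts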
For example, if Z and Z′ obey different distributions, the functional role of Z′ nearly coincides with that of Z with respect to the Wasserstein distance, provided that both Z and Z′ are randomly drawn from the high-dimensional sphere. The specific distributions of Z and Z′ affect the result negligibly under such a condition. We will present the specific application of Theorem 2 in the following section.
¹It suffices to note that the case described in Theorem 2 is essentially different from the approximate solution of the Wasserstein distance via Monte Carlo sampling, where the points sampled from the unit sphere are applied as projection subspaces.
In fact, we can obtain bounds on W₂(Z, Z′) using the proven proposition about the nearly-orthogonal property of two random points on high-dimensional spheres (Cai et al., 2013). However, Theorem 2 is sufficient to solve the problem raised in this paper, so we bypass this analysis to simplify the theory for easy readability." }, { "heading": "3 ALGORITHM FOR SAMPLING AND INFERENCE", "text": "We will detail the distribution-robust algorithms for sampling and spherical inference. A new simplified form of VAE will be presented in this section as well." }, { "heading": "3.1 SAMPLING", "text": "To acquire generative results from VAEs and GANs, we need to sample random vectors from a pre-defined prior and then feed them into the decoder or the generator. According to Theorem 1 and Theorem 2, however, this prior limitation can be eliminated if we project these random vectors onto the corresponding sphere. To achieve this, we perform a two-step manipulation on the dataset sampled from an arbitrary prior distribution. The procedure is detailed in Algorithm 1. The centerization operation is motivated by the central limit theorem in probability; it transforms the distribution-specific Z to be nearly distribution-agnostic (on the sphere). The spherization projects these centerized vectors onto the unit sphere. We find that in practice, this simple algorithm works well to reduce the bias caused by various distributions or data modes for VAEs and GANs.
Algorithm 1 Distribution-robust sampling for generative models
1: Sample Z = {z₁, . . . , zₙ} ∼ P(z) ▷ P(z) is an arbitrary distribution
2: Centerization by z_i^j ← z_i^j − (1/d)Σ_j z_i^j for each z_i ▷ z_i^j is the j-th entry of z_i
3: Spherization by z̃_i ← z_i/‖z_i‖ for each z_i
4: Return Z̃ = {z̃₁, . . . , z̃ₙ}" }, { "heading": "3.2 SPHERICAL INFERENCE", "text": "According to Theorem 2, we know that sampling is robust to random variables if they are randomly sampled from the high-dimensional sphere. Theorem 1 guarantees that the error can be negligible even if they deviate from the sphere, as long as they are distributed near the spherical surface. This tolerance to various modes of random variables allows us to devise a simple replacement for the variational inference in VAE, which we call spherical inference. To be specific, we only need to constrain the centerized latent variables on the sphere, as opposed to the conventional way of employing the KL-divergence D_KL[q_f(z|x)‖p(z)] and its variants with diverse priors. The sequential mappings of the autoencoder under our framework are
x ↦ z = f(x) (encoder) ↦ z − ẑ1 ↦ z̃ = (z − ẑ1)/‖z − ẑ1‖ (spherical constraint on the latent space) ↦ x̃ = g(z̃) (decoder), (10)
where ẑ = (1/d)Σ_j z^j and 1 is the all-one vector. We can write the objective function for this type of autoencoder as
min_{f,g} ‖x − x̃‖²_{ℓ_p}, s.t. the spherical constraint on z, (11)
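Algorithm 1 and the spherical constraint in (10) amount to two lines of array code. A minimal NumPy transcription:

import numpy as np

def centerize_and_spherize(z):
    # Step 2 of Algorithm 1: subtract each vector's own coordinate mean.
    z = z - z.mean(axis=1, keepdims=True)
    # Step 3: project onto the unit sphere.
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# Even a strongly asymmetric prior such as Poisson becomes a usable input.
rng = np.random.default_rng(0)
z_tilde = centerize_and_spherize(rng.poisson(4.0, size=(16, 512)).astype(float))
print(np.linalg.norm(z_tilde, axis=1))  # all ones

The same two operations are inserted between the encoder and the decoder of SAE, so no extra loss term on the latent space is needed.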
where ℓ_p denotes the p-norm. The objective function of our algorithm is much simpler than that of VAE and its variants based on the variational inference or various sophisticated regularizers on the latent space.
It is clear that we utilize the geometric characteristics of latent variables on the sphere rather than additional losses to optimize the latent space. Our algorithm is geometric and free from probabilistic optimization, whose performance is usually limited by the approximation dilemma. In fact, the framework of VAE in (10) reduces to a standard autoencoder with the spherical constraint. There is no variational inference needed here. To highlight this critical difference, we call our algorithm Spherical Auto-Encoder (SAE)." }, { "heading": "4 RELATED WORK", "text": "Little attention has been paid to examining the geometry of latent spaces in the field of generative models, so we find few works directly related to ours. The most relevant ones are applications of the von Mises-Fisher (vMF) distribution as the probability prior (Davidson et al., 2018; Xu & Durrett, 2018). The vMF distribution is defined on the sphere. The sampling and variational inference are both performed with latent variables drawn on the sphere. However, the algorithms proposed in (Davidson et al., 2018; Xu & Durrett, 2018) both rely on the variational inference as VAE does with inequality (1). For our algorithm, the whole framework is deterministic and there is no approximation involved in inferring latent codes.
For sampling, our geometric analysis is directly inspired by ProGAN (Karras et al., 2018a) and StyleGAN (Karras et al., 2018b), which have already applied the spherical normalization to sampled inputs. We study the related theory and extend the case to arbitrary distributions for both GANs and VAEs. Another related method is to sample priors along the great circle when performing interpolation in the latent space for GANs (White, 2016). The empirical results show that such sampling yields smoother interpolated generation. This algorithm is perfectly compatible with our theory and algorithm. Therefore, it can also be harnessed in our algorithm when performing interpolation as well.
The Wasserstein Auto-Encoder (WAE) (Tolstikhin et al., 2018) is an alternative way of optimizing the model distribution and the prior distribution using the Wasserstein distance. SAE is different from WAE because we do not really use the Wasserstein distance for computation in the latent space; we just leverage the Wasserstein distance to establish Theorem 2 for the theoretical analysis. The Adversarial Auto-Encoder (AAE) (Makhzani et al., 2015) is another interesting method, replacing the variational inference with adversarial learning in the latent space. But both WAE and AAE need some priors to match, which makes them essentially different from SAE. β-VAE (Higgins et al., 2017) improves the flexibility of VAE by using a regularization coefficient to modulate the capacity of latent information. However, β-VAE is restricted by priors like VAE." }, { "heading": "5 EXPERIMENT", "text": "We conduct experiments to test our theory and algorithms in this section. Three aspects pertaining to generative algorithms are taken into account, including sampling GANs, learning the variants of autoencoders, and sampling the decoders.
The FFHQ dataset (Karras et al., 2018b) is a complex face dataset with large variations of faces captured in the wild. We test VAE and our SAE algorithm on this benchmark dataset.
We use an image size of 128 × 128, which is larger than the size commonly chosen in the related work and also more challenging than 64 × 64 or 32 × 32 for (variational) autoencoders to reconstruct." }, { "heading": "5.1 SAMPLING GAN", "text": "Our first experiment tests the sampling effect using four distributions. We employ StyleGAN trained with random variables sampled from the normal distribution. The other three distributions are opted to test the generation with priors different from the one used for training, i.e. the uniform, Poisson, and Chi-squared distributions. The shapes of these three distributions are significantly distinct from that of the normal distribution. Thus, the generalization capability of the generative model can be effectively unveiled when fed with priors that are not involved during training. We follow the experimental protocol in (Karras et al., 2018a;b): StyleGAN is trained on the FFHQ face dataset and the Fréchet inception distance (FID) (Borji, 2018) is used as the quality metric of generative results. We take d_z = 512, as set in StyleGAN. This dimension is also used for both VAE and SAE for face data.
From Table 1, we can see that the generative results from the normal distribution are significantly better than the others when tested with the original samples. The uniform distribution is as good as the normal distribution when projected onto the sphere. This is because the values of each random vector are overall symmetrically distributed about the origin; they satisfy the condition in Theorem 2 after the spherical projection. The accuracy of the Poisson and Chi-squared distributions is considerably improved after centerization, even better than the vanilla uniform distribution. But the accuracy difference between all the compared distributions is rather negligible after centerization and spherization, empirically verifying the theory presented in Theorem 2." }, { "heading": "5.2 AUTOENCODERS", "text": "We compare the vanilla VAE with the normal distribution (Kingma & Welling, 2013) with our SAE algorithm for reconstruction and sampling tasks².
From Figure 4, we can see that the face quality of SAE outperforms that of VAE. Imagery details like semantic structures are preserved much better by SAE. For example, the sunglasses in the sixth image are successfully recovered by SAE, whereas VAE distorts the face due to this occlusion. It is worth emphasizing that the blurriness of images reconstructed by SAE is much less than that by VAE, implying that the spherical inference is superior to the variational inference in VAE. The different accuracy measurements in Table 2 also indicate the consistently better performance of SAE.
To test the generative capability of the models, we also perform the experiment of sampling the decoders as done in Section 5.1. Prior samples are drawn from the normal, uniform, Poisson, and Chi-squared distributions, respectively, and then fed into the decoders to generate faces. Figure 5 illustrates the generated faces of significantly different quality with respect to the four types of sampling. The style of the faces generated by SAE keeps consistent, meaning that SAE is rather robust to different probability priors. This also empirically verifies the correctness of Theorem 2 on a real problem. As a comparison, the quality of the faces generated by VAE varies with the probability priors.
In other words, VAE is sensitive to the outputs of the encoder under the variational inference, which is probably the underlying reason for the difficulty of training VAE with sophisticated architectures. We also present the experimental results on MNIST and CelebA in the Appendix.
²We fail to train a convergent model for the spherical VAE (S-VAE) with the von Mises-Fisher distribution on the FFHQ dataset, so this algorithm is not compared here. The experiment on MNIST is provided in the Appendix." }, { "heading": "6 CONCLUSION", "text": "In this paper, we attempt to address the issue of the variational inference in VAE and the limitation of prior-sensitive sampling in GAN. By analyzing the geometry of volume concentration and distance convergence on the high-dimensional sphere, we prove that the Wasserstein distance converges to a constant for two datasets randomly sampled from the sphere when the dimension goes large. Based on this unique characteristic, we propose a very simple algorithm for sampling and spherical inference. The data sampled from priors are first centerized and then projected onto the unit sphere before being fed into decoders (or generators). Such random variables on the sphere are robust to diverse prior distributions. With our theory, the vanilla VAE can be reduced to a standard autoencoder with the spherical constraint on the latent space. In other words, the conventional variational inference in VAE is replaced by the simple operations of centerization and spherization. The new autoencoder is named Spherical Auto-Encoder (SAE). The experiments on the FFHQ face data validate the effectiveness of our new algorithm for sampling and spherical inference. It is worth noting that the applications of our theory and the novel algorithm are not limited to VAEs and GANs. Interested readers may explore the possibility in their own scenarios." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 RECONSTRUCTION ON FFHQ", "text": "" }, { "heading": "A.2 SAMPLING VAE AND SAE ON FFHQ", "text": "" }, { "heading": "A.3 RECONSTRUCTION ON CELEBA", "text": "" }, { "heading": "A.4 SAMPLING ON CELEBA", "text": "" }, { "heading": "A.5 RECONSTRUCTION ON MNIST", "text": "" }, { "heading": "A.6 SAMPLING S-VAE, VAE, AND SAE ON MNIST", "text": "A.7 VISUALIZATION OF INFERENCE" } ]
2019
null
SP:237b129348ea81633989e45c3db9b6e8ef6fdfa2
[ "This paper studies multi-task learning (MTL) from the deep learning perspective where a number of layers are shared between tasks followed by specific heads for each task. One of the main challenges in this problem is to decide the best configuration among a large number of possible ones (e.g., the number of layers , number of neurons, when to stop the shared part of the network). In this paper, the authors fix the network architecture, and learn which filters (among the already learned ones) should be dedicated to (and hence fine-tuned for) a specific, and which ones should be shared between multiple tasks. ", "This paper proposes a framework for learning multi-task convolutional neural networks. For each layer of the network, the proposed algorithm assigns a subset of the layer's channels to each of the tasks. This is in contrast to existing methods that assign whole layers to tasks. There are two key ideas here: (1) instead of searching in the space of binary assignments of layers to tasks, search in the continuous space of fractions of channels assigned to each layer, subject to some consistency constraints; this allows for using finite differences for gradient estimation which can be fed into a black-box optimization procedure; (2) the use of distillation to estimate the performance of a given assignment, rather than retraining many models. Experimentally, the proposed framework performs relatively well on the Visual Decathlon benchmark." ]
Multi-task learning promises to use less data, parameters, and time than training separate single-task models. But realizing these benefits in practice is challenging. In particular, it is difficult to define a suitable architecture that has enough capacity to support many tasks while not requiring excessive compute for each individual task. There are difficult trade-offs when deciding how to allocate parameters and layers across a large set of tasks. To address this, we propose a method for automatically searching over multi-task architectures that accounts for resource constraints. We define a parameterization of feature sharing strategies for effective coverage and sampling of architectures. We also present a method for quick evaluation of such architectures with feature distillation. Together these contributions allow us to quickly optimize for parameter-efficient multi-task models. We benchmark on Visual Decathlon, demonstrating that we can automatically search for and identify architectures that effectively make trade-offs between task resource requirements while maintaining a high level of final performance.
[]
[ { "authors": [ "Bowen Baker", "Otkrist Gupta", "Ramesh Raskar", "Nikhil Naik" ], "title": "Accelerating neural architecture search using performance prediction", "venue": "arXiv preprint arXiv:1705.10823,", "year": 2017 }, { "authors": [ "Shawn LE Beaulieu", "Sam Kriegman", "Josh C Bongard" ], "title": "Combating catastrophic forgetting with developmental compression", "venue": "arXiv preprint arXiv:1804.04286,", "year": 2018 }, { "authors": [ "Andrew Brock", "Theodore Lim", "James M Ritchie", "Nick Weston" ], "title": "Smash: one-shot model architecture search through hypernetworks", "venue": "arXiv preprint arXiv:1708.05344,", "year": 2017 }, { "authors": [ "Rich Caruana" ], "title": "Multitask learning", "venue": "Machine learning,", "year": 1997 }, { "authors": [ "Yi-Min Chou", "Yi-Ming Chan", "Jia-Hong Lee", "Chih-Yi Chiu", "Chu-Song Chen" ], "title": "Unifying and merging well-trained deep neural networks for inference stage", "venue": "arXiv preprint arXiv:1805.04980,", "year": 2018 }, { "authors": [ "Boyang Deng", "Junjie Yan", "Dahua Lin" ], "title": "Peephole: Predicting network performance before training", "venue": "arXiv preprint arXiv:1712.03351,", "year": 2017 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Volodymir Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning" ], "title": "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "arXiv preprint arXiv:1802.01561,", "year": 2018 }, { "authors": [ "Chrisantha Fernando", "Dylan Banarse", "Charles Blundell", "Yori Zwols", "David Ha", "Andrei A Rusu", "Alexander Pritzel", "Daan Wierstra" ], "title": "Pathnet: Evolution channels gradient descent in super neural networks", "venue": "arXiv preprint arXiv:1701.08734,", "year": 2017 }, { "authors": [ "Robert M French" ], "title": "Catastrophic forgetting in connectionist networks", "venue": "Trends in cognitive sciences,", "year": 1999 }, { "authors": [ "Xiaoxi He", "Zimu Zhou", "Lothar Thiele" ], "title": "Multi-task zipping via layer-wise neuron sharing", "venue": "arXiv preprint arXiv:1805.09791,", "year": 2018 }, { "authors": [ "Lu Jiang", "Zhengyuan Zhou", "Thomas Leung", "Li-Jia Li", "Li Fei-Fei" ], "title": "Mentornet: Learning datadriven curriculum for very deep neural networks on corrupted labels", "venue": null, "year": 2018 }, { "authors": [ "Lukasz Kaiser", "Aidan N Gomez", "Noam Shazeer", "Ashish Vaswani", "Niki Parmar", "Llion Jones", "Jakob Uszkoreit" ], "title": "One model to learn them all", "venue": "arXiv preprint arXiv:1706.05137,", "year": 2017 }, { "authors": [ "Manoj Kumar", "George E Dahl", "Vijay Vasudevan", "Mohammad Norouzi" ], "title": "Parallel architecture and hyperparameter search via successive halving and classification", "venue": "arXiv preprint arXiv:1805.10255,", "year": 2018 }, { "authors": [ "Liam Li", "Ameet Talwalkar" ], "title": "Random search and reproducibility for neural architecture search", "venue": "arXiv preprint arXiv:1902.07638,", "year": 2019 }, { "authors": [ "Jason Liang", "Elliot Meyerson", "Risto Miikkulainen" ], "title": "Evolutionary architecture search for deep multitask networks", "venue": "arXiv preprint arXiv:1803.03745,", "year": 2018 }, { "authors": [ "Chenxi Liu", "Barret Zoph", "Jonathon Shlens", "Wei Hua", "Li-Jia Li", "Li Fei-Fei", "Alan Yuille", "Jonathan Huang", "Kevin Murphy" ], "title": "Progressive neural architecture search", "venue": "arXiv preprint arXiv:1712.00559,", "year": 2017 }, { "authors": 
[ "Hanxiao Liu", "Karen Simonyan", "Oriol Vinyals", "Chrisantha Fernando", "Koray Kavukcuoglu" ], "title": "Hierarchical representations for efficient architecture search", "venue": "arXiv preprint arXiv:1711.00436,", "year": 2017 }, { "authors": [ "Shikun Liu", "Edward Johns", "Andrew J Davison" ], "title": "End-to-end multi-task learning with attention", "venue": "arXiv preprint arXiv:1803.10704,", "year": 2018 }, { "authors": [ "Arun Mallya", "Svetlana Lazebnik" ], "title": "Piggyback: Adding multiple tasks to a single, fixed network by learning to mask", "venue": "arXiv preprint arXiv:1801.06519,", "year": 2018 }, { "authors": [ "Horia Mania", "Aurelia Guy", "Benjamin Recht" ], "title": "Simple random search provides a competitive approach to reinforcement learning", "venue": "arXiv preprint arXiv:1803.07055,", "year": 2018 }, { "authors": [ "Elliot Meyerson", "Risto Miikkulainen" ], "title": "Beyond shared hierarchies: Deep multitask learning through soft layer ordering", "venue": "arXiv preprint arXiv:1711.00108,", "year": 2017 }, { "authors": [ "Ishan Misra", "Abhinav Shrivastava", "Abhinav Gupta", "Martial Hebert" ], "title": "Cross-stitch networks for multi-task learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Hieu Pham", "Melody Y Guan", "Barret Zoph", "Quoc V Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "arXiv preprint arXiv:1802.03268,", "year": 2018 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "arXiv preprint arXiv:1802.01548,", "year": 2018 }, { "authors": [ "Sylvestre-Alvise Rebuffi", "Hakan Bilen", "Andrea Vedaldi" ], "title": "Learning multiple visual domains with residual adapters", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Sylvestre-Alvise Rebuffi", "Hakan Bilen", "Andrea Vedaldi" ], "title": "Efficient parametrization of multidomain deep neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Clemens Rosenbaum", "Tim Klinger", "Matthew Riemer" ], "title": "Routing networks: Adaptive selection of non-linear functions for multi-task learning", "venue": "arXiv preprint arXiv:1711.01239,", "year": 2017 }, { "authors": [ "Amir Rosenfeld", "John K Tsotsos" ], "title": "Incremental learning through deep adaptation", "venue": "arXiv preprint arXiv:1705.04228,", "year": 2017 }, { "authors": [ "Sebastian Ruder" ], "title": "An overview of multi-task learning in deep neural networks", "venue": "arXiv preprint arXiv:1706.05098,", "year": 2017 }, { "authors": [ "Sebastian Ruder", "Joachim Bingel", "Isabelle Augenstein", "Anders Søgaard" ], "title": "Learning what to share between loosely related tasks", "venue": "arXiv preprint arXiv:1705.08142,", "year": 2017 }, { "authors": [ "Andrei A Rusu", "Sergio Gomez Colmenarejo", "Caglar Gulcehre", "Guillaume Desjardins", "James Kirkpatrick", "Razvan Pascanu", "Volodymyr Mnih", "Koray Kavukcuoglu", "Raia Hadsell" ], "title": "Policy distillation", "venue": "arXiv preprint arXiv:1511.06295,", "year": 2015 }, { "authors": [ "Sahil Sharma", "Ashutosh Jha", "Parikshit Hegde", "Balaraman Ravindran" ], "title": "Learning to multi-task by active sampling", "venue": "arXiv preprint arXiv:1702.06053,", "year": 2017 }, { "authors": [ "Daan 
Wierstra", "Tom Schaul", "Jan Peters", "Juergen Schmidhuber" ], "title": "Natural evolution strategies", "venue": "In Evolutionary Computation,", "year": 2008 }, { "authors": [ "Junho Yim", "Donggyu Joo", "Jihoon Bae", "Junmo Kim" ], "title": "A gift from knowledge distillation: Fast optimization, network minimization and transfer learning", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Jaehong Yoon", "Eunho Yang", "Jeongtae Lee", "Sung Ju Hwang" ], "title": "Lifelong learning with dynamically expandable networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Amir R Zamir", "Alexander Sax", "William Shen", "Leonidas Guibas", "Jitendra Malik", "Silvio Savarese" ], "title": "Taskonomy: Disentangling task transfer learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Arber Zela", "Aaron Klein", "Stefan Falkner", "Frank Hutter" ], "title": "Towards automated deep learning: Efficient joint neural architecture and hyperparameter search", "venue": "arXiv preprint arXiv:1807.06906,", "year": 2018 }, { "authors": [ "Yu Zhang", "Qiang Yang" ], "title": "A survey on multi-task learning", "venue": "arXiv preprint arXiv:1707.08114,", "year": 2017 }, { "authors": [ "Barret Zoph", "Quoc V. Le" ], "title": "Neural architecture search with reinforcement learning", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Multi-task learning allows models to leverage similarities across tasks and avoid overfitting to the particular features of any one task (Caruana, 1997; Zamir et al., 2018). This can result in better generalization and more robust feature representations. While this makes multi-task learning appealing for its potential performance improvements, there are also benefits in terms of resource efficiency. Training a multi-task model should require less data, fewer training iterations, and fewer total parameters than training an equivalent set of task-specific models. In this work we investigate how to automatically search over high performing multi-task architectures while taking such resource constraints into account.\nFinding architectures that offer the best accuracy possible given particular resource constraints is nontrivial. There are subtle trade-offs in performance when increasing or reducing use of parameters and operations. Furthermore, with multiple tasks, one must take into account the impact of shared operations. There is a large space of options for tweaking such architectures, in fact so large that it is difficult to tune an optimal configuration manually. Neural architecture search (NAS) allows researchers to automatically search for models that offer the best performance trade-offs relative to some metric of efficiency.\nHere we define a multi-task architecture as a single network that supports separate outputs for multiple tasks. These outputs are produced by unique execution paths through the model. In a neural network, such a path is made up of a subset of the total nodes and operations in the model. This subset may or may not overlap with those of other tasks. During inference, unused parts of the network can be ignored by either pruning out nodes or zeroing out their activations (Figure 1). Such architectures mean improved parameter efficiency because redundant operations and features can be consolidated and shared across a set of tasks.\nWe seek to optimize for the computational efficiency of multi-task architectures by finding models that perform as well as possible while reducing average node use per task. Different tasks will require different capacities to do well, so reducing average use requires effectively identifying which tasks will ask more of the model and which tasks can perform well with less. In addition, performance is affected by how nodes are shared across tasks. It is unclear when allocating resources whether sets of tasks would benefit from sharing parameters or would instead interfere.\nWhen searching over architectures, differences in resource use can be compared at different levels of granularity. Most existing work in NAS and multi-task learning searches over the allocation and use of entire layers (Zoph & Le, 2016; Fernando et al., 2017; Rosenbaum et al., 2017), we instead partition out individual feature channels within a layer. This offers a greater degree of control over both the computation required by each task and the sharing that takes place between tasks.\nThe main obstacle to address in searching for effective multi-task architectures is the vast number of possibilities for performing feature partitioning as well as the significant amount of computation required to evaluate and compare arrangements. A naive brute search over different partitioning strategies is prohibitively expensive. We leverage our knowledge of the search space to explore it more effectively. 
We propose a parameterization of partitioning strategies to reduce the size of the search space by eliminating unnecessary redundancies and more compactly expressing the key features that distinguish different architectures.\nIn addition, the main source of overhead in NAS is evaluation of sampled architectures. It is common to define a surrogate operation that can be used in place of training a full model to convergence. Often a smaller model will be trained for a much shorter number of iterations with the hope that the differences in accuracy that emerge early on correlate with the final performance of the full model. We propose a strategy for evaluating multi-task architectures using feature distillation which provides much faster feedback on the effectiveness of a proposed partitioning strategy while correlating well with final validation accuracy.\nIn this work we provide:\n• a parameterization that aids automatic architecture search by providing a direct and compact representation of the space of sharing strategies in multi-task architectures.\n• an efficient method for evaluating proposed parameterizations using feature distillation to further accelerate the search process.\n• results on Visual Decathlon (Rebuffi et al., 2017) to demonstrate that our search strategy allows us to effectively identify trade-offs between parameter use and performance on diverse and challenging image classification datasets." }, { "heading": "2 RELATED WORK", "text": "Multi-Task Learning: There is a wide body of work on multi-task learning spanning vision, language, and reinforcement learning. The following discussion will center around designing multi-task architectures for deep learning in vision (Ruder, 2017; Caruana, 1997; Zhang & Yang, 2017). There are many obstacles to overcome in multi-task architecture design, but the most pressing concerns depend largely on how the problem setting has been defined. Two distinguishing factors include:\n• Task Ordering: Are all tasks available at all times or are they presented one after the other? • Fixed vs Learned Strategies: Is a uniform strategy applied across tasks or is a task-specific\nsolution learned?\nThe former is important as work in which tasks are presented sequentially must address catastrophic forgetting (French, 1999). This is less of a concern in our work as we train on all tasks at once. As for the latter, finding a solution tuned to a specific set of tasks requires the same sort of outer-loop optimization seen in neural architecture search (Zoph & Le, 2016) which is time-consuming and expensive. The contributions presented in this work seek to make this process more manageable.\nMulti-Task Architectures: A strong baseline for multi-task architectures is the use of a single shared network (Caruana, 1997; Kaiser et al., 2017). Deep networks are overparameterized in such a way that the same layers can be applied across different domains while producing features that are useful to different ends. Using a shared architecture is common in reinforcement learning to train a single agent to perform many tasks with a uniform observation and action space (Espeholt et al., 2018; Sharma et al., 2017). 
A common technique to train single shared models well in both reinforcement learning and vision is distillation of multiple models into one (Beaulieu et al., 2018; Yim et al., 2017; Rusu et al., 2015; He et al., 2018; Chou et al., 2018).\nIn work where tasks are presented sequentially, the focus is often to build on top of an existing network while not disrupting its ability to solve its original task (Mallya & Lazebnik, 2018; Rebuffi et al., 2018; Rosenfeld & Tsotsos, 2017). Currently, many methods freeze the network’s weights so they are not changed while learning a new task. Examples include masking out specific filter weights (Mallya & Lazebnik, 2018) or introducing auxiliary layers (Rebuffi et al., 2018). Another approach is to dynamically expand a network with additional capacity for each new task (Yoon et al., 2018). All of these methods build on top of a fixed model, meaning that new tasks must perform the computation required for the original task as well as take additional steps to be task-specific.\nIt is also common to build multi-task architectures from sets of layers that are run in parallel (Misra et al., 2016; Rosenbaum et al., 2017; Fernando et al., 2017; Meyerson & Miikkulainen, 2017). Cross-stitch networks compute activations for each task as a learned weighted sum across these layers (Misra et al., 2016). This sort of soft attention over features can be seen in other multi-task architecture work as well (Liu et al., 2018; Ruder et al., 2017). There are approaches to search over paths through these layers such that each task has a unique, optimal execution path (Rosenbaum et al., 2017; Fernando et al., 2017). Similar to work in single-task NAS, the best path is found by either reinforcement learning or evolutionary algorithms (Fernando et al., 2017; Liang et al., 2018). The optimal trade-offs in parameter sharing may occur at a more fine-grained level than entire layers, so instead of working with parallel blocks of layers we divide up individual feature channels.\nNeural Architecture Search: There are three main areas in which contributions are made for more effective architecture search: search space, optimization, and sample evaluation.\nSearch space: With a well-designed search space, it is possible to randomly sample and arrive at high performing solutions (Li & Talwalkar, 2019; Liu et al., 2017b). In general, NAS operates in a discrete space where entire layers are included or not. We instead propose a continuous search space where slight changes can be made in how resources are allocated across tasks. This allows alternatives for optimization that would not apply in other NAS work.\nOptimization: Leading approaches either use reinforcement learning or genetic algorithms for NAS (Zoph & Le, 2016; Real et al., 2018; Pham et al., 2018). This search is difficult and the tradeoffs between approaches are unclear (Li & Talwalkar, 2019). We test the effectiveness of random sampling and evolutionary strategies optimization (Mania et al., 2018; Wierstra et al., 2008).\nEvaluating Samples: Training a model to convergence is time-consuming and resource intensive. It is not realistic to sample thousands of architectures and train them all. Instead one must use a cheaper form of evaluation. 
Some options include preserving weights across samples for faster training (Pham et al., 2018), successive halving (Kumar et al., 2018), progressive steps to increase complexity (Liu et al., 2017a), as well as techniques to model the expected performance of sampled architectures (Deng et al., 2017; Brock et al., 2017; Baker et al., 2017). It is unclear how well surrogate functions correlate with final model performance (Zela et al., 2018). We investigate the use of distillation for performing this evaluation." }, { "heading": "3 MULTI-TASK FEATURE PARTITIONING", "text": "Sharing in the context of multi-task architecture search is often adjusted at the level of individual layers (Rosenbaum et al., 2017; Fernando et al., 2017), but given that state-of-the-art architectures work so well across datasets and tasks, we choose to preserve the layer-level execution path for each task and focus on sub-architectural changes that can be made. In this case, we decide whether or not individual feature channels within each layer are available. This way, every task experiences the exact same organization of layers, but a unique calculation of intermediate layer features.
Concretely, given a feature tensor F ∈ ℝ^{c×h×w}, we define a binary mask m_f ∈ {0,1}^c for each task, and during a forward pass of the network multiply F by m_f to zero out all channels not associated with a particular task. We further define a mask for the backward pass, m_b ∈ {0,1}^c, whose non-zero elements are a subset of the non-zero elements in m_f. Gradients are calculated as usual through standard backpropagation, but any weights we wish to leave unchanged will have their gradients zeroed out according to m_b.
Together, these masks can capture the training dynamics seen in many existing multi-task architecture designs (Rosenbaum et al., 2017; Rebuffi et al., 2018). For example, one can devote an outsized proportion of features to a task like ImageNet classification, then make these features available during forward inference on a new, smaller dataset. A backward mask, m_b, can then be defined to ensure that the ImageNet weights remain untouched when finetuning on the new task.
There are a number of advantages to allocating resources at the channel level. There is enough flexibility to allow fine-grained control, allotting specific weights to particular subsets of tasks. And after training, it is straightforward to prune the network according to any mask configuration. This means that for simple tasks that only require a small subset of channels, we can use a fraction of the compute at test time while still leveraging the advantages of joint training with other tasks.
An important implementation detail is that masks are applied at every other layer. Consider, for example, making a task use half the model. One might think to set half the values of m_f to one and apply it at each layer. But that would mean c/2 inputs and c/2 outputs at each layer, which only uses one quarter of the original model. Instead, applying a mask at every other layer produces the desired behavior of allocating half the model to the task.
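The masking scheme above can be sketched in a few lines of PyTorch. The snippet below is our own illustration of the idea rather than the paper's implementation: m_f zeroes the activations of unassigned channels in the forward pass, while a gradient hook applies m_b so that filters reserved for other tasks are left untouched.

import torch

def masked_forward(feats, m_f):
    # Zero out channels not available to the task (forward mask m_f).
    return feats * m_f.view(1, -1, 1, 1)

def mask_gradients(conv, m_b):
    # Zero gradients of output filters outside the backward mask m_b, so
    # weights we wish to leave unchanged are not updated for this task.
    conv.weight.register_hook(lambda g: g * m_b.view(-1, 1, 1, 1))

conv = torch.nn.Conv2d(64, 64, 3, padding=1, bias=False)
m_f = torch.ones(64)                   # task may read all 64 channels
m_b = torch.zeros(64); m_b[:32] = 1.0  # but may only update the first 32
mask_gradients(conv, m_b)
out = masked_forward(conv(torch.randn(2, 64, 16, 16)), m_f)
out.sum().backward()
print(conv.weight.grad[32:].abs().max())  # tensor(0.): frozen filters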
" }, { "heading": "3.1 PARTITIONING PARAMETERIZATION", "text": "Now that we have decided to partition up feature channels, how do we go about finding the best masks for each task? Consider defining a binary matrix that specifies all partitioning masks: M ∈ {0, 1}^{c×n}, where c is the number of feature channels and n is the total number of tasks. A direct search over this matrix is problematic. It is not straightforward to optimize over a space of many discrete values, and one must account for significant redundancy given that all permutations of channels are equivalent. Moreover, naive random sampling would never cover the full space of partitioning strategies (consider the probability of randomly sampling two masks m that were mutually exclusive). In order to see diverse degrees of feature sharing, the overlap of channels between masks must be explicitly accounted for.\nThus instead of searching over M, we propose searching directly over the features of M that determine performance: 1) the number of feature channels used by each task, and 2) the amount of sharing between each pair of tasks. The former decides the overall capacity available for the task while the latter shapes how tasks help or interfere with each other. We explicitly parameterize these factors in a matrix P. We can compute P = (1/c) M^T M, where the diagonal elements of P provide the percentage of feature channels used by each task and the off-diagonal elements define the percentage of overlapping features between task pairs.\nWhen sampling new partitioning strategies, we sample directly from P and identify a corresponding mask M to match it. To remove some ill-posed parts of the space we take additional steps to adjust P. More details of this process as well as how we derive M from P can be found in the appendix.\nThis representation has a number of distinct advantages. It is not tied to the number of channels in a given layer, so a single parameterization can be used for layers of different sizes. It is low dimensional, particularly since n is typically much smaller than c. And it is interpretable, providing a clear impression of which tasks require more or less network capacity and which tasks train well together. Moreover, we get an immediate and direct measurement of average node usage per task: it is simply the mean of the diagonal of P. We will use this metric to compare the resource efficiency of different proposed partitioning strategies." }, { "heading": "4 OPTIMIZATION STRATEGY", "text": "In order to optimize over different parameterizations P, there are two key ideas to cover: how we choose samples from the space and how we evaluate and compare different samples." }, { "heading": "4.1 SEARCH STRATEGIES", "text": "We treat our search setting as a black box optimization problem where given a particular parameterization we have a function which returns a score assessing its quality. Based on this score we can then choose how to further sample new parameterizations. We investigate two strategies for finding good constraint matrices.\nRandom sampling: The first is to simply randomly sample values. This has already been demonstrated to serve as a strong baseline in some architecture search work (Li & Talwalkar, 2019; Liu et al., 2017a). With the low dimensionality of the matrix as well as the additional steps taken to preprocess constraints, it is not unreasonable that much of the space can be covered with random samples. Random samples serve well to map out large swaths of the search space and identify the principal choices that affect final performance. Concretely, a random matrix P can be sampled with values taken uniformly from 0 to 1. If a particular resource target is desired, it is trivial to bias or restrict samples to a specific range of parameter usage.
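A short NumPy sketch of the parameterization and of random sampling over it (the helper names are ours, not the paper's):

```python
import numpy as np

def parameterization_from_masks(M):
    # M: binary masks of shape [c, n]; P = (1/c) * M^T M.
    # diag(P): fraction of channels each task uses; off-diagonal: pairwise overlap.
    return (M.T @ M) / M.shape[0]

def sample_random_P(n, rng, max_usage=1.0):
    # Uniform random symmetric candidate in [0, max_usage]; restricting
    # max_usage biases samples toward a specific resource target.
    P = rng.uniform(0.0, max_usage, size=(n, n))
    return (P + P.T) / 2.0

rng = np.random.default_rng(0)
P = sample_random_P(n=9, rng=rng)     # e.g. nine Decathlon tasks
avg_usage = P.diagonal().mean()       # the resource-efficiency metric from above
```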
Evolutionary strategies: Because P is continuous, it is possible to search over parameterizations with gradient-based optimization. We run experiments using evolutionary strategies1. More specifically, we use a simple implementation with the modifications described by Mania et al. (2018). A gradient direction is approximated by sampling several random directions in the parameter space and computing finite differences to see which directions seem most promising. A weighted average is then computed across all directions to calculate the gradient used to update the current parameters.\nA key feature of our approach is that we modify the algorithm to prioritize parameterizations that use as few channels as necessary per task. An additional L2 weight regularization term is added to the parameters on the diagonal of P. This serves to reduce the number of channels used by each task, in particular those that can be pulled down without affecting the overall accuracy and performance of the model. By controlling the strength of this regularization we can tune the importance of resource efficiency in the search process.\nUsing this optimization strategy is only possible because of the parameterization defined in Section 3.1. Approximating gradients makes sense in the context of the continuous constraints defined in P, and we can more effectively explore the space of multi-task architectures using this signal. This is different from existing architecture search work where search decisions correspond to the coarse selection of entire computational blocks and their connections to each other." }, { "heading": "4.2 SAMPLE EVALUATION", "text": "Finally, we must evaluate different partitioning schemes. But as discussed, determining the relative effectiveness of one partitioning over another by training models to convergence is expensive. One possible strategy is to train models for a short period of time, assuming that the relative differences in performance that appear early in training correlate well with differences in performance when trained for longer. We instead propose to use feature distillation to observe the representational capacity of a partitioned layer. We test how well shared multi-task layers can reproduce the activations of corresponding single-task layers. By only focusing on a few layers, we reduce total computation and the number of weights that need to be tuned. In addition, directly distilling to intermediate layer activations provides a more direct training signal than a final classification loss.\nGiven a proposed partitioning mask, we initialize new layers to be distilled and load reference models for each target task. Input to the layer is generated by passing through the task-specific pretrained model up to a target depth. The resulting features are then passed through the subsequent layers of the pretrained model as well as the new shared layers. In the new layers, intermediate features are masked according to the proposed partitioning. This procedure is illustrated in Figure 2.\nWe use a mean-squared error loss to supervise the shared layers such that their output features match those produced by the reference teacher models.
We can measure the effectiveness of this distillation by replacing the original pretrained layers with the new shared layers and measuring the updated model accuracy.\n1This can also be referred to as “random search” (Mania et al., 2018), but we will instead use “evolutionary strategies” to avoid confusion with random sampling which is also called random search in existing NAS work (Li & Talwalkar, 2019).\nIt is important to emphasize that we are not using distillation to get a final multi-task model as we do not want to be limited by the performance of individual pre-trained models. Instead, distillation serves as a proxy task for quickly evaluating partitioning strategies. We do not run the distillation process to convergence, only for a brief interval, and it serves as a sufficient signal to provide feedback on different parameterizations. This leads to a dramatic reduction in the time required to evaluate a particular masking strategy." }, { "heading": "5 EXPERIMENTS", "text": "We run a number of experiments to investigate the role our proposed parameterization and distillation play in finding multi-task architectures that minimize task computation and parameter use while achieving high accuracy. All experiments are performed using the Visual Decathlon dataset (Rebuffi et al., 2017). Visual Decathlon is composed of many well-established computer vision classification datasets of various sizes and respective difficulties. There is sufficient diversity that it is difficult to determine which datasets would benefit from more or less network capacity and parameter sharing.\nWe investigate how a model performs when trained on nine Decathlon tasks at once (all datasets except for ImageNet). We initialize a shared ResNet model with a separate fully connected layer output for each task. To simplify experiments, we freeze the first two-thirds of the model and only apply feature partitioning to the last third. For training, we alternate mini-batches sampled from each dataset and apply a standard cross-entropy classification loss at the appropriate task-specific output. More thorough implementation and experiment details can be found in the appendix.\nIt is unclear whether performance will necessarily be better with any feature restriction as opposed to using the full model for all tasks. One question is whether partitioning well leads to a reduction in interference across tasks and perhaps improved performance. In addition, we wish to see the overall relationship between performance and feature restriction in this multi-task setting. What’s the best performance possible as average feature use is reduced further and further?" }, { "heading": "5.1 DISTILLATION", "text": "Before performing our search we need to know that given a sampled set of feature masks M, distillation performance correlates well with the final accuracy of a model trained to convergence. This will determine whether our proposed distillation is a reasonable surrogate in lieu of full training. The higher the correlation between the two, the more confidence we can place in our search process.\nWhen performing distillation we initialize the child layers with pretrained layers from an ImageNet model, since the parent single-task networks have also been initialized from the same model. This accelerates the distillation process. The whole process takes just one minute on a P100 GPU.
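A minimal sketch of the distillation proxy (PyTorch; the teacher/student modules, loaders as infinite batch iterators, and all names are illustrative assumptions, not the authors' code):

```python
import torch
import torch.nn.functional as F

def distillation_proxy(student, teachers, trunk, masks, loaders, steps=3000, lr=1.0):
    # Briefly train the shared `student` layers so that, under each task's channel
    # mask, their output matches the task-specific teacher's features (MSE).
    opt = torch.optim.SGD(student.parameters(), lr=lr)
    for step in range(steps):
        task = step % len(loaders)            # alternate tasks uniformly
        x, _ = next(loaders[task])
        with torch.no_grad():
            feats = trunk(x)                  # frozen early layers (shared)
            target = teachers[task](feats)    # single-task reference activations
        m = masks[task].view(1, -1, 1, 1)
        loss = F.mse_loss(student(feats) * m, target * m)
        opt.zero_grad(); loss.backward(); opt.step()
    return -loss.item()                       # higher score = better match
```

The returned score is what the search compares across candidate masks; it is not run to convergence, matching the brief-interval evaluation described above.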
Further details are available in the appendix.\nWe sample many random partitioning masks and run both the distillation procedure and full training to convergence. As a baseline, we also see how well final validation accuracy compares to accuracies seen earlier in training. We compare to the accuracies after 5k and 10k iterations (corresponding to 5 and 10 minutes of training). As seen in Table 1, the distillation procedure (which takes a fraction of the time) correlates better with final accuracy. This allows us to sample and compare many more parameterizations during our search process and have more confidence that the top performing parameterizations will do well when training a full model." }, { "heading": "5.2 ARCHITECTURE SEARCH", "text": "Randomly sampling parameterizations: To map out the performance of different partitioning strategies we sample random parameterizations and plot distillation performance against the average percentage of allocated features (the mean of the diagonal of P) in Figure 4 (left). From the distribution of random samples we get an impression of the best performance possible at different degrees of resource use. The combination of fast feedback with distillation plus effective search space coverage with our proposed parameterization produces this information with fewer samples and in less time. At high levels of average feature use, choice of partitioning can only make so much of a difference. We are interested in the opposite: how well we can do when restricting task computation as much as possible. Here, partitioning well is necessary to achieve high performance.\nIt is also important to note that explicitly specifying the degree of sharing between tasks is critical. This is shown in the middle plot of Figure 4. We evaluate different partitioning strategies with three fixed sets of values for the diagonal of P, only adjusting the amount of sharing that takes place between tasks. There can be a significant difference in performance in each of these cases, and as expected, sharing affects performance more and more as average parameter use goes down, since there is more flexibility when choosing how to overlap features. It is important that feature sharing is parameterized when doing any sort of optimization.\nFinally, we look at per-task results (Figure 5). In this particular setting, every task benefits from using as many features as possible. This may have to do with using a model pretrained on ImageNet, but it makes sense that tasks benefit from using as many features as they can. An important facet of this is how much of those features are shared. As average parameter usage increases across tasks (indicated by a lighter color), individual tasks suffer as they now share more of their features and must deal with the interference of other tasks.\nEvolutionary strategies: As mentioned above, the distribution of random samples gives an immediate impression of the best level of performance possible as a function of average parameter use. Evolutionary strategies provides a means to more directly push this edge of performance even further. We visualize the search process by plotting samples over the course of optimization overlaid on the distribution of samples found by random sampling (Figure 4 (right)). ES optimization quickly identifies samples that provide the best accuracy given their current level of parameter use and densely samples in this space, making slight changes for any last available improvements to performance.
Furthermore, adjusting the weight decay penalty used during optimization controls the resource use of the final partitioning strategy. This allows us to easily tune the optimization to reach the best architecture that meets specific resource needs.\nThe best parameterization found with evolutionary strategies outperforms a number of baselines for performing partitioning as seen in Table 2. We compare across several strategies with different degrees of feature use and sharing. We measure validation accuracy of models trained to convergence (averaged over five trials). These baselines include: independent partitions that split features evenly across tasks, sharing half of available feature channels and splitting the rest, and finally, sharing all feature channels. In line with our random sampling, the more channels given across tasks, the better the performance. Sharing everything does the best amongst these baselines. However, using the parameterization found from our optimization both reduces average channel use and achieves better performance overall.\nWe see that there exist partitioning strategies that cut down average feature use dramatically while still maintaining the same overall performance. This is in large part due to simple tasks that only need a small fraction of channels (DPed for example in Fig 5). By taking away the interference caused by these simpler tasks, harder tasks stand to gain more, and that can be seen in Table 2 with tasks like CIFAR100, Flowers, and Omniglot seeing the largest gains from an effective partitioning strategy." }, { "heading": "6 CONCLUSION", "text": "In this work we investigate efficient multi-task architecture search to quickly find models that achieve high performance under a limited per-task budget. We propose a novel strategy for searching over feature partitioning that automatically determines how much network capacity should be used by each task and how many parameters should be shared between tasks. We design a compact representation to serve as a search space, and show that we can quickly estimate the performance of different partitioning schemes by using feature distillation." }, { "heading": "A APPENDIX", "text": "A.1 PARTITIONING PARAMETERIZATION\nRefining P: We define a simple set of operations that convert from the raw search space P to a constraint matrix P̃ that is more likely to correspond to feasible masks. Given the knowledge that pairwise values of P are conditioned on its diagonal terms, it is not possible for there to be more overlap between two tasks than the channels used by any one task. That is, no off-diagonal element P_ij should be greater than the corresponding diagonal elements P_ii and P_jj.\nWe remap all off-diagonal elements to appropriate values determined by the diagonal of the matrix. This means that for any off-diagonal element in P, 0 now maps to the minimum possible overlap and 1 to the maximum possible overlap of the two tasks. The procedure is defined as follows:\nD = diag(P) 1^T ∈ R^{n×n} (1)\nP_min = max(0, D + D^T − J) (2)\nP_max = min(D, D^T) (3)\nP̃ = P ∘ I + (P ∘ (P_max − P_min) + P_min) ∘ (J − I) (4)\nwhere 1 and J are the column vector and the matrix of ones respectively, and ∘ represents the Hadamard product.
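A small NumPy sketch of equations (1)-(4), assuming P has values in [0, 1] (the function name is ours):

```python
import numpy as np

def refine_P(P):
    # Remap off-diagonal entries of P so pairwise overlaps are feasible
    # given the per-task usages on the diagonal (equations 1-4).
    n = P.shape[0]
    I, J = np.eye(n), np.ones((n, n))
    D = np.outer(np.diag(P), np.ones(n))    # D = diag(P) 1^T
    P_min = np.maximum(0.0, D + D.T - J)    # least possible overlap per pair
    P_max = np.minimum(D, D.T)              # most possible overlap per pair
    return P * I + (P * (P_max - P_min) + P_min) * (J - I)
```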
Deriving M from P̃: Now, we must find a feasible mask M to satisfy the constraints specified in P̃. This can be formulated as a mixed integer programming problem. To show this, let P̃ ∈ [0, 1]^{n×n} denote a given parameterization of constraints; we are interested in determining a binary mask M by minimizing:\nminimize ∑_{i=1}^{n} φ_i + ∑_{i=1}^{n} ∑_{j=1}^{n} ξ_{ij} subject to: (5)\n∑_{k=1}^{c} M_{ki} ≤ cP̃_{ii} + φ_i ∀i ∈ [1, n] (6)\n∑_{k=1}^{c} t_{ijk} ≤ cP̃_{ij} + ξ_{ij} ∀i, j ∈ [1, n] (7)\nt_{ijk} ≤ (1/2)(M_{ki} + M_{kj}) ∀i, j ∈ [1, n] (8)\nt_{ijk} ≥ M_{ki} + M_{kj} − 1 ∀i, j ∈ [1, n] (9)\nt_{ijk}, M_{ki} ∈ {0, 1}, φ_i, ξ_{ij} ≥ 0 ∀k ∈ [1, c] (10)\nWe employ two techniques. First, we introduce two slack variables φ and ξ to relax the unary task constraint in equation 6 and the pairwise task constraints in equation 7. Second, we linearize the nonlinear constraints between tasks by introducing the binary auxiliary variable t, which encodes t_{ijk} = M_{ki}M_{kj}. For each pair of tasks (i, j), in a perfect solution without slack variables, we have:\n(M^T M)_{ij} = cP̃_{ij} = ∑_{k=1}^{c} t_{ijk} (11)\nWith the auxiliary variable t handling the non-linear constraint and the slack variables relaxing the targets, the search space is significantly reduced, and this becomes a mixed integer programming problem which can be conveniently solved by off-the-shelf solvers.\nA.2 ADDITIONAL EXPERIMENT DETAILS\nMulti-task training: For full model training, we use a batch size of 64 with SGD and momentum at a learning rate of 0.05 for 100k iterations, dropping to a learning rate of 0.005 at iteration 75k. All training was done on a single Nvidia P100 GPU. We followed the exact training, validation, and test splits provided by Visual Decathlon.\nSeveral steps are taken to ensure that a model trained simultaneously on multiple tasks converges well:\n• Batch normalization: We maintain separate batch normalization statistics per task as done in (Rebuffi et al., 2017). This adds minimal parameter overhead and accelerates training.\n• Momentum: We maintain separate gradient statistics when using momentum with SGD. This is important given our use of feature partitioning. At any given training step, we do not want the unused weights associated with other tasks to be updated.\n• Training curriculum: Rather than uniformly sampling across tasks we apply a simple strategy to choose mini-batches from tasks inversely proportional to their current training accuracy (see the sketch after this list). Tasks that lag behind get sampled more often. Evidence for this approach has been demonstrated in a multi-task reinforcement learning setting (Sharma et al., 2017). We find a curriculum over tasks more effective than a curriculum over individual samples (Jiang et al., 2018).\n• Pretrained ImageNet model: All experiments are performed with a pretrained ImageNet model. The model is trained using the PyTorch implementation made available by Rebuffi et al. (2018). Because ImageNet is orders of magnitude larger than any of the other datasets in Decathlon and takes much longer to train we exclude it in our partitioning experiments to focus on the interactions of other datasets.\nThe main hyperparameters that determine performance were learning rate and a temperature term that controlled the task sampling curriculum. This temperature term determines whether mini-batches are sampled uniformly across tasks or whether tasks with low training accuracy are weighted more heavily. For both hyperparameters we arrive at the final value by a simple grid search.\nFinal validation accuracy reported in Table 2 (in the main paper) is averaged across 5 trials.
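As referenced in the training-curriculum bullet above, a minimal sketch of accuracy-based task sampling (the temperature formulation and names are our illustrative choices):

```python
import numpy as np

def sample_task(train_accs, temperature, rng):
    # Weight tasks inversely to current training accuracy; with a high
    # temperature the distribution approaches uniform sampling.
    logits = (1.0 - np.asarray(train_accs)) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(train_accs), p=probs)

rng = np.random.default_rng(0)
next_task = sample_task([0.9, 0.5, 0.7], temperature=0.2, rng=rng)  # favors task 1
```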
Frozen layers: To further simplify experiments, we freeze and share the first two-thirds of the network. Partitioning is thus only performed on the last third of the model. By only updating the weights of the last block, we focus attention on the layers where task-specific features are most important without restricting the model’s representational capacity to fit each task.\nIn all of our experiments we use an ImageNet-pretrained ResNet model made up of three computational blocks with four layers each. We freeze the first two computational blocks and only perform partitioning on the last set of layers. The justification for this stems from analysis performed with individual single-task models. We compare feature differences across finetuned task-specific models. These models were trained with no restrictions, initialized from an ImageNet model, until converging to high accuracy on some target task. Because we start with a pretrained model we can make meaningful comparisons of each model’s channel activations to see how task feature use diverges after finetuning.\nWe compare intermediate task features after passing in a shared image into every model. An important detail here is that we control for the batch normalization statistics associated with the dataset that the image is sampled from. The subsequent features produced by each model are almost identical all the way up through the first two-thirds of the model. Aside from subtle differences, task-specific differentiation did not occur until the final third of the model where features were still somewhat correlated but differed dramatically model to model. This is visualized in Figure 6. Because of this we decided the task-specific differentiation afforded by feature partitioning would not be as important in earlier stages of the model, and experiments would be more informative and also faster to run while focusing only on the last set of layers.\nDistillation details: We do not use the accuracy-based curriculum used in normal training during distillation and instead alternate mini-batches uniformly across each task. Distillation training is done for a brief 3000 iterations with a batch size of 4 and a learning rate of 1, which is dropped by a factor of 10 at iteration 2000.\nDistillation is done on the last four ResNet layers at once to match the final training setting as closely as possible. All scores reported when performing distillation are averaged across three trials.\nEvolutionary strategies details: The optimization curves shown in the paper are from runs that have each taken 1000 samples; these were performed on machines with 4 P100 GPUs. Given that sample evaluation takes roughly a minute, the whole procedure takes just over four hours.\nAt each step, 16 random parameter directions are sampled and these are both added and subtracted from the current parameterization P to produce 32 new samples to evaluate. A gradient is calculated based on the results of these samples, and a gradient descent step is applied to the current parameters with a learning rate of 0.1. Both clipping and a sigmoid operation were tested to ensure that values remain between 0 and 1 with no discernible difference in optimization performance.
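Putting the evolutionary-strategies details above together, a minimal sketch of one search step in the style of Mania et al. (2018) (all names are ours; `evaluate` stands in for the distillation proxy, and the exact normalization is a sketch-level choice):

```python
import numpy as np

def es_step(P, evaluate, rng, n_dirs=16, sigma=0.05, lr=0.1, l2_diag=0.01):
    # One evolutionary-strategies update on the parameterization P.
    # `evaluate(P)` returns a scalar score (higher is better).
    grad = np.zeros_like(P)
    for _ in range(n_dirs):
        delta = rng.normal(size=P.shape) * sigma
        plus = evaluate(np.clip(P + delta, 0.0, 1.0))
        minus = evaluate(np.clip(P - delta, 0.0, 1.0))
        grad += (plus - minus) * delta          # finite-difference estimate
    grad /= (2 * sigma * n_dirs)
    grad -= 2 * l2_diag * np.diag(np.diag(P))   # L2 penalty on per-task usage
    return np.clip(P + lr * grad, 0.0, 1.0)     # keep values in [0, 1]
```

Clipping is used here to keep P in range, matching one of the two options (clipping vs. sigmoid) reported above.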
2019
null
SP:fcee5370a61cbfb74d07727f29d83623f2f452e5
[ "The paper proposes a layer-wise method for training the weights of a binary-tree-structured neural network such that it correctly reproduces certain classes of Boolean functions defined by binary-tree-structured Boolean circuits. Specifically, this paper shows analytically that if a circuit satisfies a property termed “local correlation” where there is sufficient correlation between every gate in the circuit and the true output label of the circuit, then this circuit can be learned by a neural network with the same structure as the circuit by training it one layer at a time from the input to the output. The paper motivates this by showing empirically that the k-parity problem with some bias to the labels can be learned by a neural network, but that this does not work when there is no bias in the labels, implying that this bias is necessary for successful learning. The paper shows formally that instances of the k-parity problem satisfy the local correlation assumption and can thus be learned, and also shows that there exists at least one distribution given by a simple generative model that satisfies this assumption and is thus also learnable in this manner. ", "This paper aims to study the correlation between the neural network's input and output by abstracting the network as a binary tree Boolean circuit problem. The paper is well-written, motivations are clearly presented, and literature reviews are well placed. The contributions are mainly theoretical, and the experimental plots are simply used for concept illustrations, therefore the correctness of the theoretical analysis has no empirical evaluations. " ]
Training neural-networks is computationally hard. However, in practice they are trained efficiently using gradient-based algorithms, achieving remarkable performance on natural data. To bridge this gap, we observe the property of local correlation: correlation between small patterns of the input and the target label. We focus on learning deep neural-networks with a variant of gradient-descent, when the target function is a tree-structured Boolean circuit. We show that in this case, the existence of correlation between the gates of the circuit and the target label determines whether the optimization succeeds or fails. Using this result, we show that neural-networks can learn the (log n)-parity problem for most product distributions. These results hint that local correlation may play an important role in differentiating between distributions that are hard or easy to learn.
[]
[ { "authors": [ "Emmanuel Abbe", "Colin Sandon" ], "title": "Provable limitations of deep learning", "venue": "arXiv preprint arXiv:1812.06369,", "year": 2018 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li" ], "title": "What can resnet learn efficiently, going beyond kernels", "venue": "arXiv preprint arXiv:1905.10337,", "year": 2019 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning and generalization in overparameterized neural networks, going beyond two layers", "venue": "arXiv preprint arXiv:1811.04918,", "year": 2018 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Zhao Song" ], "title": "A convergence theory for deep learning via overparameterization", "venue": "arXiv preprint arXiv:1811.03962,", "year": 2018 }, { "authors": [ "Sanjeev Arora", "Aditya Bhaskara", "Rong Ge", "Tengyu Ma" ], "title": "Provable bounds for learning some deep representations", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Sanjeev Arora", "Simon S Du", "Wei Hu", "Zhiyuan Li", "Ruosong Wang" ], "title": "Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks", "venue": null, "year": 1901 }, { "authors": [ "Eugene Belilovsky", "Michael Eickenberg", "Edouard Oyallon" ], "title": "Greedy layerwise learning can scale to imagenet", "venue": "arXiv preprint arXiv:1812.11446,", "year": 2018 }, { "authors": [ "Eugene Belilovsky", "Michael Eickenberg", "Edouard Oyallon" ], "title": "Decoupled greedy learning of cnns", "venue": "arXiv preprint arXiv:1901.08164,", "year": 2019 }, { "authors": [ "Avrim Blum", "Adam Kalai", "Hal Wasserman" ], "title": "Noise-tolerant learning, the parity problem, and the statistical query model", "venue": "Journal of the ACM (JACM),", "year": 2003 }, { "authors": [ "Alon Brutzkus", "Amir Globerson" ], "title": "Globally optimal gradient descent for a convnet with gaussian inputs", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Alon Brutzkus", "Amir Globerson" ], "title": "Why do larger models generalize better? 
a theoretical perspective via the xor problem", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Alon Brutzkus", "Amir Globerson", "Eran Malach", "Shai Shalev-Shwartz" ], "title": "Sgd learns overparameterized networks that provably generalize on linearly separable data", "venue": "arXiv preprint arXiv:1710.10174,", "year": 2017 }, { "authors": [ "Alon Brutzkus", "Amit Daniely", "Eran Malach" ], "title": "Id3 learns juntas for smoothed product distributions", "venue": "arXiv preprint arXiv:1906.08654,", "year": 2019 }, { "authors": [ "Amit Daniely" ], "title": "Sgd learns the conjugate kernel class of the network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Abhimanyu Das", "Sreenivas Gollapudi", "Ravi Kumar", "Rina Panigrahy" ], "title": "On the learnability of deep random networks", "venue": "arXiv preprint arXiv:1904.03866,", "year": 2019 }, { "authors": [ "Simon S Du", "Xiyu Zhai", "Barnabas Poczos", "Aarti Singh" ], "title": "Gradient descent provably optimizes over-parameterized neural networks", "venue": "arXiv preprint arXiv:1810.02054,", "year": 2018 }, { "authors": [ "Roozbeh Farhoodi", "Khashayar Filom", "Ilenna Simone Jones", "Konrad Paul Kording" ], "title": "On functions computed on trees", "venue": "arXiv preprint arXiv:1904.02309,", "year": 2019 }, { "authors": [ "Vitaly Feldman", "Parikshit Gopalan", "Subhash Khot", "Ashok Kumar Ponnuswami" ], "title": "New results for learning noisy parities and halfspaces", "venue": "In 2006 47th Annual IEEE Symposium on Foundations of Computer Science", "year": 2006 }, { "authors": [ "Vitaly Feldman", "Parikshit Gopalan", "Subhash Khot", "Ashok Kumar Ponnuswami" ], "title": "On agnostic learning of parities, monomials, and halfspaces", "venue": "SIAM Journal on Computing,", "year": 2009 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Gil Kalai" ], "title": "Boolean functions: Influence, threshold and noise", "venue": "In European Congress of Mathematics,", "year": 2018 }, { "authors": [ "Michael Kearns", "Ming Li", "Leonard Pitt", "Leslie Valiant" ], "title": "On the learnability of boolean formulae", "venue": "In Annual ACM Symposium on Theory of Computing: Proceedings of the nineteenth annual ACM conference on Theory of computing,", "year": 1987 }, { "authors": [ "Daphne Koller", "Nir Friedman" ], "title": "Probabilistic graphical models: principles and techniques", "venue": "MIT press,", "year": 2009 }, { "authors": [ "Jaehoon Lee", "Lechao Xiao", "Samuel S Schoenholz", "Yasaman Bahri", "Jascha Sohl-Dickstein", "Jeffrey Pennington" ], "title": "Wide neural networks of any depth evolve as linear models under gradient descent", "venue": null, "year": 1902 }, { "authors": [ "Nathan Linial", "Yishay Mansour", "Noam Nisan" ], "title": "Constant depth circuits, fourier transform, and learnability", "venue": "In 30th Annual Symposium on Foundations of Computer Science,", "year": 1989 }, { "authors": [ "Roi Livni", "Shai Shalev-Shwartz", "Ohad Shamir" ], "title": "On the computational efficiency of training neural networks. 
In Advances in neural information processing", "venue": null, "year": 2014 }, { "authors": [ "Chao Ma", "Lei Wu" ], "title": "A comparative analysis of the optimization and generalization property of two-layer neural network and random feature models under gradient descent dynamics", "venue": null, "year": 1904 }, { "authors": [ "Eran Malach", "Shai Shalev-Shwartz" ], "title": "A provably correct algorithm for deep learning that actually works", "venue": "arXiv preprint arXiv:1803.09522,", "year": 2018 }, { "authors": [ "Eran Malach", "Shai Shalev-Shwartz" ], "title": "Is deeper better only when shallow is good", "venue": "arXiv preprint arXiv:1903.03488,", "year": 2019 }, { "authors": [ "Samet Oymak", "Mahdi Soltanolkotabi" ], "title": "Overparameterized nonlinear learning: Gradient descent takes the shortest path", "venue": "arXiv preprint arXiv:1812.10004,", "year": 2018 }, { "authors": [ "Samet Oymak", "Mahdi Soltanolkotabi" ], "title": "Towards moderate overparameterization: global convergence guarantees for training shallow neural networks. arXiv:1902.04674 [cs, math, stat], February 2019", "venue": "URL http://arxiv.org/abs/1902.04674", "year": 1902 }, { "authors": [ "Shai Shalev-Shwartz", "Ohad Shamir", "Shaked Shammah" ], "title": "Failures of gradient-based deep learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Ohad Shamir" ], "title": "Distribution-specific hardness of learning neural networks", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Bo Xie", "Yingyu Liang", "Le Song" ], "title": "Diverse neural network learns true target functions", "venue": "arXiv preprint arXiv:1611.03131,", "year": 2016 }, { "authors": [ "Gilad Yehudai", "Ohad Shamir" ], "title": "On the power and limitations of random features for understanding neural networks", "venue": "arXiv preprint arXiv:1904.00687,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION AND MOTIVATION", "text": "It is well known (e.g. Livni et al. (2014)) that while deep neural-networks can express any function that can be run efficiently on a computer, in the general case, training them is computationally hard. Despite this theoretic pessimism, in practice, deep neural networks are successfully trained on real world datasets. Bridging this theoretical-practical gap seems to be the holy grail of theoretical machine learning nowadays. Maybe the most natural direction to bridge this gap is to find a property of data distributions that determines whether training is computationally easy or hard. The goal of this paper is to propose such a property.\nTo motivate this, we first recall the k-parity problem: the input is n bits, there is a subset of k relevant bits (which are unknown to the learner), and the output should be 1 if the number of 1’s among the relevant bits is even and −1 otherwise. It is well known (e.g. Shalev-Shwartz et al. (2017)) that the parity problem can be expressed by a fully connected two layer network or by depth log(n) locally connected 1 network. We observe the behavior of a one hidden-layer neural network trained on the k-parity problem, in two different instances: first, when the underlying distribution is the uniform distribution (i.e. the probability to see every bit is 12 ); and second, when the underlying distribution is a slightly biased product distribution (the probability for every bit to be 1 is 0.6). As can be clearly seen in figure 1, adding a slight bias to the probability of each bit dramatically affects the behavior of the network: while on the uniform distribution the training process completely fails, in the biased case it converges to a perfect solution.\nThis simple experiment shows that a small change in the underlying distribution can cause a dramatic change in the trainability of neural-networks. A key property that differentiates the uniform from the biased distribution is the correlation between input bits and the target label. While in the uniform distribution, the correlation between each bit and the label is zero, in the biased case every bit of the k bits in the parity has a non-negligible correlation to the label (we show this formally in section 5). So, local correlations between bits of the input and the target label seems to be a promising property which separates easy and hard distributions.\nIn this paper, we analyze the problem of learning tree-structured Boolean circuits with neuralnetworks. The key property that we assume is having sufficient correlation between every gate in the circuit and the label. We show that a variant of gradient-descent can efficiently learn such\n1i.e. every two adjacent neurons are only connected to one neuron in the upper layer.\ncircuits for some families of distributions, where at the same time, without correlation, gradientdescent is likely to fail. More concretely, we discuss specific target functions and distributions that satisfy the local correlation requirement. We show that for most product distributions, gradientdescent learns the (log n)-parity problem (parity on log n bits of an input with dimension n). We further show that for every circuit with AND/OR/NOT gates, there exists a generative distribution, such that gradient-descent recovers the Boolean circuit exactly.\nAdmittedly, as the primary focus of this paper is on theoretical analysis, the distributions we study are synthetic in nature. 
Admittedly, as the primary focus of this paper is on theoretical analysis, the distributions we study are synthetic in nature. However, to explain the empirical success of neural-networks, we need to verify whether the local correlation property holds for natural datasets as well. To confirm this, we perform the following simple experiment: we train a network with two hidden-layers on a single random patch from images in the ImageNet dataset. We observe that even on a complex task such as ImageNet, a network that gets only a 3×3 patch as an input achieves 2.6% top-5 accuracy — much better than a random guess (0.5% top-5 accuracy). The full results of the experiment are detailed in the appendix. This experiment highlights that, to some extent, natural datasets display a local correlation property: even a few “bits” of the input already have some non-negligible information on the target label." }, { "heading": "2 RELATED WORK", "text": "In recent years, the success of neural-networks has inspired ongoing theoretical research, trying to explain empirical observations about their behavior. Some theoretical works show failure cases of neural-networks. Other works give different guarantees on various learning algorithms for neural-networks. In this section, we cover the main works that are relevant to our paper.\nFailures of gradient-based algorithms. Various works have shown different examples demonstrating failures of gradient-based algorithms. The work of Shamir (2018) shows failures of gradient descent, both in learning natural target functions and in learning natural distributions. The work of Shalev-Shwartz et al. (2017) shows that gradient-descent fails to learn parities and linear-periodic functions under the uniform distribution. In Das et al. (2019), a hardness result for learning random deep networks is shown. Other similar failure cases are also covered in Abbe & Sandon (2018); Malach & Shalev-Shwartz (2019). While the details of these works differ, they all share the same key principle: if there is no local correlation, gradient-descent fails. Our work complements these results, showing that in some cases, when there are local correlations to the target, gradient-descent succeeds in learning the target function.\nLearning neural-networks with gradient-descent. Recently, a large number of papers have provided positive results on learning neural-networks with gradient-descent. Generally speaking, most of these works show that over-parametrized neural-networks, deep or shallow, achieve performance that is competitive with kernel-SVM. Daniely (2017) shows that SGD learns the conjugate kernel associated with the architecture of the network, for a wide enough neural-network. The work of Brutzkus et al. (2017) shows that SGD learns a neural-network with good generalization, when the target function is linear. A growing number of works show that for a specific kernel induced
A few results do discuss success cases of gradient-descent that go beyond the kernel-based analysis (Brutzkus & Globerson, 2017; 2019; Allen-Zhu & Li, 2019; Yehudai & Shamir, 2019). However, these works still focus on very simple cases, such as learning a single neuron, or learning shallow neural-networks in restricted settings. In this work we deal with learning deep networks, going beyond the common reduction to linear classes of functions.\nLayerwise optimization algorithms. In this paper, we analyze the behavior of layerwise gradientdescent — optimizing one layer at a time, instead of the common practice to optimize the full network end-to-end. We do so since such algorithm greatly simplifies our theoretical analysis. While layerwise training is not a common practice, recent works (Belilovsky et al., 2018; 2019) have shown that such algorithms achieve performance that are competitive with the standard end-to-end approach, scaling up to the ImageNet dataset. We note that other theoretical works have studied iterative algorithms that learn neural-networks layer-by-layer (Arora et al., 2014; Malach & ShalevShwartz, 2018). However, our work focuses specifically on layerwise gradient-descent, considering the problem of learning Boolean circuits.\nLearning Boolean Circuits. The problem of learning Boolean circuits has been studied in the classical literature of theoretical machine learning. The work of Kearns et al. (1987) gives various positive and negative results on the learnability of Boolean Formulas, including Boolean circuits. The work of Linial et al. (1989) introduces an algorithm that learns a constant-depth circuit in quasipolynomial time. Another work by Kalai (2018) discusses various properties of learning Boolean formulas and Boolean circuits. Our work differs from the above in various aspects. Our main focus is learning deep neural-networks with gradient descent, where the target function is implemented by a Boolean circuit, and we do not aim to study the learnability of Boolean circuits in general. Furthermore, we consider Boolean circuits where a gate can take any Boolean functions, and not only AND/OR/NOT, as is often considered in the literature of Boolean circuits. On the other hand, we restrict ourselves to the problem of learning circuits with a fixed structure of full binary trees. We are not aware of any work studying a problem similar to ours." }, { "heading": "3 PROBLEM SETTING", "text": "We consider the problem of learning binary classification functions over the Boolean cube. So, let X = {±1}n be the instance space and Y = {±1} be the label set. Throughout the paper, we assume the target function is given by a Boolean circuit. In general, such assumption effectively does not limit the set of target functions, as any computable function can be implemented by a Boolean circuit. We define a circuit C to be a directed graph with n input nodes and a single output node, where each inner node has exactly two incoming edges, and is labeled by some arbitrary Boolean function f : {±1}2 → {±1}, which we call a gate 2. For each node v in C we denote by γ(v) ∈ { f : {±1}2 → {±1} } its gate. We recursively define hv,C : {±1}n → {±1} to be:\nhv,C(x) = γ(v) (hu1,C(x), hu2,C(x))\nwhere u1, u2 are the two nodes with outcoming edges to v. Finally, define hC = ho,C , where o is the output node.\nWe study the problem of learning the target function hC , when C is a full binary tree, and n = 2d, where d is the depth of the tree. 
The leaves of the tree are the input bits, ordered by x_1, . . . , x_n. Admittedly, such an assumption greatly limits the set of target functions, but still gives a rather rich family of functions. For example, such a circuit can calculate the parity function on any k bits of the input (the function calculated by f(x) = ∏_{i∈I} x_i for some set of indexes I). We note that the total number of functions calculated by such a tree grows like 6^n, as shown in Farhoodi et al. (2019).\n2Note that in the literature on Boolean circuits it is often assumed that the gates are limited to being AND/OR and NOT. We allow the gates to take any Boolean function, which makes this model somewhat stronger.\nWe introduce a few notations that are used in the sequel. Fix some tree-structured binary circuit C. This circuit has d levels, and we denote by v_{i,j} the j-th node in the i-th level of the tree, and denote γ_{i,j} = γ(v_{i,j}). Fix some i ∈ [d], let n_i := 2^i, and denote by Γ_i : {±1}^{n_i} → {±1}^{n_i/2} the function calculated by the i-th level of the circuit:\nΓ_i(x) = (γ_{i−1,1}(x_1, x_2), . . . , γ_{i−1,n_i/2}(x_{n_i−1}, x_{n_i}))\nFor i < i′, we denote: Γ_{i...i′} := Γ_i ◦ · · · ◦ Γ_{i′}. So, the full circuit is given by h_C(x) = Γ_{1...d}(x).\nAs noted, our goal is to learn Boolean circuits with neural-networks. To do so, we use a network architecture that aims to imitate the Boolean circuits described above. We replace each Boolean gate with a neural-gate: a one hidden-layer ReLU network, with a hard-tanh3 activation on its output. Formally, let σ be the ReLU activation, and let φ be the hard-tanh activation, so:\nσ(x) = max(x, 0), φ(x) = −1 if x ≤ −1, φ(x) = x if x ∈ (−1, 1), φ(x) = 1 if x ≥ 1\nDefine a neural-gate to be a neural-network with one hidden layer, input dimension 2, with ReLU activation for the hidden-layer and hard-tanh for the output node. Namely, denote g_{w,v} : R^2 → R such that:\ng_{w,v}(x) = φ(∑_{l=1}^{k} v_l σ(⟨w_l, x⟩))\nNotice that a neural-gate g_{w,v} of width 4 or more can implement any Boolean gate. That is, we can replace any Boolean gate with a neural-gate, and maintain the same expressive power. To implement the full Boolean circuit defined above, we construct a deep network of depth d (the depth of the Boolean circuit), with the same structure as the Boolean circuit. We define d blocks, each block has neural-gates with the same structure and connectivity as the Boolean circuit. A block B_{W(i),V(i)} : R^{2^i} → R^{2^{i−1}} is defined by:\nB_{W(i),V(i)}(x) = [g_{w(i,1),v(i,1)}(x_1, x_2), g_{w(i,2),v(i,2)}(x_3, x_4), . . . , g_{w(i,2^{i−1}),v(i,2^{i−1})}(x_{2^i−1}, x_{2^i})]\nWe consider the process of training neural-networks of the form N_{W,V} = B_{W(1),V(1)} ◦ · · · ◦ B_{W(d),V(d)}. Notice that indeed, a network N_{W,V} can implement any tree-structured Boolean circuit of depth d.
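A sketch of a neural-gate and a block in code (PyTorch; hard-tanh is available as torch.nn.functional.hardtanh, and the class names and initialization scale are our illustrative choices):

```python
import torch

class NeuralGate(torch.nn.Module):
    # One-hidden-layer ReLU net on two inputs with a hard-tanh output, as in
    # g_{w,v}; the second layer v is fixed with entries in {±1}, as in the paper.
    def __init__(self, k=8):
        super().__init__()
        self.w = torch.nn.Parameter(torch.empty(k, 2).uniform_(-0.05, 0.05))
        self.v = (torch.rand(k) < 0.5).float() * 2 - 1   # fixed random signs
    def forward(self, x2):                               # x2: [batch, 2]
        return torch.nn.functional.hardtanh(torch.relu(x2 @ self.w.t()) @ self.v)

class Block(torch.nn.Module):
    # Applies one neural-gate to each disjoint pair of inputs.
    def __init__(self, width_in, k=8):
        super().__init__()
        self.gates = torch.nn.ModuleList(NeuralGate(k) for _ in range(width_in // 2))
    def forward(self, x):                                # x: [batch, width_in]
        cols = [g(x[:, 2*j:2*j+2]) for j, g in enumerate(self.gates)]
        return torch.stack(cols, dim=1)                  # [batch, width_in // 2]
```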
In practice, neural-networks are trained with gradient-based optimization algorithms, in an end-to-end fashion. That is, the weights of all the layers are optimized together, with gradient updates on a given sample. To simplify the analysis, we instead consider a layerwise optimization algorithm that performs gradient updates layer-by-layer. While this approach is much less popular, it has been recently shown to achieve performance that is comparable to the standard end-to-end approach, scaling up to the ImageNet dataset (Belilovsky et al., 2018).\nDenote by P the average-pooling operator, defined by P(x_1, . . . , x_n) = (1/n) ∑_{i=1}^{n} x_i. Denote the hinge-loss by ℓ(ŷ, y) = max(1 − yŷ, 0) and denote the loss on the distribution by L_D(f) = E_{(x,y)∼D}[ℓ(f(x), y)]. For a sample S ⊆ X × Y, denote the loss on the sample by L_S(f) = (1/|S|) ∑_{(x,y)∈S} ℓ(f(x), y). The layerwise gradient-descent algorithm for learning deep networks is described in algorithm 1.\nFor simplicity, we assume that the second layer of every neural-gate is fixed, such that v ∈ {±1}^k. Notice that this does not limit the expressive power of the network. Algorithm 1 iteratively optimizes the output of the network’s layers, starting from the bottom-most layer. For each layer, the average-pooling operator is applied to reduce the output of the layer to a single bit, and this output is optimized with respect to the target label. Note that in fact, we can equivalently optimize each neural-gate separately and achieve the same algorithm. However, we present a layerwise training process to conform with algorithms used in practice.\n3We chose to use the hard-tanh activation over the more popular tanh activation since it simplifies our theoretical analysis. However, we believe the same results can be given for the tanh activation.\nAlgorithm 1 Layerwise Gradient-Descent\ninput: Sample S ⊆ X × Y, number of iterations T ∈ N, learning rate η ∈ R.\nLet N_d ← id\nfor i = d . . . 1 do\nInitialize W(i)_0, V(i)_0.\nfor t = 1 . . . T do\nUpdate W(i)_t ← W(i)_{t−1} − η ∂/∂W(i)_{t−1} L_S(P(B_{W(i)_{t−1},V(i)_0} ◦ N_i))\nend for\nUpdate N_{i−1} ← B_{W(i)_T,V(i)_0} ◦ N_i\nend for\nReturn N_0
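Assuming the NeuralGate/Block sketches above, a minimal rendering of Algorithm 1 (average pooling plus the hinge loss per layer; all names and hyperparameters are ours):

```python
import torch

def layerwise_train(X, y, d, T=1000, eta=0.01, k=8):
    # Algorithm 1: train blocks bottom-up; each block's average-pooled output
    # is fit to the label with the hinge loss, then the block is frozen.
    feats = X                                      # N_d is the identity
    for i in range(d, 0, -1):
        block = Block(feats.shape[1], k=k)
        opt = torch.optim.SGD(block.parameters(), lr=eta)
        for t in range(T):
            pooled = block(feats).mean(dim=1)      # average pooling P
            loss = torch.clamp(1 - y * pooled, min=0).mean()
            opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            feats = block(feats)                   # N_{i-1} = B ∘ N_i
    return feats.squeeze(1)                        # final prediction in [-1, 1]
```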
In a sense, we expect that “most” distributions would not be exactly balanced, so this assumption is easy to satisfy.\nNow, consider the case where D (limited to X ) is a product distribution: for every j 6= j′, the variables xj and xj′ are independent, for (x, y) ∼ D. A simple argument shows that any product distribution D that satisfies assumptions 1, satisfies the following properties: Property 1. There exists some ∆ ∈ (0, 1) such that for every layer i ∈ [d] and for every gate j ∈ [2i], the output of the j-th gate in the i-th layer satisfies one of the following:\n• The value of the gate j is independent of the label y, and its influence is zero: Ii,j = 0.\n• The value of ci,j satisfies |ci,j | > |ED [y]|+ ∆. Property 2. For every layer i ∈ [d], and for every gate j ∈ [2i−1], the value of (x2j−1, x2j) (i.e, the input to the j-th gate of layer i− 1) is independent of the label y given the output of the j-th gate: P(x,y)∼D(i) [(x2j−1, x2j) = p, y = y′|γi−1,j(x2j−1, x2j)] = P(x,y)∼D(i) [(x2j−1, x2j) = p|γi−1,j(x2j−1, x2j)] · P(x,y)∼D(i) [y = y′|γi−1,j(x2j−1, x2j)]\nProperty 1 is immediate from assumption 1. The following lemma shows that property 2 is satisfied as well for any product distribution: Lemma 1. Assume D (restricted to X ) is a product distribution (i.e., for every j 6= j′ we have that xj and xj′ are independent, for (x, y) ∼ D). Then D satisfies property 2.\nNotice that properties 1, 2 may hold for distributions that are not product distribution (as we show in the next section). Specifically, property 2 is a very common assumption in the field of Graphical Models (see Koller & Friedman (2009)). For our results to hold in a more general setting, we use properties 1 and 2, instead of assuming that D is a product distribution satisfying assumption 1. So, given a distribution satisfying properties 1, 2 and assumption 2, we show that algorithm 1 achieves an arbitrarily good approximation with high probability, with sample complexity and run-time quasipolynomial in the dimension n: Theorem 1. Let D be a distribution satisfying properties 1, 2 and assumption 2. Assume that for every i we initialize W (i)0 such that ∥∥∥W (i)0 ∥∥∥ max ≤ 1 4 √ 2k . Fix some , δ > 0 and assume that k ≥ log−1( 43 ) log( 2nd δ ), and that η ≤ 1 16k . Assume we sample S ∼ D, with |S| > 128 2 min{∆,2β}2n 11+4 logn−2 log min{∆,2β} log( 8ndδ ). Then, with probability at least 1− δ, when running algorithm 1 on the sample S, the algorithm returns the a function such that:\nE(x,y)∼D [N0(x) 6= hC(x)] ≤\nwhen running T > 3 √\n2 ηmin{∆,2β} n 6.5+2 logn−log min{∆,2β} steps for each layer.\nThe above shows a learnability result in the standard PAC setting (given our distributional assumptions), where we only guarantee approximation of the target function under the given distribution. In fact, we can get a stronger result, and show that the algorithm learns the function hC exactly, with run-time and sample complexity polynomial in n. To get this result, we need to require that there is no feasible pattern (pair of bits) in the Boolean circuit that is extremely rare: Assumption 3. There exists some ∈ (0, 1) such that for every layer i ∈ [d], for every gate j ∈ [2i−1] and for every p ∈ {±1}2 such that P(x,y)∼D(i) [(x2j−1, x2j) = p] 6= 0, it holds that: P(x,y)∼D(i) [(x2j−1, x2j) = p] ≥ .\nIn section 5 we discuss distributions that satisfies assumption 3. Given all the above assumptions, we get the following: Theorem 2. Let D be a distribution satisfying properties 1, 2 and assumptions 2, 3. 
Assume that for every i we initialize W(i)_0 such that ‖W(i)_0‖_max ≤ 1/(4√(2k)). Fix some δ > 0 and assume that k ≥ log^{−1}(4/3) log(2nd/δ), and that η ≤ 1/(16k). Assume we sample S ∼ D, with |S| > (128 / (ε² min{∆, 2β}²)) · log(8nd/δ). Then, with probability at least 1 − δ, when running algorithm 1 on the sample S, the algorithm returns a function such that N_0(x) = h_C(x) for all x ∈ X, when running T > (3√2 n) / (η min{∆, 2β}) steps for each layer.\nWe give the full proof of the theorems in the appendix, and give a sketch of the argument here. Observe that the input to the (i, j)-th neural-gate is a pattern of two bits. The target gate (the (i, j)-th gate in the circuit C) identifies each of the four possible patterns with a single output bit. For example, if the gate is OR, then the patterns {(1, 1), (−1, 1), (1,−1)} get the value 1, and the pattern (−1,−1) gets the value −1. Fix some pattern p ∈ {±1}^2, and assume that the output of the (i, j)-th gate on the pattern p is 1. Since we assume the output of the gate is correlated with the label, the loss function draws the output of the neural-gate on the pattern p toward the correlation of the gate. In the case where the output of the gate on p is −1, the output of the neural-gate is drawn to the opposite sign of the correlation. All in all, the optimization separates the patterns that evaluate to 1 from the patterns that evaluate to −1. In other words, the neural-gate learns to implement the target gate. This way, we can show that the optimization process makes the network recover all the influencing gates, so that at the end of the optimization the network implements the circuit.\nObserve that when there is no correlation, the above argument fails immediately. Since the label is slightly biased, when there is no correlation the output of the neural-gate is drawn towards the bias of the label for all the input patterns, regardless of the value of the gate. If the gate is not influencing the target function (i.e. I_{i,j} = 0), then this clearly doesn't affect the overall behavior. However, if there exists some influencing gate with no correlation to the label, then the output of the neural-gate will be constant on all its input patterns. Hence, the algorithm will fail to learn the target function. This shows that assumption 1 is in fact critical for the success of the algorithm." }, { "heading": "5 DISTRIBUTIONS", "text": "In the previous section we showed that algorithm 1 can learn tree-structured Boolean circuits in polynomial run-time and sample complexity. These results require some non-trivial distributional assumptions. In this section we study specific families of distributions, and show that they satisfy the above assumptions.\nFirst, we study the problem of learning a parity function on log n bits of the input, when the underlying distribution is a product distribution. The problem of learning parities was studied extensively in the literature of machine learning theory (Feldman et al., 2006; 2009; Blum et al., 2003; Shalev-Shwartz et al., 2017; Brutzkus et al., 2019), and serves as a good case-study for the above results. In the (log n)-parity problem, we show that in fact most product distributions satisfy assumptions 1-3, hence our results apply to most product distributions. Next, we study distributions given by a generative model. We show that for every circuit with gates AND/OR and NOT, there exists a distribution that satisfies the above assumptions, so algorithm 1 can learn any such circuit exactly.
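Before turning to these specific families, here is a small sketch of checking the local-correlation property empirically, estimating c_j = E[x_j y] for a biased product distribution and a k-parity label (our illustrative code and parameter choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m, p = 32, 3, 100000, 0.75
X = np.where(rng.random((m, n)) < p, 1, -1)
y = X[:, :k].prod(axis=1)                  # k-parity over the first k bits

corr = (X * y[:, None]).mean(axis=0)       # estimates c_j = E[x_j y]
# Relevant bits: E[x_j y] = (2p-1)^(k-1) = 0.25; irrelevant bits:
# E[x_j]E[y] = (2p-1)^(k+1) ≈ 0.06; at p = 0.5 all correlations vanish.
print(corr[:k].round(2), corr[k:k+3].round(2))
```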
}, { "heading": "5.1 PRODUCT DISTRIBUTIONS", "text": "We consider the $k$-Parity problem, where the target function is $f(\mathbf{x}) = \prod_{j \in I} x_j$ for some subset $I \subseteq [n]$ of size $|I| = k$. A simple construction shows that $f$ can be implemented by a tree-structured circuit as defined previously. We define the gates of the first layer by:\n$$\gamma_{d-1,j}(z_1, z_2) = \begin{cases} z_1 z_2 & x_{2j-1}, x_{2j} \in I \\ z_1 & x_{2j-1} \in I,\ x_{2j} \notin I \\ z_2 & x_{2j} \in I,\ x_{2j-1} \notin I \\ 1 & \text{o.w.} \end{cases}$$\nAnd for all other layers $i < d-1$, we define: $\gamma_{i,j}(z_1, z_2) = z_1 z_2$. Then we get the following: Lemma 2. Let $C$ be a Boolean circuit as defined above. Then: $h_C(\mathbf{x}) = \prod_{j \in I} x_j = f(\mathbf{x})$.\nNow, let $\mathcal{D}_{\mathcal{X}}$ be some product distribution over $\mathcal{X}$, and denote $p_j := \Pr_{\mathcal{D}_{\mathcal{X}}}[x_j = 1]$. Let $\mathcal{D}$ be the distribution of $(\mathbf{x}, f(\mathbf{x}))$ where $\mathbf{x} \sim \mathcal{D}_{\mathcal{X}}$. Then for the circuit defined above we get the following result: Lemma 3. Fix some $\xi \in (0, \frac{1}{4})$. For every product distribution $\mathcal{D}$ with $p_j \in (\xi, \frac{1}{2} - \xi) \cup (\frac{1}{2} + \xi, 1 - \xi)$ for every $j$, it holds that if $I_{i,j} \ne 0$ then $|c_{i,j}| - |\mathbb{E}[y]| \ge (2\xi)^k$ and $\Pr_{(\mathbf{z},y)\sim\Gamma_{(i+1)\dots d}(\mathcal{D})}[z_j = 1] \in (\xi, 1 - \xi)$.\nThe above lemma shows that every product distribution that is far enough from the uniform distribution, or from a constant distribution, satisfies assumptions 1 and 2 with $\beta, \Delta = (2\xi)^k$. Using the fact that at each layer, the output of each gate is an independent random variable (since the input distribution is a product distribution), we get that assumption 3 is satisfied with $\epsilon = \xi^2$. This gives us the following result: Corollary 1. Let $\mathcal{D}$ be a product distribution with $p_j \in (\xi, \frac{1}{2} - \xi) \cup (\frac{1}{2} + \xi, 1 - \xi)$ for every $j$, with the target function being the $(\log n)$-Parity (i.e., $k = \log n$). Then, when running algorithm 1 as described in Theorem 2, with probability $1 - \delta$ the algorithm returns the true target function $h_C$, with run-time and sample complexity polynomial in $n$." }, { "heading": "5.2 GENERATIVE MODELS", "text": "Next, we move beyond product distributions, and consider families of distributions given by a generative model. We limit ourselves to circuits where each gate is chosen from the set $\{\wedge, \vee, \neg\wedge, \neg\vee\}$. For every such circuit, we define a generative distribution as follows: we start by sampling a label for the example, from a slightly imbalanced distribution (to satisfy assumption 2). Then iteratively, for every gate, we sample uniformly at random a pattern from all the patterns that give the correct output. For example, if the label is $1$ and the topmost gate is OR, we sample a pattern uniformly from $\{(1, 1), (1, -1), (-1, 1)\}$. The sampled pattern determines what the output of the second topmost layer should be. For every gate in this layer, we sample again a pattern that will result in the correct output. We continue in this fashion until reaching the bottom-most layer, which defines the observed example.\nFormally, for a given gate $\Gamma \in \{\wedge, \vee, \neg\wedge, \neg\vee\}$, we denote the following sets of patterns: $S_\Gamma = \{\mathbf{v} \in \{\pm 1\}^2 : \Gamma(v_1, v_2) = 1\}$, $S^c_\Gamma = \{\pm 1\}^2 \setminus S_\Gamma$. We recursively define $\mathcal{D}^{(0)}, \dots, \mathcal{D}^{(d)}$, where $\mathcal{D}^{(i)}$ is a distribution over $\{\pm 1\}^{2^i} \times \{\pm 1\}$:\n• $\mathcal{D}^{(0)}$ is a distribution supported on $\{(1, 1), (-1, -1)\}$ such that $\Pr_{\mathcal{D}^{(0)}}[(1, 1)] = \frac{1}{2} + \xi$ and $\Pr_{\mathcal{D}^{(0)}}[(-1, -1)] = \frac{1}{2} - \xi$, for some $0 < \xi < \frac{1}{12}(\frac{2}{3})^d$.\n• To sample $(\mathbf{x}, y) \sim \mathcal{D}^{(i)}$, first sample $(\mathbf{z}, y) \sim \mathcal{D}^{(i-1)}$. Then, for all $j \in [2^{i-1}]$, if $z_j = 1$ sample $\mathbf{x}'_j \sim \mathrm{Uni}(S_{\gamma_{i,j}})$, and if $z_j = -1$ sample $\mathbf{x}'_j \sim \mathrm{Uni}(S^c_{\gamma_{i,j}})$. Set $\mathbf{x} = [\mathbf{x}'_1, \dots, \mathbf{x}'_{2^{i-1}}] \in \{\pm 1\}^{2^i}$, and return $(\mathbf{x}, y)$.\nThen we have the following results: Lemma 4. For every $i \in [d]$ and every $j \in [2^i]$, denote $c_{i,j} = \mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}^{(i)}}[x_j y]$. 
Then we have:\n$$|c_{i,j}| - |\mathbb{E}[y]| > \frac{1}{3}\left(\frac{2}{3}\right)^d = \frac{1}{3}\, n^{\log(2/3)}$$\nWe also need the following simple observation: Lemma 5. For every $i \in [d]$ we have $\Gamma_i(\mathcal{D}^{(i)}) = \mathcal{D}^{(i-1)}$.\nBy definition, we have $\mathbb{E}[y] = 2\xi$, so $\mathcal{D}^{(d)}$ satisfies assumption 2 with $\beta = \xi$. Notice that from Lemma 4, the distribution $\mathcal{D}^{(d)}$ satisfies property 1 with $\Delta = \frac{1}{3} n^{\log(2/3)}$ (note that since we restrict the gates to AND/OR/NOT, all gates have influence). By its construction, the distribution also satisfies property 2, and it satisfies assumption 3 with $\epsilon = (\frac{1}{4})^d = \frac{1}{n^2}$. Therefore, we can apply Theorem 2 on the distribution $\mathcal{D}^{(d)}$, and get that algorithm 1 learns the circuit $C$ exactly in polynomial time. This leads to the following nice corollary: Corollary 2. With the assumptions and notations of Theorem 2, for every circuit $C$ with gates in $\{\wedge, \vee, \neg\wedge, \neg\vee\}$, there exists a distribution $\mathcal{D}$ such that when running algorithm 1 on a sample from $\mathcal{D}$, the algorithm returns $h_C$ with probability $1 - \delta$, in polynomial run-time and sample complexity.\nNote that the fact that for every circuit there exists a distribution that can be learned in the PAC setting is trivial: simply take a distribution that is concentrated on a single positive example; approximating the target function on such a distribution is achieved by a classifier that always returns a positive prediction. However, showing that there exists a distribution on which algorithm 1 exactly recovers the circuit is certainly non-trivial." }, { "heading": "6 DISCUSSION", "text": "In this paper we suggested the property of local correlation as a possible candidate for differentiating between hard and easy distributions. We showed that on the task of learning tree-structured Boolean circuits, the existence of local correlations between the gates and the target label allows layerwise gradient-descent to learn the target circuit. Furthermore, we showed specific tasks and distributions which satisfy the local correlation property. These results raise a few open questions, which we leave for future work. The most immediate research problem is showing similar results for more general structures of Boolean circuits, and on a wider range of distributions (beyond product distributions or generative models). More generally, we suggest that the local correlation property may be important in a broader context, beyond Boolean circuits. For example, examining whether an equivalent property exists when the target function is a convolutional network is an extremely interesting open problem. Needless to say, finding other properties of natural distributions that determine whether gradient-based algorithms succeed or fail is another promising research direction." }, { "heading": "A EXPERIMENTS", "text": "Figure 2 details the results of the ImageNet experiment discussed in the introduction." }, { "heading": "B PROOFS OF SECTION 4", "text": "We assume w.l.o.g. that for every $i, j$ such that $I_{i,j} = 0$, the $(i, j)$ gate is constant: $\gamma_{i,j} \equiv \mathrm{sign}(\mathbb{E}_{\mathcal{D}}[y])$. Since the output of this gate has no influence on the output $y$, we can choose it freely without changing the target function. To prove Theorem 1 and Theorem 2, we observe the behavior of the algorithm on the $i$-th layer. Let $\psi : \{\pm 1\}^{n_i} \to \{\pm 1\}^{n_i}$ be some mapping such that $\psi(\mathbf{x}) = (\xi_1 x_1, \dots, \xi_{n_i} x_{n_i})$ for $\xi_1, \dots, \xi_{n_i} \in \{\pm 1\}$. We also define $\varphi_i : \{\pm 1\}^{n_i/2} \to \{\pm 1\}^{n_i/2}$ such that:\n$$\varphi_i(\mathbf{z}) = (\nu_1 z_1, \dots, \nu_{n_i/2} z_{n_i/2}) \quad \text{where} \quad \nu_j := \begin{cases} \mathrm{sign}(c_{i-1,j}) & c_{i-1,j} \ne 0 \\ 1 & I_{i-1,j} = 0 \end{cases}$$\nFix some $\epsilon' > 0$. We need to handle “bad” examples - examples in which “rare” patterns appear. 
For every (i, j) gate, we observe all the input patterns p to the (i, j) gate that appear with probability at most ′. Denote the following set of triplets:\nP̃ ′ := { (i, j,p) : P(x,y)∼D(i) [(x2j−1, x2j) = p] < ′ }\nDenote the following set of “bad” examples: X̃ ′ := { x ∈ X : ∃(i, j,p) ∈ P̃ ′ s.t. (z2j−1, z2j) = p for z = Γ(i+1)...d(x) } We have the following important result, which we prove in the sequel:\nLemma 6. Fix > 0 and let ′ ≤ such that P(x,y)∼D [ x ∈ X̃ ′ ] <\n8 √ 2ni min{∆, 2β}. Assume we initialize w(0)l such that ∥∥∥w(0)l ∥∥∥ ≤ 14k . Fix δ > 0. Assume we sample S ∼ D, with |S| >\n128 2 min{∆,2β}2 log( 8ni δ ). Assume that k ≥ log −1( 43 ) log( 8ni δ ), and that η ≤ ni 16k . Let Ψ : X → [−1, 1]ni/2 such that for every x /∈ X̃ ′ we have Ψ(x) = ψ ◦ Γ(i+1)...d(x) for some ψ as defined above. Assume we perform the following updates:\nW (i) t ←W (i) t−1 − η\n∂\n∂W (i) t−1\nLS(P (BW (i)t−1,V (i) 0 ))\nThen with probability at least 1−δ, for t > 3ni√ 2η min{∆,2β} we have: BW (i)t ,V (i)0 (x) = ϕi◦Γi◦ψ(x) for every x /∈ X̃ .\nGiven these results, we can prove the main theorems:\nProof. of Theorem 1 and Theorem 2. Fix δ′ = δd . Let 0 ≥ · · · ≥ d > 0 such that for every i ∈ [d] we have: P(x,y)∼D [ x ∈ X̃ i ] < i−1 min{∆,2β} 8 √ 2ni (we will note the exact value of i later). We show that for every i ∈ [d], w.p at least 1 − (d − i + 1)δ′, after the i-th step of the algorithm we have Ni−1(x) = ϕi ◦ Γi...d(x) for every x /∈ X̃ i−1 . By induction on i:\n• For i = d, we get the required using Lemma 6 with ψ,Ψ = id and = d−1, ′ = d.\n• Assume the above holds for i, and we show it for i − 1. By the assumption, w.p at least 1− (d− i+ 1)δ′ we have Ni−1(x) = ϕi ◦ Γi...d(x) for every x /∈ X̃ i−1 . Observe that:\n∂LD\n∂W (i−1) t\n(P (B W\n(i−1) t−1 ,V (i−1) 0\n◦ Ni−1)) = ∂LNi−1(D)\n∂W (i−1) t\n(P (B W\n(i−1) t ,V (i−1) 0\n))\nSo using Lemma 6 with ψ = ϕi, Ψ = Ni and = i−2, ′ = i−1 we get that w.p at least 1 − δ′ we have B\nW (i−1) T ,V (i−1) 0\n(x) = ϕi−1 ◦ Γi−1 ◦ ϕi(x) for every x /∈ X̃ i−2 . In this\ncase, since ϕi ◦ ϕi = id, we get that for every x /∈ X̃ i−2 : Ni−2(x) = BW (i−1)T ,V (i−1)0 ◦ Ni−1(x) = (ϕi−1 ◦ Γi−1 ◦ ϕi) ◦ (ϕi ◦ Γi...d)(x) = ϕi−1 ◦ Γ(i−1)...d(x) and using the union bound gives the required.\nNotice that ϕ1 = id: by definition of D(0) = Γ1...d(D), for (z, y) ∼ D(0) we have z = Γ1...d(x) and also y = Γ1...d(x) for (x, y) ∼ D. Therefore, we have c0,1 = E(x,y)∼D(0) [xy] = 1, and therefore ϕi(z) = sign(c0,1)z = z. Now, choosing i = 1, the above result shows that with probability at least 1 − δ, the algorithm returns N0 such that N0(x) = ϕ1 ◦ Γ1 ◦ · · · ◦ Γd(x) = hC(x) for every x /∈ X̃ 0 . To prove Theorem 2, it is enough to observe that when taking 0 = · · · = d = , assumption 3 implies that P̃ = ∅ and therefore X̃ = ∅, so the theorem follows. To prove Theorem 1, we take 0 = and inductively define i =\ni−1 min{∆,2β} 32 √ 2n2 . Notice that ∣∣∣P̃ i∣∣∣ ≤ ∑i∈[d] ni2 ∣∣{±1}2∣∣ = 4 ∑logn i=0 2 i−1 = 4n. So, using the union bound we get:\nP(x,y)∼D [ x ∈ X̃ i ] = P(x,y)∼D [ ∪(i,j,p)∈P̃ iΓ(i+1)...d(x)(2j−1,2j) = p ] ≤\n∑ (i,j,p)∈P̃ i P(x,y)∼D [ Γ(i+1)...d(x)(2j−1,2j) = p ] ≤\n∑ (i,j,p)∈P̃ i P(x,y)∼D(i) [(x2j−1, x2j) = p] < ∣∣∣P̃ ′ ∣∣∣ · i ≤ i−1∆ 8 √ 2ni\nNow, observing that d = min{∆,2β}d 25.5dn2d = n log min{∆,2β} n5.5+2 logn gives the required.\nIn the rest of this section we prove Lemma 6. Fix some i ∈ [d] and let j ∈ [ni/2]. With slight abuse of notation, we denote by w(t) the value of the weight w(i,j) at iteration t, and denote v := v(i,j) and gt := gw(t),v . 
Recall that we defined ψ(x) = (ξ1 · x1, . . . , ξni · xni) for ξ1 . . . ξni ∈ {±1}. Denote D̃(i) := ψ(D(i)) the distribution of (ψ(x), y), where (x, y) ∼ D(i). Let γ := γi−1,j , and let γ̃ such that γ̃(x1, x2) = γ(ξ2j−1 ·x1, ξ2j ·x2). For every p ∈ {±1}2, denote p̃ := (ξ2j−1p1, ξ2jp2), so we have γ(p̃) = γ̃(p). Then we have the following:\nLemma 7. Fix some p ∈ {±1}2 such that (i, j, p̃) /∈ P̃ . For every l ∈ [k] such that 〈w(t)l ,p〉 > 0 and gt(p) ∈ (−1, 1), the following holds:\n−γ̃(p)vlνj〈 ∂LD̃(i)\n∂w (t) l\n,p〉 > √ 2\nni min{∆, 2β}\nProof. Observe the following:\n∂LD̃(i) ∂w (t) l (P (BW (i),V (i))) = E(x,y)∼D̃(i) `′(P (BW (i),V (i))(x)) · ∂ ∂w (t) l 2 ni ni/2∑ j′=1 gw(i,j′),v(i,j′)(x2j′−1, x2j′) = 2\nni E(x,y)∼D̃(i)\n[ −y ∂\n∂w (t) l\ngt(x2j−1, x2j)\n]\n= 2\nni E(x,y)∼D̃(i)\n[ −yvl1{gt(x2j−1, x2j) ∈ (−1, 1)} · 1{〈w(t)l , (x2j−1, x2j)〉 > 0} · (x2j−1, x2j) ]\nWe use the fact that `′(P (BW (i),V (i))(x)) = −y, unless P (BW (i),V (i))(x) ∈ {±1}, in which case gt(x2j−1, x2j) ∈ {±1}, so ∂\n∂w (t) l gt(x2j−1, x2j) = 0. Fix some p ∈ {±1}2 such that 〈w(t)l ,p〉 > 0. Note that for every p 6= p′ ∈ {±1}2 we have either 〈p,p′〉 = 0, or p = −p′ in which case 〈w(t)l ,p′〉 < 0. Therefore, we get the following:\n〈 ∂LD̃(i)\n∂w (t) l\n,p〉 = 2 ni\nE(x,y)∼D̃(i) [ −yvl1{gt(x2j−1, x2j) ∈ (−1, 1)} · 1{〈w(t)l , (x2j−1, x2j)〉 ≥ 0} · 〈(x2j−1, x2j),p〉 ] = 2\nni E(x,y)∼D̃(i) [−yvl1{gt(x2j−1, x2j) ∈ (−1, 1)} · 1{(x2j−1, x2j) = p} ‖p‖]\nDenote qp := P(x,y)∼D(i) [(x2j−1, x2j) = p|γ(x2j−1, x2j) = γ(p)]. Using property 2, we have:\nP(x,y)∼D(i) [(x2j−1, x2j) = p, y = y′] = P(x,y)∼D(i) [(x2j−1, x2j) = p, y = y′, γ(x2j−1, x2j) = γ(p)] = P(x,y)∼D(i) [(x2j−1, x2j) = p, y = y′|γ(x2j−1, x2j) = γ(p)]P(x,y)∼D(i) [γ(x2j−1, x2j) = γ(p)] = qpP(x,y)∼D(i) [γ(x2j−1, x2j) = γ(p), y = y′] = qpP(z,y)∼D(i−1) [zj = γ(p), y = y′]\nAnd therefore: E(x,y)∼D(i) [y1{(x2j−1, x2j) = p}] = ∑\ny′∈{±1}\ny′P(x,y)∼D(i) [(x2j−1, x2j) = p, y = y′]\n= qp ∑\ny′∈{±1}\ny′P(z,y)∼D(i−1) [zj = γ(p), y = y′]\n= qpE(z,y)∼D(i−1) [y1{zj = γ(p)}]\nAssuming gt(p) ∈ (−1, 1), using the above we get:\n〈 ∂LD̃(i)\n∂w (t) l\n,p〉 = 2 √\n2vl ni E(x,y)∼D̃(i) [−y1{(x2j−1, x2j) = p}]\n= 2 √\n2vl ni E(x,y)∼D(i) [−y1{(ξ2j−1x2j−1, ξ2jx2j) = p}]\n= −2 √\n2vl ni E(x,y)∼D(i) [y1{(x2j−1, x2j) = p̃}]\n= − 2 √\n2vlqp̃ ni E(z,y)∼D(i−1) [y1{zj = γ̃(p)}]\nNow, we have the following cases:\n• If Ii−1,j = 0, then by property 1 zj and y are independent, so:\n〈 ∂LD̃(i)\n∂w (t) l\n,p〉 = − 2 √\n2vlqp̃ ni E(z,y)∼D(i−1) [y1{zj = γ̃(p)}]\n= − 2 √\n2vlqp̃ ni E(z,y)∼D(i−1) [y]P(z,y)∼D(i−1) [zj = γ̃(p)]\n= −2 √\n2vl ni E(z,y)∼D(i−1) [y]P(z,y)∼D(i−1) [(x2j−1, x2j) = p̃]\nSince we assume γ̃(p) = sign(E [y]), and using the fact that (i, j, p̃) /∈ P̃ , we get that:\n−γ̃(p)vlνj〈 ∂LD̃(i)\n∂w (t) l\n,p〉 = − sign(E [y])vl〈 ∂LD̃(i)\n∂w (t) l\n,p〉\n= 2 √ 2\nni |E [y]|P(z,y)∼D(i−1) [(x2j−1, x2j) = p̃] >\n2 √ 2\nni β\n• Otherwise, observe that:\n〈 ∂LD̃(i)\n∂w (t) l\n,p〉 = − 2 √\n2vlqp̃ ni E(z,y)∼D(i−1) [y1{zj = γ̃(p)}]\n= − 2 √\n2vlqp̃ ni\nE(z,y)∼D(i−1) [ y 1\n2 (zj · γ̃(p) + 1) ] = − √\n2vlqp̃ ni\n( γ̃(p)ci−1,j + E(z,y)∼D(i−1) [y] )\nAnd therefore, using property 1, since Ii−1,j 6= 0, we get:\n−γ̃(p)vl sign(ci−1,j)〈 ∂LD̃(i)\n∂w (t) l\n,p〉 = √\n2qp̃ ni (|ci−1,j |+ sign(ci−1,j)γ̃(p)E [y])\n≥ √\n2qp̃ ni\n(|ci−1,j | − |E [y]|) > √ 2\nni ∆\nwhere we use the fact that (i, j, p̃) /∈ P̃ .\nWe introduce the following notation: for a sample S ⊆ X ′ × Y , and some function f : X ′ → X ′, denote by f(S) the sample f(S) := {(f(x), y)}(x,y)∈S .\nLemma 8. Fix δ > 0. Assume we sample S ∼ D, with |S| > 128 2 min{∆,2β}2 log 4 δ . 
Then, with probability at least 1− δ, for every p ∈ {±1}2 such that 〈w(t)l ,p〉 > 0 it holds that:∣∣∣∣∣〈∂LΨ(D)∂w(t)l ,p〉 − 〈 ∂LΨ(S) ∂w (t) l ,p〉 ∣∣∣∣∣ ≤ 2√2ni min{∆, 2β}\nProof. Fix some p ∈ {±1}2 with 〈w(t)l ,p〉 > 0. Similar to what we previously showed, we get that:\n〈 ∂LΨ(S)\n∂w (t) l\n,p〉 = 2 ni\nE(x,y)∼Ψ(S) [ −yvl1{gt(x2j−1, x2j) ∈ (−1, 1)} · 1{〈w(t)l , (x2j−1, x2j)〉 ≥ 0} · 〈(x2j−1, x2j),p〉 ] = 2\nni E(x,y)∼Ψ(S) [−yvl1{gt(x2j−1, x2j) ∈ (−1, 1)} · 1{(x2j−1, x2j) = p} ‖p‖]\n= 2 √ 2\nni E(x,y)∼Ψ(S) [−yvl1{gt(x2j−1, x2j) ∈ (−1, 1)} · 1{(x2j−1, x2j) = p}]\nDenote f(x, y) = −yvl1{gt(x2j−1, x2j) ∈ (−1, 1)} · 1{(x2j−1, x2j) = p}, and notice that f(x, y) ∈ [−1, 1]. Now, from Hoeffding’s inequality we get that:\nPS [∣∣EΨ(S) [f(x, y)]− EΨ(D) [f(x, y)]∣∣ ≥ τ] ≤ exp(−1\n2 |S|τ2 ) So, for |S| > 2τ2 log 4 δ we get that with probability at least 1−\nδ 4 we have:∣∣∣∣∣〈∂LΨ(D)∂w(t)l ,p〉 − 〈 ∂LΨ(S) ∂w (t) l ,p〉 ∣∣∣∣∣ = 2 √ 2 ni ∣∣EΨ(S) [f(x, y)]− EΨ(D) [f(x, y)]∣∣ < 2√2 ni τ\nTaking τ = 8 min{∆, 2β} and using the union bound over all p ∈ {±1} 2 completes the proof. Lemma 9. Fix δ > 0. Assume P(x,y)∼D [ x ∈ X̃ ′ ] <\n8 √ 2ni min{∆, 2β}. Assume we sample\nS ∼ D, with |S| > 128 2 min{∆,2β}2 log 4 δ . Then, with probability at least 1− δ, for every p ∈ {±1} 2 such that (i, j, p̃) /∈ P̃ ′ , and for every l ∈ [k] such that 〈w(t)l ,p〉 > 0 and gt(p) ∈ (−1, 1), the following holds:\n−γ̃(p)vlνj〈 ∂LΨ(D)\n∂w (t) l\n,p〉 > √ 2ni min{∆, 2β}\nProof. Denote α := P(x,y)∼D [ x ∈ X̃ ′ ] , and denote B(i) := BW (i),V (i) . Then, we have the following:∥∥∥∥∥∂LD̃(i)∂w(t)l − ∂LΨ(D) ∂w (t) l ∥∥∥∥∥ = ∥∥∥∥∥E(x,y)∼D [ ∂ ∂w (t) l `(PB(i) ◦ ψ ◦ Γ(i+1)...d(x))− ∂ ∂w (t) l `(PB(i) ◦Ψ(x))\n]∥∥∥∥∥ ≤ E(x,y)∼D\n[∥∥∥∥∥ ∂∂w(t)l `(PB(i) ◦ ψ ◦ Γ(i+1)...d(x))− ∂ ∂w (t) l `(PB(i) ◦Ψ(x)) ∥∥∥∥∥ · 1{x ∈ X̃ ′} ]\n≤ E(x,y)∼D [∥∥ψ ◦ Γ(i+1)...d(x)(2j−1,2j) −Ψ(x)(2j−1,2j)∥∥ · 1{x ∈ X̃ ′}] ≤ 2 √ 2P(x,y)∼D [ x ∈ X̃ ′ ] = 2 √ 2α\nSo we get, using Lemma 7 and Lemma 8, with probability at least 1− δ:\n−γ̃(p)vlνj〈 ∂LΨ(S)\n∂w (t) l\n,p〉 = −γ̃(p)vlνj ( 〈 ∂LD̃(i)\n∂w (t) l\n,p〉+ 〈 ∂LΨ(S)\n∂w (t) l\n,p〉 − 〈 ∂LΨ(D)\n∂w (t) l\n,p〉+ 〈 ∂LΨ(D)\n∂w (t) l\n,p〉 − 〈 ∂LD̃(i)\n∂w (t) l\n,p〉\n)\n≥ −γ̃(p)vlνj〈 ∂LD̃(i)\n∂w (t) l\n,p〉 − ∣∣∣∣∣〈∂LΨ(S)∂w(t)l ,p〉 − 〈 ∂LΨ(D) ∂w (t) l ,p〉 ∣∣∣∣∣− ∣∣∣∣∣〈∂LΨ(D)∂w(t)l ,p〉 − 〈 ∂LD̃(i) ∂w (t) l ,p〉 ∣∣∣∣∣ > √ 2\nni min{∆, 2β} − 2 √ 2ni min{∆, 2β} − ∥∥∥∥∥∂LD̃(i)∂w(t)l − ∂LΨ(D) ∂w (t) l ∥∥∥∥∥ ‖p‖ ≥ 3\n2 √ 2ni min{∆, 2β} − 4α\nSo for α < 8 √ 2ni min{∆, 2β} we get the required.\nWe want to show that if the value of gt gets “stuck”, then it recovered the value of the gate, multiplied by the correlation ci−1,j . We do this by observing the dynamics of 〈w(t)l ,p〉. In most cases, its value moves in the right direction, except for a small set that oscillates around zero. This set is the following:\nAt = { (l,p) : (i, j, p̃) /∈ P̃ ∧ γ̃(p)vlνj < 0 ∧ 〈w(t)l ,p〉 ≤ 4η ni ∧ ( γ̃(−p)vlνj < 0 ∨ (i, j,−p̃) ∈ P̃ )} We have the following simple observation:\nLemma 10. With the assumptions of Lemma 9, with probability at least 1− δ, for every t we have: At ⊆ At+1.\nProof. Fix some (l,p) ∈ At, and we need to show that 〈w(t+1)l ,p〉 ≤ 4η ni . If 〈w(t)l ,p〉 = 0 then4 〈w(t+1)l ,p〉 = 〈w (t) l ,p〉 ≤ 4η ni\nand we are done. If 〈w(t)l ,p〉 > 0 then, since (i, j, p̃) /∈ P̃ we have from Lemma 9, w.p at least 1− δ:\n−〈 ∂LΨ(S)\n∂w (t) l\n,p〉 < γ̃(p)vlνj √ 2ni min{∆, 2β} < 0\nWhere we use the fact that γ̃(p)vlνj < 0. 
Therefore, we get:\n〈w(t+1)l ,p〉 = 〈w (t) l ,p〉 − η〈\n∂LΨ(S)\n∂w (t) l\n,p〉 ≤ 〈w(t)l ,p〉 ≤ 4η\nni\nOtherwise, we have 〈w(t)l ,p〉 < 0, so:\n〈w(t+1)l ,p〉 = 〈w (t) l ,p〉 − η〈\n∂LΨ(S)\n∂w (t) l\n,p〉 ≤ 〈w(t)l ,p〉+ 4η ni ≤ 4η ni\nNow, we want to show that all 〈w(t)l ,p〉 with (l,p) /∈ At and (i, j, p̃) /∈ P̃ move in the direction of γ̃(p) · νj : Lemma 11. With the assumptions of Lemma 9, with probability at least 1 − δ, for every l,t and p ∈ {±1}2 such that 〈w(t)l ,p〉 > 0, (i, j, p̃) /∈ P̃ and (l,p) /∈ At, it holds that:(\nσ(〈w(t)l ,p〉)− σ(〈w (t−1) l ,p〉) ) · γ̃(p)vlνj ≥ 0\nProof. Assume the result of Lemma 9 holds (this happens with probability at least 1−δ). We cannot have 〈w(t−1)l ,p〉 = 0, since otherwise we would have 〈w (t) l ,p〉 = 0, contradicting the assumption. If 〈w(t−1)l ,p〉 > 0, since we require 〈w (t) l ,p〉 > 0 we get that:\nσ(〈w(t)l ,p〉)− σ(〈w (t−1) l ,p〉) = 〈w (t) l −w (t−1) l ,p〉 = −η〈\n∂LΨ(S) ∂w (t−1) l ,p〉\nand the required follows from Lemma 9. Otherwise, we have 〈w(t−1)l ,p〉 < 0. We observe the following cases:\n• If γ̃(p)vlνj ≥ 0 then we are done, since:( σ(〈w(t)l ,p〉)− σ(〈w (t−1) l ,p〉) ) · γ̃(p)νj = σ(〈w(t)l ,p〉) · γ̃(p)vlνj ≥ 0\n4Formally, there is no gradient, but we’ll just take the sub-gradient zero.\n• Otherwise, we have γ̃(p)vlνj < 0. We also have:\n〈w(t)l ,p〉 = 〈w (t−1) l ,p〉 − η〈\n∂LΨ(S)\n∂w (t) l\n,p〉 ≤ 〈w(t−1)l ,p〉+ 4η ni ≤ 4η ni\nSince we assume (l,p) /∈ At, we must have (i, j,−p̃) /∈ P̃ and γ̃(−p)vlνj ≥ 0. Therefore, from Lemma 9 we get:\n〈 ∂LΨ(S)\n∂w (t) l\n,−p〉 < −γ̃(−p)vlνj √ 2ni min{∆, 2β}\nAnd hence:\n0 < 〈w(t)l ,p〉 = 〈w (t−1) l ,p〉+ η〈\n∂LΨ(S) ∂w (t−1) l ,−p〉 ≤ −ηγ̃(−p)vlνj √ 2ni min{∆, 2β} < 0\nand we reach a contradiction.\nFrom the above, we get the following:\nCorollary 3. With the assumptions of Lemma 9, with probability at least 1 − δ, for every l,t and p ∈ {±1}2 such that 〈w(t)l ,p〉 > 0, (i, j, p̃) /∈ P̃ and (l,p) /∈ At, the following holds:(\nσ(〈w(t)l ,p〉)− σ(〈w (0) l ,p〉) ) · γ̃(p)vlνj ≥ 0\nProof. Notice that for every t′ ≤ t we have (l,p) /∈ At′ ⊆ At. Therefore, using the previous lemma:( σ(〈w(t)l ,p〉)− σ(〈w (0) l ,p〉) ) · γ̃(p)vlνj =\n∑ 1≤t′≤t ( σ(〈w(t)l ,p〉)− σ(〈w (t′) l ,p〉) ) · γ̃(p)vlνj ≥ 0\nFinally, we need to show that there are some “good” neurons, that are moving strictly away from zero:\nLemma 12. Fix δ > 0. Assume P(x,y)∼D [ x ∈ X̃ ′ ] <\n8 √ 2ni min{∆, 2β}. Assume we sample\nS ∼ D, with |S| > 128 2 min{∆,2β}2 log 4 δ . Assume that k ≥ log −1( 43 ) log( 4 δ ). Then with probability at least 1− 2δ, for every p ∈ {±1}2 such that (i, j, p̃) /∈ P̃ , there exists l ∈ [k] such that for every t with gt−1(p) ∈ (−1, 1), we have:\nσ(〈w(t)l ,p〉) · γ̃(p)vlνj ≥ ηt √ 2ni min{∆, 2β}\nProof. Assume the result of Lemma 9 holds (happens with probability at least 1 − δ). Fix some p ∈ {±1}2 such that (i, j, p̃) /∈ P̃ . For l ∈ [k], with probability 14 we have both vl = γ̃(p)νj and 〈w(0)l ,p〉 > 0. Therefore, the probability that there exists l ∈ [k] such that the above holds is 1 − ( 34 )\nk ≥ 1 − δ4 . Using the union bound, w.p at least 1 − δ, there exists such l ∈ [k] for every p ∈ {±1}2. 
In such case, we have 〈w(t)l ,p〉 ≥ ηt\n√ 2ni min{∆, 2β}, by induction:\n• For t = 0 this is true since 〈w(0)l ,p〉 > 0.\n• If the above holds for t − 1, then 〈w(t−1)l ,p〉 > 0, and therefore, using vl = γ̃(p)νj and Lemma 9:\n−〈 ∂LΨ(D)\n∂w (t) l\n,p〉 > γ̃(p)vlνj √ 2ni min{∆, 2β}\nAnd we get:\n〈w(t)l ,p〉 = 〈w (t−1) l ,p〉 − η〈\n∂LΨ(D)\n∂w (t) l\n,p〉\n> 〈w(t−1)l ,p〉+ ηγ̃(p)vlνj √ 2ni min{∆, 2β}\n≥ η(t− 1) √ 2ni min{∆, 2β}+ η √ 2ni min{∆, 2β}\nUsing the above results, we can analyze the behavior of gt(p): Lemma 13. Assume we initialize w(0)l such that ∥∥∥w(0)l ∥∥∥ ≤ 14k . Fix δ > 0. As-\nsume P(x,y)∼D [ x ∈ X̃ ′ ] <\n8 √ 2ni min{∆, 2β}. Assume we sample S ∼ D, with |S| >\n128 2 min{∆,2β}2 log 4 δ . Assume that k ≥ log −1( 43 ) log( 4 δ ). Then with probability at least 1 − 2δ,\nfor every p ∈ {±1}2 such that (i, j, p̃) /∈ P̃ , for t > 3ni√2η min{∆,2β} we have:\ngt(p) = γ̃(p)νj\nProof. Using Lemma 12, w.p at least 1 − 2δ, for every such p there exists lp ∈ [k] such that for every t with gt−1(p) ∈ (−1, 1):\nvlpσ(〈w (t) lp ,p〉) · γ̃(p)νj ≥ ηt √ 2ni min{∆, 2β}\nAssume this holds, and fix some p ∈ {±1}2 with (i, j, p̃) /∈ P̃ . Let t, such that gt−1(p) ∈ (−1, 1). Denote the set of indexes J = {l : 〈w(t)l ,p〉 > 0}. We have the following:\ngt(p) = ∑ l∈J vlσ(〈w(t)l ,p〉)\n= vlpσ(〈w (t) lp ,p〉) + ∑ l∈J\\{lp},(l,p)/∈At vlσ(〈w(t)l ,p〉) + ∑ l∈J\\{lp},(l,p)∈At vlσ(〈w(t)l ,p〉)\nFrom Corollary 3 we have: γ̃(p)νj · ∑\nl∈J\\{lp},(l,p)/∈At\nvlσ(〈w(t)l ,p〉) ≥ −kσ(〈w (0) l ,p〉) ≥ −\n1\n4\nBy definition of At and by our assumption on η we have: γ̃(p)νj · ∑\nl∈J\\{lp},(l,p)∈At\nvlσ(〈w(t)l ,p〉) ≥ −k 4η ni ≥ −1 4\nTherefore, we get:\nγ̃(p)νj · gt(p) ≥ ηt √ 2ni min{∆, 2β} − 1 2\nThis shows that for t > 3ni√ 2η min{∆,2β} we get the required.\nProof. of Lemma 6. Using the result of Lemma 13, with union bound over all choices of j ∈ [ni/2]. The required follows by the definition of γ̃(x2j−1, x2j) = γi−1,j(ξ2j−1x2j−1, ξ2jx2j), and using the definition of X̃\nProof. of Lemma 1. Fix some i ∈ [d], j ∈ [ni/2],p ∈ {±1}2, y′ ∈ {±1}, such that:\nP(x,y)∼D(i) [γi−1,j(x2j−1, x2j) = γi−1,j(p)] > 0\nAssume w.l.o.g. that j = 1. Denote by W the set of all possible choices for x3, . . . , xni , such that when (x1, x2) = p, the resulting label is y′. Formally:\nW := {(x3, . . . , xni) : Γi...d(p1, p2, x3, . . . , xni) = y′}\nThen we get:\nPD(i) [(x1, x2) = p, y = y′, γi−1,j(x1, x2) = γi−1,j(p)] = PD(i) [(x1, x2) = p, (x3, . . . , xni) ∈W,γi−1,j(x1, x2) = γi−1,j(p)] = PD(i) [(x1, x2) = p, γi−1,j(x1, x2) = γi−1,j(p)] · PD(i) [(x3, . . . , xni) ∈W ] = PD(i) [(x1, x2) = p|γi−1,j(x1, x2) = γi−1,j(p)] · PD(i) [γi−1,j(x1, x2) = γi−1,j(p), (x3, . . . , xni) ∈W ] = PD(i) [(x1, x2) = p|γi−1,j(x1, x2) = γi−1,j(p)] · PD(i) [y = y′, γi−1,j(x1, x2) = γi−1,j(p)]\nAnd dividing by PD(i) [γi−1,j(x1, x2) = γi−1,j(p)] gives the required." }, { "heading": "C PROOFS OF SECTION 5", "text": "Proof. of Lemma 2.\nFor every gate (i, j), let Ji,j be the subset of leaves in the binary tree whose root is the node (i, j). Namely, Ji,j := {(j − 1)2d−i + 1, . . . , j2d−i}. Then we show inductively that for an input x ∈ {±1}n, the (i, j) gate outputs: ∏ l∈I∩Ji,j xl:\n• For i = d− 1, this is immediate from the definition of the gate γd−1,j .\n• Assume the above is true for some i and we will show this for i − 1. By definition of the circuit, the output of the (i − 1, j) gate is a product of the output of its inputs from the previous layers, the gates (i, 2j − 1), (i, 2j). 
By the inductive assumption, we get that the output of the (i− 1, j) gate is therefore: ∏\nl∈Ji,2j−1∩I\nxl · ∏ l∈Ji,2j∩I xl = ∏ l∈(Ji,j2−1∪Ji,2j)∩I xl = ∏ l∈Ji−1,j xl\nFrom the above, the output of the target circuit is ∏ l∈J0,1∩I xl = ∏ l∈I xl, as required.\nProof. of Lemma 3.\nBy definition we have: ci,j = E(x,y)∼D [ Γ(i+1)...d(x)jy ] = E(x,y)∼D [ Γ(i+1)...d(x)jy ] = E(x,y)∼D [ Γ(i+1)...d(x)jx1 · · ·xk ] Since we require Ii,j 6= 0, then we cannot have Γ(i+1)...d(x)j ≡ 1. So, from what we showed previously, it follows that Γ(i+1)...d(x)j = ∏ j′∈I′ xj′ for some ∅ 6= I ′ ⊆ I . Therefore, we get that:\nci,j = ED ∏ j′∈I\\I′ xj′ = ∏ j′∈I\\I′ ED [xj′ ] = ∏ j′∈I\\I′ (2pj′ − 1)\nFurthermore, we have that:\nED [y] = ED ∏ j′∈I xj′ = ∏ j′∈I ED [xj′ ] = ∏ j′∈I (2pj′ − 1)\nAnd using the assumption on pj we get: |ci,j | − |ED [y]| = ∏\nj′∈[k]\\I′ |2pj′ − 1| − ∏ j′∈[k] |2pj′ − 1|\n= ∏ j′∈[k]\\I′ |2pj′ − 1| 1− ∏ j′∈I′ |2pj′ − 1| ≥\n ∏ j′∈[k]\\I′ |2pj′ − 1| (1− (1− 2ξ)|I′|) ≥ (2ξ)k−|I ′| (1− (1− 2ξ)) ≥ (2ξ)k\nNow, for the second result, we have: P(z,y)∼Γi...d(D) [zj = 1] = E(x,y)∼D [ 1{Γ(i+1)...d(x)j = 1} ] = E(x,y)∼D 1 2 ( ∏ j′∈I′ xj′ + 1)\n = 1\n2 ∏ j′∈I′ E(x,y)∼D [xj′ ] + 1 2\nAnd so we get: ∣∣∣∣P(z,y)∼Γi...d(D) [zj = 1]− 12 ∣∣∣∣ = 12 ∏\nj′∈I′ ∣∣E(x,y)∼D [xj′ ]∣∣ < 1\n2 (1− 2ξ)|I ′| ≤ 1 2 − ξ\nProof. of Lemma 4 For every i ∈ [d] and j ∈ [2i], denote the following:\np+i,j = P(x,y)∼D(i) [xj = 1|y = 1] , p − i,j = P(x,y)∼D(i) [xj = 1|y = −1]\nDenote D(i)|z the distribution D(i) conditioned on some fixed value z sampled from D(i−1). We prove by induction on i that |p+i,j − p − i,j | = ( 2 3 )i :\n• For i = 0 we have p+i,j = 1 and p − i,j = 0, so the required holds.\n• Assume the claim is true for i− 1, and notice that we have for every z ∈ {±1}2i−1 : P(x,y)∼D(i) [xj = 1|y = 1] = P(x,y)∼D(i)|z [ xj = 1|zdj/2e = 1 ] · P(z,y)∼D(i−1) [ zdj/2e = 1|y = 1 ] + P(x,y)∼D(i)|z [ xj = 1|zdj/2e = −1 ] · P(z,y)∼D(i−1) [ zdj/2e = −1|y = 1 ]\n= p+i−1,dj/2e + 1 3 (1− p + i−1,dj/2e) if γi−1,dj/2e = ∧ 2 3p + i−1,dj/2e if γi−1,dj/2e = ∨ 1 3p + i−1,dj/2e + (1− p + i−1,dj/2e) if γi−1,dj/2e = ¬∧\n2 3 (1− p + i−1,dj/2e) if γi−1,dj/2e = ¬∨\n= 2 3p + i−1,dj/2e − 1 3 if γi−1,dj/2e = ∧ 2 3p + i−1,dj/2e if γi−1,dj/2e = ∨ 1− 23p + i−1,dj/2e if γi−1,dj/2e = ¬∧\n2 3 − 2 3p + i−1,dj/2e if γi−1,dj/2e = ¬∨\nSimilarly, we get that:\nP(x,y)∼D(i) [xj = 1|y = −1] = 2 3p − i−1,dj/2e − 1 3 if γi−1,dj/2e = ∧ 2 3p − i−1,dj/2e if γi−1,dj/2e = ∨ 1− 23p − i−1,dj/2e if γi−1,dj/2e = ¬∧\n2 3 − 2 3p − i−1,dj/2e if γi−1,dj/2e = ¬∨\nTherefore, we get:\n|p+i,j − p − i,j | =\n2 3 |p+i−1,dj/2e − p − i−1,dj/2e| =\n( 2\n3 )i From this, we get:∣∣E(x,y)∼D(i) [xjy]∣∣ = ∣∣E(x,y)∼D(i) [(21{xj = 1} − 1)y]∣∣\n= ∣∣2E(x,y)∼D(i) [1{xj = 1}y]− E [y]∣∣ = |2 (PD(i) [xj = 1, y = 1]− PD(i) [xj = 1, y = −1])− E [y]| = ∣∣2 (p+i,jP [y = 1]− p−i,jP [y = −1])− E [y]∣∣\n= ∣∣∣∣2(12(p+i,j − p−i,j) + ξ(p+i,j + p−i,j) ) − E [y] ∣∣∣∣ ≥ ∣∣p+i,j − p−i,j∣∣− 2ξ ∣∣p+i,j + p−i,j∣∣− |E [y]|\n≥ ∣∣p+i,j − p−i,j∣∣− 6ξ > 12\n( 2\n3 )d And hence: ∣∣E(x,y)∼D(i) [xjy]∣∣− ∣∣E(x,y)∼D(i) [y]∣∣ ≥ 12 ( 2 3 )d − 2ξ > 1 3 ( 2 3 )d\nProof. of Lemma 5 Fix some z′ ∈ {±1}ni/2 and y′ ∈ {±1}. Then we have:\nP(x,y)∼Γi(D(i)) [(x, y) = (z ′, y′)] = P(x,y)∼D(i) [(Γi(x), y) = (z′, y′)] = P(x,y)∼D(i) [ ∀j γi−1,j(x2j−1, x2j) = z′j and y = y′ ] = P(z,y)∼D(i−1) [(z, y) = (z′, y′)]\nBy the definitions of D(i) and D(i−1)." } ]
2019
null
SP:201070333d2ca3ad49d9f4783d190e2c1772afe9
[ "This paper introduces “stiffness”, a new metric to characterize generalization in neural networks. Stiffness is a pretty simple concept and is relatively straightforward to compute. The authors evaluate this metric on standard datasets using two relatively small neural networks. On the whole, the paper is written clearly and explains its methodology in simple language.", "This submission introduces a metric, termed stiffness, to evaluate the generalization capability of neural networks. The metric is novel and straightforward, it measures how stiff a network is by looking at how a small gradient step on one example affects the loss on another example. The authors study several configurations on three small datasets. They demonstrate that stiffness is a useful concept for diagnosing and characterizing generalization. " ]
We investigate neural network training and generalization using the concept of stiffness. We measure how stiff a network is by looking at how a small gradient step on one example affects the loss on another example. In particular, we study how stiffness depends on 1) class membership, 2) distance between data points in the input space, 3) training iteration, and 4) learning rate. We experiment on MNIST, FASHION MNIST, and CIFAR-10 using fully-connected and convolutional neural networks. Our results demonstrate that stiffness is a useful concept for diagnosing and characterizing generalization. We observe that small learning rates reliably lead to higher stiffness at a given epoch as well as at a given training loss. In addition, we measure how stiffness between two data points depends on their mutual input-space distance, and establish the concept of a dynamical critical length that characterizes the distance over which datapoints react similarly to gradient updates. The dynamical critical length decreases with training and the higher the learning rate, the smaller the critical length.
[]
[ { "authors": [ "Sanjeev Arora", "Nadav Cohen", "Elad Hazan" ], "title": "On the optimization of deep networks: Implicit acceleration by overparameterization", "venue": "CoRR, abs/1802.06509,", "year": 2018 }, { "authors": [ "Devansh Arpit", "Stanislaw K. Jastrzebski", "Nicolas Ballas", "David Krueger", "Emmanuel Bengio", "Maxinder S. Kanwal", "Tegan Maharaj", "Asja Fischer", "Aaron C. Courville", "Yoshua Bengio", "Simon Lacoste-Julien" ], "title": "A closer look at memorization in deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "George Cybenko" ], "title": "Approximation by superpositions of a sigmoidal function", "venue": "MCSS, 2:303–314,", "year": 1989 }, { "authors": [ "Simon S. Du", "Jason D. Lee", "Haochuan Li", "Liwei Wang", "Xiyu Zhai" ], "title": "Gradient Descent Finds Global Minima of Deep Neural Networks. arXiv:1811.03804 [cs, math, stat", "venue": "November 2018a. URL http://arxiv.org/abs/1811.03804", "year": 2018 }, { "authors": [ "Simon S. Du", "Xiyu Zhai", "Barnabas Poczos", "Aarti Singh" ], "title": "Gradient Descent Provably Optimizes Over-parameterized Neural Networks. arXiv:1810.02054 [cs, math, stat", "venue": "October 2018b. URL http://arxiv.org/abs/1810.02054", "year": 2054 }, { "authors": [ "Stanislav Fort", "Stanislaw Jastrzebski" ], "title": "Large scale structure of neural network loss landscapes, 2019", "venue": null, "year": 2019 }, { "authors": [ "Stanislav Fort", "Adam Scherlis" ], "title": "The goldilocks zone: Towards better understanding of neural network loss landscapes", "venue": "Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Behrooz Ghorbani", "Shankar Krishnan", "Ying Xiao" ], "title": "An investigation into neural net optimization via hessian eigenvalue density, 2019", "venue": null, "year": 2019 }, { "authors": [ "K. Hornik", "M. Stinchcombe", "H. White" ], "title": "Multilayer feedforward networks are universal approximators", "venue": "Neural Netw.,", "year": 1989 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Yann LeCun", "Corinna Cortes. MNIST handwritten digit database." ], "title": "URL http://yann", "venue": "lecun.com/exdb/mnist/.", "year": 2010 }, { "authors": [ "Moshe Leshno", "Vladimir Ya. Lin", "Allan Pinkus", "Shimon Schocken" ], "title": "Multilayer feedforward networks with a nonpolynomial activation function can approximate any function", "venue": "Neural Networks,", "year": 1993 }, { "authors": [ "Chunyuan Li", "Heerad Farkhoor", "Rosanne Liu", "Jason Yosinski" ], "title": "Measuring the intrinsic dimension of objective landscapes", "venue": "CoRR, abs/1804.08838,", "year": 2018 }, { "authors": [ "Guido Montúfar", "Razvan Pascanu", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "On the number of linear regions of deep neural networks", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Roman Novak", "Yasaman Bahri", "Daniel A. 
Abolafia", "Jeffrey Pennington", "Jascha Sohl-Dickstein" ], "title": "Sensitivity and generalization in neural networks: an empirical study", "venue": null, "year": 2018 }, { "authors": [ "Vardan Papyan" ], "title": "Measurements of three-level hierarchical structure in the outliers in the spectrum of deepnet hessians", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Jeffrey Pennington", "Pratik Worah" ], "title": "Nonlinear random matrix theory for deep learning", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Ben Poole", "Subhaneil Lahiri", "Maithreyi Raghu", "Jascha Sohl-Dickstein", "Surya Ganguli" ], "title": "Exponential expressivity in deep neural networks through transient chaos", "venue": "In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Maithra Raghu", "Ben Poole", "Jon Kleinberg", "Surya Ganguli", "Jascha Sohl-Dickstein" ], "title": "On the expressive power of deep neural networks", "venue": "Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Nasim Rahaman", "Aristide Baratin", "Devansh Arpit", "Felix Draxler", "Min Lin", "Fred A. Hamprecht", "Yoshua Bengio", "Aaron Courville" ], "title": "On the Spectral Bias of Neural Networks", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning", "venue": "algorithms. CoRR,", "year": 2017 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural networks are a class of highly expressive function approximators that proved to be successful in approximating solutions to complex tasks across many domains such as vision, natural language understanding, and game-play. They have long been recognized as universal function approximators (Hornik et al., 1989; Cybenko, 1989; Leshno et al., 1993). The specific details that lead to their expressive power have recently been studied in Montúfar et al. (2014); Raghu et al. (2017); Poole et al. (2016). Empirically, neural networks have been extremely successful at generalizing to new data despite their over-parametrization for the task at hand, as well as their proven ability to fit arbitrary random data perfectly (Zhang et al., 2016; Arpit et al., 2017).\nThe fact that gradient descent is able to find good solutions given the highly over-parametrized family of functions has been studied theoretically in Arora et al. (2018) and explored empirically in Li et al. (2018), where the effective low-dimensional nature of many common learning problems is shown. Fort & Scherlis (2019) extend the analysis in Li et al. (2018) to demonstrate the role of initialization on the effective dimensionality, and Fort & Jastrzebski (2019) use the result to build a phenomenological model of the loss landscape.\nDu et al. (2018a) and Du et al. (2018b) use a Gram matrix to study the convergence of the empirical loss of neural networks. Pennington & Worah (2017) study the concentration properties of a similar covariance matrix formed from the output of the network. Ghorbani et al. (2019) investigate the Hessian eigenspectrum and Papyan (2019) shows how it is related to the gradient covariance. Both concepts are closely related to our definition of stiffness.\nTo explain the remarkable generalization properties of neural networks, it has been proposed (Rahaman et al., 2018) that the function family is biased towards low-frequency functions. The similarity of neural network outputs on similar inputs has been studied in Schoenholz et al. (2016) for random initializations and explored empirically in Novak et al. (2018)." }, { "heading": "1.1 OUR CONTRIBUTION", "text": "In this paper, we study generalization through the lens of stiffness. We measure how stiff a neural network is by analyzing how a small gradient step based on one input example affects the loss on another input example. Mathematically, if the gradient of the loss at point $X_1$ with respect to the network weights is $\nabla_W L(X_1) = \vec{g}_1$, and the gradient at point $X_2$ is $\vec{g}_2$, we define stiffness $\propto \vec{g}_1 \cdot \vec{g}_2$. We specifically focus on the sign of $\vec{g}_1 \cdot \vec{g}_2$ as well as the cosine between the two vectors $\cos(\vec{g}_1, \vec{g}_2) = \hat{g}_1 \cdot \hat{g}_2$, where $\hat{g} = \vec{g}/|\vec{g}|$, which both capture the resistance of the learned functional approximation to deformation by gradient steps. We find the concept of stiffness useful in diagnosing and characterizing generalization. As a corollary, we use stiffness to characterize the regularization power of the learning rate, and show that higher learning rates bias the functions learned towards lower stiffness.\nWe show that stiffness is directly related to generalization, and in particular that it starts dropping sharply at the moment training and validation set losses stop evolving together. We explore the concept of stiffness for fully-connected (FC) and convolutional neural networks (CNN) on 3 classification datasets (MNIST, FASHION MNIST, CIFAR-10). We focus on how stiffness between data points depends on their 1) class membership, 2) distance between each other in the space of inputs, 3) training epoch, and 4) the choice of learning rate. We study stiffness between pairs of images that are both in the training set, pairs with one image in the training and one in the validation set, and pairs that are both in the validation set.\nWe observed the stiffness based on class membership and noticed a clear evolution towards higher stiffness for images between different classes and towards lower stiffness within the same class. We diagnose and characterize the class-dependent stiffness matrix for fully-connected and convolutional neural networks on the datasets mentioned above in different stages of training. We observe the stiffness between inputs to regress to zero with the onset of overfitting, demonstrating the clear connection to generalization.\nThe choice of learning rate affects the stiffness properties of the learned function significantly. High learning rates induce functional approximations that are less stiff over larger distances (i.e. data points further apart stop responding similarly to gradient updates). We define the concept of dynamical critical length to capture this phenomenon.\nThis paper is structured as follows: we introduce the concept of stiffness and the relevant theory in Section 2. We describe our experimental setup in Section 3, and discuss the results in Section 4. We conclude with Section 5." }, { "heading": "2 THEORETICAL BACKGROUND", "text": "" }, { "heading": "2.1 STIFFNESS – DEFINITIONS", "text": "Let a functional approximation (e.g. a neural network) $f$ be parametrized by tunable parameters $W$. Let us assume a classification task and let a data point $X$ have the ground truth label $y$. A loss $L(f_W(X), y)$ gives us the amount of mismatch between the function's output at input $X$ and the ground truth $y$. The gradient of the loss with respect to the parameters\n$$\vec{g} = \nabla_W L(f_W(X), y) \quad (1)$$\nis the direction in which, if we were to change the parameters $W$, the loss would change the most rapidly (for infinitesimal step sizes). Gradient descent uses this step (the negative of it) to update the weights and gradually tune the functional approximation to better correspond to the desired outputs on the training dataset inputs. Let us consider two data points with their ground truth labels $(X_1, y_1)$ and $(X_2, y_2)$. We construct a gradient with respect to example 1 as $\vec{g}_1 = \nabla_W L(f_W(X_1), y_1)$ and ask how the losses on data points 1 and 2 change as the result of a small change of $W$ in the direction $-\vec{g}_1$, i.e. what is\n$$\Delta L_1 = L(f_{W - \varepsilon\vec{g}_1}(X_1), y_1) - L(f_W(X_1), y_1)\,, \quad (2)$$\nwhich is equivalent to\n$$\Delta L_1 = -\varepsilon \nabla_\varepsilon L(f_{W - \varepsilon\vec{g}_1}(X_1), y_1) = -\varepsilon\, \vec{g}_1 \cdot \vec{g}_1\,. \quad (3)$$\nThe change in loss on input 2 due to the same gradient step from input 1 becomes equivalently $\Delta L_2 = -\varepsilon \nabla_\varepsilon L(f_{W - \varepsilon\vec{g}_1}(X_2), y_2) = -\varepsilon\, \vec{g}_1 \cdot \vec{g}_2$. We are interested in the correlation between the loss changes $\Delta L_1$ and $\Delta L_2$. We know that $\Delta L_1 < 0$ since we constructed the gradient update accordingly. We define positive stiffness to mean $\Delta L_2 < 0$ as well, i.e. that the losses at both inputs went down. We assign a stiffness of 0 for $\Delta L_2 = 0$. If $\Delta L_2 > 0$, the two inputs would be anti-stiff (negative stiffness). The equations above show that this can equivalently be thought of as the overlap between the two gradients $\vec{g}_1 \cdot \vec{g}_2$ being positive for positive stiffness, and negative for negative stiffness. We illustrate this in Figure 1.\nThe above indicates that what we initially conceived of as a change in loss due to the application of a small gradient update from one input to another is in fact equivalent to analyzing gradient alignment between different datapoints.\nWe will be using 2 different definitions of stiffness: the sign stiffness and the cosine stiffness. We define the sign stiffness to be the expected sign of $\vec{g}_1 \cdot \vec{g}_2$ (or equivalently the expected sign of $\Delta L_1 \Delta L_2$) as\n$$S_{\mathrm{sign}}((X_1, y_1), (X_2, y_2); f) = \mathbb{E}\left[\mathrm{sign}\left(\vec{g}_1 \cdot \vec{g}_2\right)\right]\,, \quad (4)$$\nwhere stiffness depends on the dataset from which $X_1$ and $X_2$ are drawn. The cosine stiffness is\n$$S_{\cos}((X_1, y_1), (X_2, y_2); f) = \mathbb{E}\left[\cos\left(\vec{g}_1, \vec{g}_2\right)\right]\,, \quad (5)$$\nwhere $\cos(\vec{g}_1, \vec{g}_2) = (\vec{g}_1/|\vec{g}_1|) \cdot (\vec{g}_2/|\vec{g}_2|)$. We use both versions of stiffness as they are suitable for highlighting different phenomena – the sign stiffness shows the stiffness between classes more clearly, while the cosine stiffness is more useful for within-class stiffness." }, { "heading": "2.2 TRAIN-TRAIN, TRAIN-VAL, AND VAL-VAL", "text": "When measuring stiffness between two datapoints, we have 3 options: 1) choosing both datapoints from the training set (we call this train-train), 2) choosing one from the training set and the other from the validation set (train-val), and 3) choosing both from the validation set (val-val). The train-val stiffness is directly related to generalization, as it corresponds to the amount of improvement on the training set transferring to the improvement of the validation set. We empirically observe that all 3 options behave remarkably similarly, which gives us confidence that they all track generalization. This is further supported by observing their behavior as a function of epoch in Figure 2." }, { "heading": "2.3 STIFFNESS BASED ON CLASS MEMBERSHIP", "text": "A natural question to ask is whether a gradient taken with respect to an input $X_1$ in class $c_1$ will also decrease the loss for example $X_2$ with true class $c_2$. In particular, we define the class stiffness matrix\n$$C(c_a, c_b) = \mathbb{E}_{X_1 \in c_a,\, X_2 \in c_b}\left[S((X_1, y_1), (X_2, y_2))\right]\,. \quad (6)$$\nThe on-diagonal elements of this matrix correspond to the suitability of the current gradient update to the members of a class itself. In particular, they correspond to within-class generalizability. The off-diagonal elements, on the other hand, express the amount of improvement transferred from one class to another. They therefore directly diagnose the amount of generality the currently improved features have.\nA consistent summary of generalization between classes is the off-diagonal sum of the class stiffness matrix\n$$S_{\mathrm{between\ classes}} = \frac{1}{N_c(N_c - 1)} \sum_{c_1} \sum_{c_2 \ne c_1} C(c_1, c_2)\,. \quad (7)$$\nIn our experiments, we track this value as a function of learning rate once we have reached a fixed loss. The quantity is related to how generally applicable the learned features are, i.e. how well they transfer from one class to another. For example, for CNNs, learning good edge detectors in initial layers typically benefits all downstream tasks, regardless of the particular class in question. We do the equivalent for the within-class stiffness (= on-diagonal elements). When the within-class stiffness starts going < 1, the generality of the features improved does not extend even to the class itself." }, { "heading": "2.4 STIFFNESS AS A FUNCTION OF DISTANCE", "text": "We investigate how stiff two inputs are based on how far away from each other they are. We can think of neural networks as a form of kernel learning, and here we are investigating the particular form of the learned kernel. This links our results to the work on spectral bias (towards slowly-varying, low-frequency functions) in Rahaman et al. (2018). We are able to directly measure the characteristic size of the stiff regions in neural networks trained on real tasks, which we call the dynamical critical length $\xi$. We work with data normalized to the unit sphere $|\vec{X}| = 1$ and use their mutual cosine to define their distance as\n$$\mathrm{distance}(\vec{X}_1, \vec{X}_2) = 1 - \frac{\vec{X}_1 \cdot \vec{X}_2}{|\vec{X}_1||\vec{X}_2|}\,, \quad (8)$$\nwhich has the advantage of being bounded between 0 and 2. We track this threshold distance $\xi$ as a function of training and learning rate to estimate the characteristic size of the stiff regions of a neural net." }, { "heading": "3 METHODS", "text": "" }, { "heading": "3.1 EXPERIMENTAL SETUP", "text": "We ran a large number of experiments with fully-connected (FC) and convolutional neural networks (CNN) on 3 classification datasets: MNIST (LeCun & Cortes, 2010), FASHION MNIST (Xiao et al., 2017), and CIFAR-10 (Krizhevsky, 2009). Using those experiments, we investigated the behavior of stiffness as a function of 1) training epoch, 2) the choice of learning rate, 3) class membership, and 4) the input space distance between images.\nFor experiments with fully-connected neural networks, we used a 3 layer ReLU network of the form $X \to 500 \to 300 \to 100 \to y$. For experiments with convolutional neural networks, we used a 3 layer network with filter size 3 and the numbers of channels being 16, 32, and 32 after the respective convolutional layers, each followed by $2 \times 2$ max pooling. The final layer was fully-connected. No batch normalization was used.\nWe pre-processed the network inputs to have zero mean and unit variance, and normalized all data to the unit sphere as $|\vec{X}| = 1$. We used Adam with different (constant) learning rates as our optimizer and the default batch size of 32." }, { "heading": "3.2 TRAINING AND STIFFNESS EVALUATION", "text": "We evaluated stiffness properties between pairs of data points both drawn from the training set, one from the training and one from the validation set, and both from the validation set. We used the training set to train our model. The procedure was as follows: 1) Train for a number of steps on the training set and update the network weights accordingly. 2) For each of the modes {train-train, train-val, and val-val}, go through tuples of images coming from the respective datasets. 3) For each tuple, calculate the loss gradients $\vec{g}_1$ and $\vec{g}_2$, and compute $\mathrm{sign}(\vec{g}_1 \cdot \vec{g}_2)$ and $\cos(\vec{g}_1, \vec{g}_2)$. 4) Log the input space distance between the images as well as other relevant features. In our experiments, we used a fixed subset (typically of ≈ 500 images for experiments with 10 classes) of the training and validation sets to evaluate the stiffness properties on. We convinced ourselves that such a subset is sufficiently large to provide measurements with small enough statistical uncertainties, which we overlay in our figures." }, { "heading": "3.3 LEARNING RATE DEPENDENCE", "text": "We investigated how stiffness properties depend on the learning rate used in training. To be able to do that, we first looked at the dynamical critical scale $\xi$ for each training step of our network, and then compared those based on the epoch and training loss at the time, in order to be able to compare training runs with different learning rates fairly. The results are shown in Figures 4 and 6."
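To make the procedure of Sections 2 and 3 concrete, here is a minimal sketch of how the per-pair stiffness and the class stiffness matrix of Eqs. (4)–(7) could be computed. This is our illustration, not the authors' released code: `grad_fn` is an assumed placeholder that returns the flattened per-example loss gradient $\nabla_W L(f_W(x), y)$, and labels are assumed to be integer class indices.

```python
import numpy as np

def pairwise_stiffness(g1, g2):
    """Sign and cosine stiffness for one pair of per-example gradients."""
    dot = float(np.dot(g1, g2))
    sign = float(np.sign(dot))  # in {-1, 0, +1}, cf. Eq. (4)
    cos = dot / (np.linalg.norm(g1) * np.linalg.norm(g2) + 1e-12)  # Eq. (5)
    return sign, cos

def class_stiffness_matrix(grad_fn, xs, ys, num_classes, kind="cos"):
    """Empirical estimate of C(c_a, c_b) from Eq. (6) over all pairs."""
    grads = [grad_fn(x, y) for x, y in zip(xs, ys)]
    sums = np.zeros((num_classes, num_classes))
    counts = np.zeros((num_classes, num_classes))
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            s_sign, s_cos = pairwise_stiffness(grads[i], grads[j])
            s = s_sign if kind == "sign" else s_cos
            for a, b in ((ys[i], ys[j]), (ys[j], ys[i])):  # keep C symmetric
                sums[a, b] += s
                counts[a, b] += 1
    C = sums / np.maximum(counts, 1)
    # Off-diagonal mean, matching S_between-classes in Eq. (7).
    between = (C.sum() - np.trace(C)) / (num_classes * (num_classes - 1))
    return C, between
```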
}, { "heading": "4 RESULTS", "text": "" }, { "heading": "4.1 STIFFNESS PROPERTIES BASED ON CLASS MEMBERSHIP", "text": "We explored the stiffness properties based on class membership as a function of training epoch at 4 different stages of training – at initialization, very early, around epoch 1, and at the late stage. Our results are summarized in Figure 3 and Figures 7 and 8. The within-class (on-diagonal) and between-classes results are summarized in Figure 4, which is an example of the equivalent plots we generated for all our experiments. Initially, an improvement based on an input from a particular class benefits only members of the same class. Intuitively, this could be due to some crude features shared within a class (such as the typical overall intensity, or the average color) being learned. There is no consistent stiffness between different classes at initialization. As training progresses, within-class stiffness stays high. In addition, stiffness between classes increases as well, provided the model is powerful enough for the dataset. With the onset of overfitting, as shown in Figure 2, the model becomes increasingly less stiff until even stiffness for inputs within the same class is lost." }, { "heading": "4.2 STIFFNESS AS A FUNCTION OF DISTANCE BETWEEN DATAPOINTS", "text": "We investigated stiffness between two inputs as a function of their distance in the input space in order to measure how large the patches of the learned function that move together under gradient updates are. We focused on examples from the same class. Examples of our results are shown in Figure 5 and Figures 9 and 10. Fitting a linear function to the data, we estimate the distance at which stiffness goes to 0, and call it the dynamical critical length $\xi$. Equivalent plots were generated for each epoch of training in each of our experiments in order to measure the $\xi$ used in Figure 6." }, { "heading": "4.3 THE DYNAMICAL CRITICAL LENGTH ξ, AND THE ROLE OF LEARNING RATE", "text": "At each epoch of training for each of our experiments, we analyzed the distribution of within-class stiffness between images based on their distance, and extracted the zero crossing, which we call the critical dynamical scale $\xi$. In Figures 6 and 11 we summarize the dependence of $\xi$ on the epoch of training as well as the training loss for 5 different learning rates. We use the training loss to make sure we are comparing runs with different learning rates at equivalent stages of training. We see that the bigger the learning rate, the smaller the domain size $\xi$." }, { "heading": "4.4 STIFF DOMAIN SIZE AS THE CHARACTERISTIC LENGTH SCALE?", "text": "A natural question arises as to whether the characteristic distance between two input points at which stiffness reaches zero defines the typical scale of spatial variation of the learned function. Unfortunately, that is not necessarily the case, though it can be for some families of functions. The stiff domain sizes visible in e.g. Figure 5 represent the typical length scale over which neural networks react similarly to gradient updates, rather than the typical length scale of variation of the function value itself.\nTo illustrate the difference, imagine a function that varies rapidly over the input data, but whose losses over the same data move in the same direction on application of a gradient step based on any of the data points. This function would have a small characteristic length scale of value variation, yet a large stiff domain size."
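As an illustration of the estimate used in Sections 4.2 and 4.3 above, the following sketch (ours; names illustrative) fits a line to measured stiffness versus input-space distance and takes its zero crossing as the dynamical critical length $\xi$. Inputs are assumed to be unit-normalized so that the distance of Eq. (8) lies in $[0, 2]$.

```python
import numpy as np

def critical_length(distances, stiffnesses):
    """Zero crossing of a least-squares line fit of stiffness vs. distance;
    returns None if stiffness does not decay with distance in this sample."""
    slope, intercept = np.polyfit(distances, stiffnesses, deg=1)
    if slope >= 0:
        return None
    xi = -intercept / slope
    return float(np.clip(xi, 0.0, 2.0))  # distances of Eq. (8) live in [0, 2]

# Hypothetical usage with per-pair measurements logged during training:
# xi = critical_length(np.array(pair_distances), np.array(pair_cos_stiffness))
```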
}, { "heading": "5 DISCUSSION AND CONCLUSION", "text": "We explored the concept of neural network stiffness and used it to diagnose and characterize generalization. We studied stiffness for models trained on real datasets, and measured its variation with training iteration, class membership, distance between data points, and the choice of learning rate. We explored stiffness between pairs of data points coming both from the training set, one from the training and one from the validation set, and both from the validation set. Training-validation stiffness is directly related to the transfer of improvements on the training set to the validation set. We used two different stiffness metrics – the sign stiffness and the cosine stiffness – to highlight different phenomena.\nOn real data, we explored models trained on MNIST, FASHION MNIST, and CIFAR-10 through the lens of stiffness. In essence, stiffness measures the alignment of gradients taken at different input data points, which we show is equivalent to asking whether a weight update based on one input will benefit the loss on another. We demonstrate the connection between stiffness and generalization and show that with the onset of overfitting to the training data, stiffness decreases and eventually reaches 0, where even gradient updates taken with respect to images of a particular class stop benefiting other members of the same class. This happens even within the training set itself, and therefore could potentially be used as a diagnostic for early stopping to prevent overfitting.\nHaving established the usefulness of stiffness as a diagnostic tool for generalization, we explored its dependence on class membership. We find that in general gradient updates with respect to a member of a class help to improve the loss on data points in the same class, i.e. that members of the same class have high stiffness with respect to each other. This holds at initialization as well as throughout most of the training. The pattern breaks when the model starts overfitting to the training set, after which within-class stiffness eventually reaches 0. We observe this behavior with fully-connected and convolutional neural networks on MNIST, FASHION MNIST, and CIFAR-10.\nStiffness between inputs from different classes relates to the generality of the features being learned and within-task transfer of improvement from class to class. With the onset of overfitting, the stiffness between different classes regresses to 0, as does within-class stiffness.\nWe also investigated the characteristic size of stiff regions in our trained networks at different stages of training. By studying stiffness between two inputs and measuring their distance in the input space, we observed that the farther apart the datapoints and the higher the epoch of training, the less stiffness exists between them on average. This allowed us to define the dynamical critical scale $\xi$ – an input space distance over which stiffness between input points decays to 0. $\xi$ corresponds to the size of stiff regions – patches of the data space that can move together when a gradient update is applied, provided the gradient step is infinitesimal. For finite step sizes, the matter becomes more complicated, as the linear regime in which we operate ceases to apply.\nWe investigated the effect of learning rate on stiffness by observing how $\xi$ changes as a function of epoch and the training loss for different learning rates. We show that the higher the learning rate, the smaller the $\xi$, i.e.
for high learning rates the patches of input space that are improved together are smaller. This holds both as a function of epoch and of training loss, which we used in order to compare runs with different learning rates fairly. This points towards the regularization role of the learning rate on the kind of function we learn. We observe significant differences in the characteristic size of the regions of input space that react jointly to gradient updates based on the learning rate used to train them.\nIn this paper, all the experiments were conducted with two fixed architectures. One obvious extension to the concept of stiffness would be to ascertain the role stiffness might play in architecture search. For instance, we expect locality (as in CNNs) to be reflected in higher stiffness. It is quite possible that stiffness could be a guiding parameter for meta-learning and explorations in the space of architectures; however, this is beyond the scope of this paper and a potential avenue for future work.\nIn summary, we defined the concept of stiffness, showed its utility in providing a perspective to better understand generalization characteristics in a neural network, and observed its variation with learning rate." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 ADDITIONAL CLASS STIFFNESS MATRIX RESULTS", "text": "" }, { "heading": "A.2 ADDITIONAL STIFFNESS AS A FUNCTION OF DATAPOINT SEPARATION RESULTS", "text": "" } ]
2019
null
SP:19318b52fa22d612f81c72457f9876d9abe7d701
[ "This paper studies how to generate transferable adversarial examples for black-box attacks. Two methods are proposed, namely the Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) and the Scale-Invariant attack Method (SIM). The first method adopts the Nesterov optimizer instead of the momentum optimizer to generate adversarial examples, and the second is a model-augmentation method that avoids \"overfitting\" of the adversarial examples. Experiments on ImageNet demonstrate the effectiveness of the proposed methods.", "In this paper, the authors apply the Nesterov Accelerated Gradient method to the adversarial attack task and achieve better transferability of the adversarial examples. Furthermore, the authors introduce a scale-transformation method to provide augmentation on the model, which also boosts the transferability of the attack method. Experiments are carried out to verify the scale-invariant property and the Nesterov Accelerated Gradient method on both single models and ensembles of models. All experiments provide positive support for the authors' claims." ]
Deep learning models are vulnerable to adversarial examples crafted by applying human-imperceptible perturbations on benign inputs. However, under the black-box setting, most existing adversarial examples have poor transferability when attacking other defense models. In this work, from the perspective of regarding adversarial example generation as an optimization process, we propose two new methods to improve the transferability of adversarial examples, namely Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) and Scale-Invariant attack Method (SIM). NI-FGSM adapts Nesterov accelerated gradient into the iterative attacks so as to effectively look ahead and improve the transferability of adversarial examples. SIM is based on our discovery of the scale-invariant property of deep learning models, which we leverage to optimize the adversarial perturbations over the scale copies of the input images, so as to avoid “overfitting” on the white-box model being attacked and generate more transferable adversarial examples. NI-FGSM and SIM can be naturally integrated to build a robust gradient-based attack to generate more transferable adversarial examples against the defense models. Empirical results on the ImageNet dataset demonstrate that our attack methods exhibit higher transferability and achieve higher attack success rates than state-of-the-art gradient-based attacks.
[ { "affiliations": [], "name": "ADVERSARIAL ATTACKS" }, { "affiliations": [], "name": "Jiadong Lin" }, { "affiliations": [], "name": "Chuanbiao Song" }, { "affiliations": [], "name": "Kun He" }, { "affiliations": [], "name": "Liwei Wang" } ]
[ { "authors": [ "Anurag Arnab", "Ondrej Miksik", "Philip HS Torr" ], "title": "On the robustness of semantic segmentation models to adversarial attacks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Jeremy Cohen", "Elan Rosenfeld", "Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Yinpeng Dong", "Fangzhou Liao", "Tianyu Pang", "Hang Su", "Jun Zhu", "Xiaolin Hu", "Jianguo Li" ], "title": "Boosting adversarial attacks with momentum", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Yinpeng Dong", "Tianyu Pang", "Hang Su", "Jun Zhu" ], "title": "Evading defenses to transferable adversarial examples by translation-invariant attacks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens van der Maaten" ], "title": "Countering adversarial images using input transformations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Xiaojun Jia", "Xingxing Wei", "Xiaochun Cao", "Hassan Foroosh" ], "title": "Comdefend: An efficient image compression model to defend adversarial examples", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Alexey Kurakin", "Ian J. 
Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Fangzhou Liao", "Ming Liang", "Yinpeng Dong", "Tianyu Pang", "Xiaolin Hu", "Jun Zhu" ], "title": "Defense against adversarial attacks using high-level representation guided denoiser", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Delving into transferable adversarial examples and black-box attacks", "venue": "arXiv preprint arXiv:1611.02770,", "year": 2016 }, { "authors": [ "Zihao Liu", "Qi Liu", "Tao Liu", "Nuo Xu", "Xue Lin", "Yanzhi Wang", "Wujie Wen" ], "title": "Feature distillation: Dnn-oriented jpeg compression against adversarial examples", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yurii Nesterov" ], "title": "A method for unconstrained convex minimization problem with the rate of convergence o (1/kˆ 2)", "venue": "In Doklady AN USSR,", "year": 1983 }, { "authors": [ "Boris T Polyak" ], "title": "Some methods of speeding up the convergence of iteration methods", "venue": "USSR Computational Mathematics and Mathematical Physics,", "year": 1964 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Chuanbiao Song", "Kun He", "Liwei Wang", "John E. Hopcroft" ], "title": "Improving the generalization of adversarial training with domain adaptation", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Chuanbiao Song", "Kun He", "Jiadong Lin", "Liwei Wang", "John E. 
Hopcroft" ], "title": "Robust local features for improving the generalization of adversarial training", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Ilya Sutskever", "James Martens", "George Dahl", "Geoffrey Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Christian Szegedy", "Google Inc", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In International Conference on Learning Representations, Workshop Track,", "year": 2014 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Christian Szegedy", "Sergey Ioffe", "Vincent Vanhoucke", "Alexander A Alemi" ], "title": "Inception-v4, inception-resnet and the impact of residual connections on learning", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Florian Tramr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Xiaosen Wang", "Kun He", "John E. Hopcroft" ], "title": "AT-GAN: A generative attack model for adversarial transferring on generative adversarial nets", "venue": null, "year": 1904 }, { "authors": [ "Cihang Xie", "Jianyu Wang", "Zhishuai Zhang", "Zhou Ren", "Alan Yuille" ], "title": "Mitigating adversarial effects through randomization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Cihang Xie", "Zhishuai Zhang", "Yuyin Zhou", "Song Bai", "Jianyu Wang", "Zhou Ren", "Alan L Yuille" ], "title": "Improving transferability of adversarial examples with input diversity", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Runtian Zhai", "Tianle Cai", "Di He", "Chen Dan", "Kun He", "John E. Hopcroft", "Liwei Wang" ], "title": "Adversarially robust generalization just requires more unlabeled data", "venue": null, "year": 1906 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning models have been shown to be vulnerable to adversarial examples (Goodfellow et al., 2014; Szegedy et al., 2014), which are generated by applying human-imperceptible perturbations on benign input to result in the misclassification. In addition, adversarial examples have an intriguing property of transferability, where adversarial examples crafted by the current model can also fool other unknown models. As adversarial examples can help identify the robustness of models (Arnab et al., 2018), as well as improve the robustness of models by adversarial training (Goodfellow et al., 2014), learning how to generate adversarial examples with high transferability is important and has gained increasing attentions in the literature (Liu et al., 2016; Dong et al., 2018; Xie et al., 2019; Dong et al., 2019; Wang et al., 2019).\nSeveral gradient-based attacks have been proposed to generate adversarial examples, such as onestep attacks (Goodfellow et al., 2014) and iterative attacks (Kurakin et al., 2016; Dong et al., 2018). Under the white-box setting, with the knowledge of the current model, existing attacks can achieve high success rates. However, they often exhibit low success rates under the black-box setting, especially for models with defense mechanism, such as adversarial training (Madry et al., 2018; Song\n∗Corresponding author.\net al., 2019) and input modification (Liao et al., 2018; Xie et al., 2018). Under the black-box setting, most existing attacks fail to generate robust adversarial examples against defense models.\nIn this work, by regarding the adversarial example generation process as an optimization process, we propose two new methods to improve the transferability of adversarial examples: Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) and Scale-Invariant attack Method (SIM).\n• Inspired by the fact that Nesterov accelerated gradient (Nesterov, 1983) is superior to momentum for conventionally optimization (Sutskever et al., 2013), we adapt Nesterov accelerated gradient into the iterative gradient-based attack, so as to effectively look ahead and improve the transferability of adversarial examples. We expect that NI-FGSM could replace the momentum iterative gradient-based method (Dong et al., 2018) in the gradient accumulating portion and yield higher performance.\n• Besides, we discover that deep learning models have the scale-invariant property, and propose a Scale-Invariant attack Method (SIM) to improve the transferability of adversarial examples by optimizing the adversarial perturbations over the scale copies of the input images. SIM can avoid “overfitting” on the white-box model being attacked and generate more transferable adversarial examples against other black-box models.\n• We found that combining our NI-FGSM and SIM with existing gradient-based attack methods (e.g., diverse input method (Xie et al., 2019)) can further boost the attack success rates of adversarial examples.\nExtensive experiments on the ImageNet dataset (Russakovsky et al., 2015) show that our methods attack both normally trained models and adversarially trained models with higher attack success rates than existing baseline attacks. Our best attack method, SI-NI-TI-DIM (Scale-Invariant Nesterov Iterative FGSM integrated with translation-invariant diverse input method), reaches an average success rate of 93.5% against adversarially trained models under the black-box setting. 
For further demonstration, we evaluate our methods by attacking the latest robust defense methods (Liao et al., 2018; Xie et al., 2018; Liu et al., 2019; Jia et al., 2019; Cohen et al., 2019). The results show that our attack methods can generate adversarial examples with higher transferability than state-of-the-art gradient-based attacks." }, { "heading": "2 PRELIMINARY", "text": "" }, { "heading": "2.1 NOTATION", "text": "Let $x$ and $y^{true}$ be a benign image and the corresponding true label, respectively. Let $J(x, y^{true})$ be the loss function of the classifier (e.g. the cross-entropy loss). Let $x^{adv}$ be the adversarial example of the benign image $x$. The goal of non-targeted adversaries is to search for an adversarial example $x^{adv}$ that maximizes the loss $J(x^{adv}, y^{true})$ under an $\ell_p$-norm bound on the perturbation. To align with previous works, we focus on $p = \infty$ in this work to measure the distortion between $x^{adv}$ and $x$. That is, $\|x^{adv} - x\|_\infty \le \epsilon$, where $\epsilon$ is the magnitude of the adversarial perturbation." }, { "heading": "2.2 ATTACK METHODS", "text": "Several attack methods have been proposed to generate adversarial examples. Here we provide a brief introduction.\nFast Gradient Sign Method (FGSM). FGSM (Goodfellow et al., 2014) generates an adversarial example $x^{adv}$ by maximizing the loss function $J(x^{adv}, y^{true})$ with a one-step update:\n$x^{adv} = x + \epsilon \cdot \mathrm{sign}(\nabla_x J(x, y^{true}))$, (1)\nwhere the $\mathrm{sign}(\cdot)$ function restricts the perturbation to the $L_\infty$ norm bound.\nIterative Fast Gradient Sign Method (I-FGSM). Kurakin et al. (2016) extend FGSM to an iterative version by applying FGSM with a small step size $\alpha$:\n$x_0^{adv} = x, \quad x_{t+1}^{adv} = \mathrm{Clip}_x^{\epsilon}\{x_t^{adv} + \alpha \cdot \mathrm{sign}(\nabla_x J(x_t^{adv}, y^{true}))\}$, (2)\nwhere the $\mathrm{Clip}_x^{\epsilon}(\cdot)$ function restricts generated adversarial examples to be within the $\epsilon$-ball of $x$.\nProjected Gradient Descent (PGD). The PGD attack (Madry et al., 2018) is a strong iterative variant of FGSM. It consists of a random start within the allowed norm ball, followed by several iterations of I-FGSM to generate adversarial examples.\nMomentum Iterative Fast Gradient Sign Method (MI-FGSM). Dong et al. (2018) integrate momentum into the iterative attack, leading to higher transferability of adversarial examples. Their update procedure is formalized as follows:\n$g_{t+1} = \mu \cdot g_t + \frac{\nabla_x J(x_t^{adv}, y^{true})}{\|\nabla_x J(x_t^{adv}, y^{true})\|_1}, \quad x_{t+1}^{adv} = \mathrm{Clip}_x^{\epsilon}\{x_t^{adv} + \alpha \cdot \mathrm{sign}(g_{t+1})\}$, (3)\nwhere $g_t$ is the accumulated gradient at iteration $t$, and $\mu$ is the decay factor of $g_t$.\nDiverse Input Method (DIM). Xie et al. (2019) optimize the adversarial perturbations over diverse transformations of the input image at each iteration. The transformations include random resizing and random padding. DIM can be naturally integrated into other gradient-based attacks to further improve the transferability of adversarial examples.\nTranslation-Invariant Method (TIM). Instead of optimizing the adversarial perturbations on a single image, Dong et al. (2019) use a set of translated images to optimize the adversarial perturbations. They further develop an efficient algorithm that calculates the gradients by convolving the gradient at the untranslated image with a kernel matrix. TIM can also be naturally integrated with other gradient-based attack methods. The combination of TIM and DIM, namely TI-DIM, is the current strongest black-box attack method.
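To make the update in Eq. (3) concrete, here is a minimal PyTorch sketch of MI-FGSM (an illustration rather than the authors' code; the function name and the assumption that inputs are NCHW batches in [0, 1] are ours). Setting µ = 0 recovers I-FGSM, and T = 1 with α = ε recovers FGSM.

```python
import torch

def mi_fgsm(model, x, y_true, eps=16 / 255, T=10, mu=1.0):
    """MI-FGSM (Eq. 3): I-FGSM with an L1-normalized momentum term."""
    alpha = eps / T                        # step size
    g = torch.zeros_like(x)                # accumulated gradient, g_0 = 0
    x_adv = x.clone().detach()
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(T):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y_true)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # accumulate the L1-normalized gradient with decay factor mu
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
        # one sign step, then project back into the eps-ball (Clip_x^eps)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```

Carlini & Wagner attack (C&W). 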
C&W attack (Carlini & Wagner, 2017) is an optimization-based method which directly optimizes the distance between the benign examples and the adversarial examples by solving:\n$\arg\min_{x^{adv}} \|x^{adv} - x\|_p - c \cdot J(x^{adv}, y^{true})$. (4)\nIt is a powerful method for finding adversarial examples while minimizing perturbations under the white-box setting, but it lacks transferability for black-box attacks." }, { "heading": "2.3 DEFENSE METHODS", "text": "Various defense methods have been proposed to defend against adversarial examples, and they fall into the following two categories.\nAdversarial Training. One popular and promising defense method is adversarial training (Goodfellow et al., 2014; Szegedy et al., 2014; Zhai et al., 2019; Song et al., 2020), which augments the training data with adversarial examples during the training process. Madry et al. (2018) develop a successful adversarial training method, which leverages the projected gradient descent (PGD) attack to generate adversarial examples. However, this method is difficult to scale to large-scale datasets (Kurakin et al., 2017). Tramèr et al. (2018) propose ensemble adversarial training, which augments the training data with perturbations transferred from various models, so as to further improve robustness against black-box attacks. Currently, adversarial training is still one of the best techniques to defend against adversarial attacks.\nInput Modification. The second category of defense methods aims to mitigate the effects of adversarial perturbations by modifying the input data. Guo et al. (2018) discover that there exists a range of image transformations which have the potential to remove adversarial perturbations while preserving the visual information of the images. Xie et al. (2018) mitigate the adversarial effects through random transformations. Liao et al. (2018) propose a high-level representation guided denoiser to purify the adversarial examples. Liu et al. (2019) propose a JPEG-based defensive compression framework to rectify adversarial examples without impacting classification accuracy on benign data. Jia et al. (2019) leverage an end-to-end image compression model to defend against adversarial examples. Although these defense methods perform well in practice, they cannot tell whether the model is truly robust to adversarial perturbations. Cohen et al. (2019) use randomized smoothing to obtain an ImageNet classifier with certified adversarial robustness." }, { "heading": "3 METHODOLOGY", "text": "" }, { "heading": "3.1 MOTIVATION", "text": "Similar to the process of training neural networks, the process of generating adversarial examples can also be viewed as an optimization problem. In the optimization phase, the white-box model being attacked can be viewed as the training data, and the adversarial example can be viewed as the parameters being trained. Then, in the testing phase, the black-box models used to evaluate the adversarial examples can be viewed as the testing data.\nFrom the perspective of optimization, the transferability of adversarial examples is similar to the generalization ability of trained models (Dong et al., 2018). 
Thus, we can migrate the methods used to improve the generalization of models to the generation of adversarial examples, so as to improve the transferability of adversarial examples.\nMany methods have been proposed to improve the generalization ability of deep learning models, and they can be split into two aspects: (1) better optimization algorithms, such as the Adam optimizer (Kingma & Ba, 2014); (2) data augmentation (Simonyan & Zisserman, 2014). Correspondingly, the methods to improve the transferability of adversarial examples can also be split into two aspects: (1) better optimization algorithms, such as MI-FGSM, which applies the idea of momentum; (2) model augmentation (i.e., ensemble attacks on multiple models), such as the work of Dong et al. (2018), which considers attacking multiple models simultaneously. Based on the above analysis, we aim to improve the transferability of adversarial examples by applying the idea of Nesterov accelerated gradient for optimization and by using a set of scaled images to achieve model augmentation." }, { "heading": "3.2 NESTEROV ITERATIVE FAST GRADIENT SIGN METHOD", "text": "Nesterov Accelerated Gradient (NAG) (Nesterov, 1983) is a slight variation of normal gradient descent, which can speed up the training process and improve convergence significantly. NAG can be viewed as an improved momentum method, which can be expressed as:\n$v_{t+1} = \mu \cdot v_t + \nabla_{\theta_t} J(\theta_t - \alpha \cdot \mu \cdot v_t), \quad \theta_{t+1} = \theta_t - \alpha \cdot v_{t+1}$. (5)\nTypical gradient-based iterative attacks (e.g., I-FGSM) greedily perturb the images in the direction of the sign of the gradient at each iteration, usually fall into poor local maxima, and show weaker transferability than single-step attacks (e.g., FGSM). Dong et al. (2018) show that adopting momentum (Polyak, 1964) into attacks can stabilize the update directions, which helps to escape from poor local maxima and improve transferability. Compared to momentum, beyond stabilizing the update directions, the anticipatory update of NAG gives the previously accumulated gradient a correction that helps to effectively look ahead. Such a looking-ahead property of NAG can help us escape from poor local maxima more easily and quickly, resulting in improved transferability.\nWe integrate NAG into the iterative gradient-based attack to leverage the looking-ahead property of NAG and build a robust adversarial attack, which we refer to as NI-FGSM (Nesterov Iterative Fast Gradient Sign Method). Specifically, we make a jump in the direction of previously accumulated gradients before computing the gradients in each iteration. Starting with $g_0 = 0$, the update procedure of NI-FGSM can be formalized as follows:\n$x_t^{nes} = x_t^{adv} + \alpha \cdot \mu \cdot g_t$, (6)\n$g_{t+1} = \mu \cdot g_t + \frac{\nabla_x J(x_t^{nes}, y^{true})}{\|\nabla_x J(x_t^{nes}, y^{true})\|_1}$, (7)\n$x_{t+1}^{adv} = \mathrm{Clip}_x^{\epsilon}\{x_t^{adv} + \alpha \cdot \mathrm{sign}(g_{t+1})\}$, (8)\nwhere $g_t$ denotes the accumulated gradients at iteration $t$, and $\mu$ denotes the decay factor of $g_t$." }, { "heading": "3.3 SCALE-INVARIANT ATTACK METHOD", "text": "Besides considering a better optimization algorithm for the adversaries, we can also improve the transferability of adversarial examples by model augmentation. We first introduce formal definitions of loss-preserving transformations and model augmentation as follows.\nDefinition 1 Loss-preserving Transformation. 
Given an input $x$ with its ground-truth label $y^{true}$ and a classifier $f(x): x \in \mathcal{X} \to y \in \mathcal{Y}$ with cross-entropy loss $J(x, y)$, if there exists an input transformation $T(\cdot)$ that satisfies $J(T(x), y^{true}) \approx J(x, y^{true})$ for any $x \in \mathcal{X}$, we say $T(\cdot)$ is a loss-preserving transformation.\nDefinition 2 Model Augmentation. Given an input $x$ with its ground-truth label $y^{true}$ and a model $f(x): x \in \mathcal{X} \to y \in \mathcal{Y}$ with cross-entropy loss $J(x, y)$, if there exists a loss-preserving transformation $T(\cdot)$, then we derive a new model $f'(x) = f(T(x))$ from the original model $f$. We define such a derivation of models as model augmentation.\nIntuitively, similar to the generalization of models, which can be improved by feeding more training data, the transferability of adversarial examples can be improved by attacking more models simultaneously. Dong et al. (2018) enhance the gradient-based attack by attacking an ensemble of models. However, their approach requires training a set of different models to attack, which has a large computational cost. Instead, in this work, we derive an ensemble of models from the original model by model augmentation, which is a simple way of obtaining multiple models via a loss-preserving transformation.\nTo find a loss-preserving transformation, we discover that deep neural networks might have the scale-invariant property, besides translation invariance. Specifically, the loss values are similar for the original and the scaled images on the same model, which is empirically validated in Section 4.2. Thus, the scale transformation can serve as a model augmentation method. Driven by the above analysis, we propose a Scale-Invariant attack Method (SIM), which optimizes the adversarial perturbations over the scale copies of the input image:\n$\arg\max_{x^{adv}} \frac{1}{m} \sum_{i=0}^{m} J(S_i(x^{adv}), y^{true}), \quad \text{s.t.}\ \|x^{adv} - x\|_\infty \le \epsilon$, (9)\nwhere $S_i(x) = x / 2^i$ denotes the scale copy of the input image $x$ with scale factor $1/2^i$, and $m$ denotes the number of scale copies. With SIM, instead of training a set of models to attack, we can effectively achieve ensemble attacks on multiple models by model augmentation. More importantly, it can help avoid “overfitting” on the white-box model being attacked and generate more transferable adversarial examples." }, { "heading": "3.4 ATTACK ALGORITHM", "text": "For the gradient processing of crafting adversarial examples, NI-FGSM introduces a better optimization algorithm to stabilize and correct the update directions at each iteration. For the ensemble attack of crafting adversarial examples, SIM introduces model augmentation to derive multiple models to attack from a single model. Thus, NI-FGSM and SIM can be naturally combined to build a stronger attack, which we refer to as SI-NI-FGSM (Scale-Invariant Nesterov Iterative Fast Gradient Sign Method). The SI-NI-FGSM attack is summarized in Algorithm 1.\nIn addition, SI-NI-FGSM can be integrated with DIM (Diverse Input Method), TIM (Translation-Invariant Method) and TI-DIM (Translation-Invariant with Diverse Input Method) as SI-NI-DIM, SI-NI-TIM and SI-NI-TI-DIM, respectively, to further boost the transferability of adversarial examples. The detailed algorithms for these attack methods are provided in Appendix A." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "In this section, we provide experimental evidence on the advantage of the proposed methods. We first describe the experimental setup, followed by an exploration of the scale-invariant property of deep learning models. 
We then compare the results of the proposed methods with baseline methods in Sections 4.3 and 4.4 on both normally trained models and adversarially trained models. Beyond the defense models based on adversarial training, we also quantify the effectiveness of the proposed methods on other advanced defenses in Section 4.5. Additional discussions, including the comparison between NI-FGSM and MI-FGSM and the comparison with classic attacks, are in Section 4.6. Code is available at https://github.com/JHL-HUST/SI-NI-FGSM.\nAlgorithm 1 SI-NI-FGSM\nInput: A clean example $x$ with ground-truth label $y^{true}$; a classifier $f$ with loss function $J$;\nInput: Perturbation size $\epsilon$; maximum iterations $T$; number of scale copies $m$ and decay factor $\mu$.\nOutput: An adversarial example $x^{adv}$\n1: $\alpha = \epsilon / T$\n2: $g_0 = 0$; $x_0^{adv} = x$\n3: for $t = 0$ to $T - 1$ do\n4: $g = 0$\n5: Get $x_t^{nes}$ by Eq. (6) ▷ make a jump in the direction of previous accumulated gradients\n6: for $i = 0$ to $m - 1$ do ▷ sum the gradients over the scale copies of the input image\n7: Get the gradients by $\nabla_x J(S_i(x_t^{nes}), y^{true})$\n8: Sum the gradients as $g = g + \nabla_x J(S_i(x_t^{nes}), y^{true})$\n9: Get the average gradient as $g = \frac{1}{m} \cdot g$\n10: Update $g_{t+1}$ by $g_{t+1} = \mu \cdot g_t + \frac{g}{\|g\|_1}$\n11: Update $x_{t+1}^{adv}$ by Eq. (8)\n12: return $x^{adv} = x_T^{adv}$" }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "Dataset. We randomly choose 1000 images belonging to the 1000 categories from the ILSVRC 2012 validation set, which are almost correctly classified by all the testing models.\nModels. For normally trained models, we consider Inception-v3 (Inc-v3) (Szegedy et al., 2016), Inception-v4 (Inc-v4), Inception-Resnet-v2 (IncRes-v2) (Szegedy et al., 2017) and Resnet-v2-101 (Res-101) (He et al., 2016). For adversarially trained models, we consider Inc-v3ens3, Inc-v3ens4 and IncRes-v2ens (Tramèr et al., 2018).\nAdditionally, we include other advanced defense models: high-level representation guided denoiser (HGD) (Liao et al., 2018), random resizing and padding (R&P) (Xie et al., 2018), NIPS-r3 (https://github.com/anlthms/nips-2017/tree/master/mmd), feature distillation (FD) (Liu et al., 2019), purifying perturbations via an image compression model (Comdefend) (Jia et al., 2019) and randomized smoothing (RS) (Cohen et al., 2019).\nBaselines. We integrate our methods with DIM (Xie et al., 2019), TIM, and TI-DIM (Dong et al., 2019), to show the performance improvement of SI-NI-FGSM over these baselines. We denote our SI-NI-FGSM integrated with these attacks as SI-NI-DIM, SI-NI-TIM, and SI-NI-TI-DIM, respectively.\nHyper-parameters. For the hyper-parameters, we follow the settings in (Dong et al., 2018) with the maximum perturbation $\epsilon = 16$, number of iterations $T = 10$, and step size $\alpha = 1.6$. For MI-FGSM, we adopt the default decay factor $\mu = 1.0$. For DIM, the transformation probability is set to 0.5. For TIM, we adopt the Gaussian kernel, and the size of the kernel is set to 7 × 7. For our SI-NI-FGSM, the number of scale copies is set to $m = 5$." }, { "heading": "4.2 SCALE-INVARIANT PROPERTY", "text": "To validate the scale-invariant property of deep neural networks, we randomly choose 1,000 original images from the ImageNet dataset and vary the scale factor in the range [0.1, 2.0] with a step size of 0.1. Then we feed the scaled images into the testing models, including Inc-v3, Inc-v4, IncRes-v2, and Res-101, to get the average loss over the 1,000 images.\nAs shown in Figure 1, we can easily observe that the loss curves are smooth and stable when the scale factor is in the range [0.1, 1.3]. That is, the loss values are very similar for the original and scaled images. 
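This scale-invariance check is easy to reproduce. Below is a minimal PyTorch sketch (an illustration of the check, not the authors' code; it assumes a pretrained classifier and a labeled batch of images in [0, 1]):

```python
import torch

@torch.no_grad()
def average_loss_per_scale(model, x, y, scales=(0.1, 0.5, 1.0, 1.3, 2.0)):
    """Average cross-entropy loss of pixel-scaled copies s * x.

    If the model is scale-invariant over a range of s, the returned
    losses should stay close to the loss at s = 1.0.
    """
    loss_fn = torch.nn.CrossEntropyLoss()
    return {s: loss_fn(model(s * x), y).item() for s in scales}
```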
So we assume that the scale-invariant property of deep models holds within [0.1, 1.3], and we leverage the scale-invariant property to optimize the adversarial perturbations over the scale copies of the input images." }, { "heading": "4.3 ATTACKING A SINGLE MODEL", "text": "In this subsection, we integrate our SI-NI-FGSM with TIM, DIM and TI-DIM, respectively, and compare the black-box attack success rates of our extensions with the baselines under the single-model setting. As shown in Table 1, our extension methods consistently outperform the baseline attacks by 10% ∼ 35% under the black-box setting, and achieve nearly 100% success rates under the white-box setting. This indicates that SI-NI-FGSM can serve as a powerful approach to improve the transferability of adversarial examples." }, { "heading": "4.4 ATTACKING AN ENSEMBLE OF MODELS", "text": "Following the work of Liu et al. (2016), we show the performance of our methods by attacking multiple models simultaneously. Specifically, we attack an ensemble of normally trained models (including Inc-v3, Inc-v4, IncRes-v2 and Res-101) with equal ensemble weights using TIM, SI-NI-TIM, DIM, SI-NI-DIM, TI-DIM and SI-NI-TI-DIM, respectively.\nAs shown in Table 2, our methods improve the attack success rates across all experiments over the baselines. In general, our methods consistently outperform the baseline attacks by 10% ∼ 30% under the black-box setting. In particular, SI-NI-TI-DIM, the extension combining SI-NI-FGSM with TI-DIM, can fool the adversarially trained models with a high average success rate of 93.5%. This indicates that these advanced adversarially trained models provide little robustness guarantee under the black-box attack of SI-NI-TI-DIM." }, { "heading": "4.5 ATTACKING OTHER ADVANCED DEFENSE MODELS", "text": "Besides normally trained models and adversarially trained models, we quantify the effectiveness of our methods on other advanced defenses, including the top-3 defense solutions in the NIPS competition (high-level representation guided denoiser (HGD, rank-1) (Liao et al., 2018), random resizing and padding (R&P, rank-2) (Xie et al., 2018), and the rank-3 submission (NIPS-r3)), and three recently proposed defense methods (feature distillation (FD) (Liu et al., 2019), purifying perturbations via an image compression model (Comdefend) (Jia et al., 2019), and randomized smoothing (RS) (Cohen et al., 2019)).\nWe compare our SI-NI-TI-DIM with MI-FGSM (Dong et al., 2018), which is the top-1 attack solution in the NIPS 2017 competition, and TI-DIM (Dong et al., 2019), which is the state-of-the-art attack. We first generate adversarial examples on the ensemble models, including Inc-v3, Inc-v4, IncRes-v2, and Res-101, using MI-FGSM, TI-DIM, and SI-NI-TI-DIM, respectively. Then, we evaluate the adversarial examples by attacking these defenses.\nAs shown in Table 3, our method SI-NI-TI-DIM achieves an average attack success rate of 90.3%, surpassing the state-of-the-art attacks by a large margin of 14.7%. By solely depending on the transferability of adversarial examples and attacking the normally trained models, SI-NI-TI-DIM can fool the adversarially trained models and other advanced defense mechanisms, raising a new security issue for the development of more robust deep learning models. Some adversarial examples generated by SI-NI-TI-DIM are shown in Appendix B." }, { "heading": "4.6 FURTHER ANALYSIS", "text": "NI-FGSM vs. MI-FGSM. 
We perform additional analysis of the difference between NI-FGSM and MI-FGSM (Dong et al., 2018). The adversarial examples are crafted on Inc-v3 with various numbers of iterations ranging from 4 to 16, and then transferred to attack Inc-v4 and IncRes-v2. As shown in Figure 2, NI-FGSM yields higher attack success rates than MI-FGSM with the same number of iterations. Put another way, NI-FGSM needs fewer iterations to reach the same attack success rate as MI-FGSM. The results not only indicate that NI-FGSM has better transferability, but also demonstrate that, with the property of looking ahead, NI-FGSM can accelerate the generation of adversarial examples.\nComparison with classic attacks. We also compare with classic attacks, including FGSM (Goodfellow et al., 2014), I-FGSM (Kurakin et al., 2016), PGD (Madry et al., 2018) and C&W (Carlini & Wagner, 2017). As shown in Table 4, our methods achieve a 100% attack success rate, the same as C&W, under the white-box setting, and significantly outperform the other methods under the black-box setting." }, { "heading": "5 CONCLUSION AND FUTURE WORK", "text": "In this work, we propose two new attack methods, namely Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) and Scale-Invariant attack Method (SIM), to improve the transferability of adversarial examples. NI-FGSM adopts the Nesterov accelerated gradient method into the gradient-based attack, and SIM achieves model augmentation by leveraging the scale-invariant property of models. NI-FGSM and SIM can be naturally combined to build a robust attack, namely SI-NI-FGSM. Moreover, by integrating SI-NI-FGSM with the baseline attacks, we can further improve the transferability of adversarial examples. Extensive experiments demonstrate that our methods not only yield higher success rates on adversarially trained models but also break other strong defense mechanisms.\nOur work on NI-FGSM suggests that other momentum methods (e.g. Adam) may also be helpful for building strong attacks; we leave this to future work, where the key is how to migrate the optimization method into the gradient-based iterative attack. Our work also shows that deep neural networks have the scale-invariant property, which we utilized to design SIM to improve attack transferability. However, it is not clear why the scale-invariant property holds. Possibly it is due to the batch normalization at each convolutional layer, which may mitigate the impact of the scale change. We will explore the reason more thoroughly in our future work." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work is supported by the Fundamental Research Funds for the Central Universities (2019kfyXKJC021) and Microsoft Research Asia." }, { "heading": "A DETAILS OF THE ALGORITHMS", "text": "The SI-NI-TI-DIM attack is summarized in Algorithm 2. The SI-NI-DIM attack algorithm is obtained by removing Step 10 of Algorithm 2, and the SI-NI-TIM attack algorithm by removing $T(\cdot; p)$ in Step 7 of Algorithm 2.\nAlgorithm 2 SI-NI-TI-DIM\nInput: A clean example $x$ with ground-truth label $y^{true}$; a classifier $f$ with loss function $J$;\nInput: Perturbation size $\epsilon$; maximum iterations $T$; number of scale copies $m$ and decay factor $\mu$.\nOutput: An adversarial example $x^{adv}$\n1: $\alpha = \epsilon / T$\n2: $g_0 = 0$; $x_0^{adv} = x$\n3: for $t = 0$ to $T - 1$ do\n4: $g = 0$\n5: Get $x_t^{nes}$ by Eq. (6) ▷ make a jump in the direction of previous accumulated gradients\n6: for $i = 0$ to $m - 1$ do ▷ sum the gradients over the scale copies of the input image
7: Get the gradients by $\nabla_x J(T(S_i(x_t^{nes}); p), y^{true})$ ▷ apply random resizing and padding to the inputs with probability $p$\n8: Sum the gradients as $g = g + \nabla_x J(T(S_i(x_t^{nes}); p), y^{true})$\n9: Get the average gradient as $g = \frac{1}{m} \cdot g$\n10: Convolve the gradients by $g = W * g$ ▷ convolve the gradient with the pre-defined kernel $W$\n11: Update $g_{t+1}$ by $g_{t+1} = \mu \cdot g_t + \frac{g}{\|g\|_1}$\n12: Update $x_{t+1}^{adv}$ by Eq. (8)\n13: return $x^{adv} = x_T^{adv}$\nB VISUALIZATION OF ADVERSARIAL EXAMPLES\nWe visualize 12 randomly selected benign images and their corresponding adversarial images in Figure 3. The adversarial images are crafted on the ensemble models, including Inc-v3, Inc-v4, IncRes-v2 and Res-101, using the proposed SI-NI-TI-DIM. We see that these generated adversarial perturbations are human-imperceptible." } ]
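As a companion to Algorithm 1 above, here is a minimal PyTorch sketch of SI-NI-FGSM (our own illustrative implementation, not the released code; the function name and the assumption of NCHW image batches in [0, 1] are ours):

```python
import torch

def si_ni_fgsm(model, x, y_true, eps=16 / 255, T=10, mu=1.0, m=5):
    """SI-NI-FGSM (Algorithm 1): Nesterov look-ahead plus scale copies."""
    alpha = eps / T
    g = torch.zeros_like(x)
    x_adv = x.clone().detach()
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(T):
        # Eq. (6): jump in the direction of previously accumulated gradients
        x_nes = (x_adv + alpha * mu * g).detach().requires_grad_(True)
        # average the gradients over the scale copies S_i(x) = x / 2^i
        grad = torch.zeros_like(x)
        for i in range(m):
            loss = loss_fn(model(x_nes / (2 ** i)), y_true)
            grad = grad + torch.autograd.grad(loss, x_nes)[0]
        grad = grad / m
        # Eq. (7): accumulate the L1-normalized average gradient
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
        # Eq. (8): sign step, then project back into the eps-ball of x
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```

Wrapping the model input with random resizing and padding, and convolving the averaged gradient with a Gaussian kernel before the momentum update, would turn this sketch into the SI-NI-DIM and SI-NI-TIM variants of Algorithm 2.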
2020
null
SP:42d41dec3695a319b32a212d33682ae15535f27c
[ "The paper is a nice piece of works which clearly articulates the objective and the subsequent discussion. The focus of the paper--i.e. disclose the difficulties of piano fingering data annotation and the proposal of automating this process by automatically extracting fingerings from public videos and MIDI files, using computer-vision DNN-based algorithms —although not really mainstream, it does provide some practical insights using a couple of experimental settings (piano fingering model and prediction) to help the readers. ", "In this paper, the authors proposed an automatic piano fingering algorithm, that accepts YouTube videos and corresponding MIDI files and outputs fingering prediction for each note. The claimed contribution is two-fold: First, they proposed the algorithm, and second, they claim that the algorithm can be used to automatically generate large datasets for piano fingering problems. The motivation is clearly stated and convincing. The overall algorithm is mainly described. " ]
Automatic Piano Fingering is a hard task which computers can learn using data. As data collection is hard and expensive, we propose to automate this process by automatically extracting fingerings from public videos and MIDI files, using computer-vision techniques. Running this process on 90 videos results in the largest dataset for PIANO-FINGERING, with more than 150K notes. We show that when running a previously proposed model for automatic PIANO-FINGERING on our dataset and then fine-tuning it on manually labeled piano fingering data, we achieve state-of-the-art results. In addition to the fingering extraction method, we also introduce a novel method for transferring deep-learning computer-vision models to out-of-domain data, by fine-tuning them on out-of-domain augmentations produced by a Generative Adversarial Network (GAN). For demonstration, we anonymously release a visualization of the output of our process for a single video on https://youtu.be/Gfs1UWQhr5Q
[]
[ { "authors": [ "Matteo Balliauw", "Dorien Herremans", "Daniel Palhazi Cuervo", "Kenneth Sörensen" ], "title": "Generating fingerings for polyphonic piano music with a tabu search algorithm", "venue": "In International Conference on Mathematics and Computation in Music,", "year": 2015 }, { "authors": [ "Matteo Balliauw", "Dorien Herremans", "Daniel Palhazi Cuervo", "Kenneth Sörensen" ], "title": "A variable neighborhood search algorithm to generate piano fingerings for polyphonic sheet music", "venue": "International Transactions in Operational Research,", "year": 2017 }, { "authors": [ "Zhe Cao", "Tomas Simon", "Shih-En Wei", "Yaser Sheikh" ], "title": "Realtime multi-person 2d pose estimation using part affinity fields", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Shani Gamrian", "Yoav Goldberg" ], "title": "Transfer learning for related reinforcement learning tasks via image-to-image translation", "venue": "arXiv preprint arXiv:1806.07377,", "year": 2018 }, { "authors": [ "Fred Glover", "Manuel Laguna" ], "title": "Tabu search. In Handbook of combinatorial optimization, pp. 2093–2229", "venue": null, "year": 1998 }, { "authors": [ "Melanie Hart", "Robert Bosch", "Elbert Tsai" ], "title": "Finding optimal piano fingerings", "venue": "The UMAP Journal,", "year": 2000 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "J Pieter Jacobs" ], "title": "Refinements to the ergonomic model for keyboard fingering of parncutt, sloboda, clarke, raekallio, and desain", "venue": "Music Perception: An Interdisciplinary Journal,", "year": 2001 }, { "authors": [ "Alia Al Kasimi", "Eric Nichols", "Christopher Raphael" ], "title": "A simple algorithm for automatic generation of polyphonic piano fingerings", "venue": "In ISMIR,", "year": 2007 }, { "authors": [ "Mathias Kölsch", "Matthew Turk" ], "title": "Robust hand detection", "venue": "In FGR, pp", "year": 2004 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Tsung-Yi Lin", "Piotr Dollár", "Ross Girshick", "Kaiming He", "Bharath Hariharan", "Serge Belongie" ], "title": "Feature pyramid networks for object detection", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Nenad Mladenović", "Pierre Hansen" ], "title": "Variable neighborhood search", "venue": "Computers & operations research,", "year": 1997 }, { "authors": [ "Eita Nakamura", "Nobutaka Ono", "Shigeki Sagayama" ], "title": "Merged-output hmm for piano fingering of both hands", "venue": "In ISMIR, pp", "year": 2014 }, { "authors": [ "Eita Nakamura", "Yasuyuki Saito", "Kazuyoshi Yoshii" ], "title": "Statistical learning and estimation of piano fingering", "venue": "arXiv preprint arXiv:1904.10237,", "year": 2019 }, { "authors": [ "Richard Parncutt", "John A Sloboda", "Eric F 
Clarke", "Matti Raekallio", "Peter Desain" ], "title": "An ergonomic model of keyboard fingering for melodic fragments", "venue": "Music Perception: An Interdisciplinary Journal,", "year": 1997 }, { "authors": [ "Joseph Redmon", "Santosh Divvala", "Ross Girshick", "Ali Farhadi" ], "title": "You only look once: Unified, real-time object detection", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing", "venue": null, "year": 2015 }, { "authors": [ "Fereshteh Sadeghi", "Alexander Toshev", "Eric Jang", "Sergey Levine" ], "title": "Sim2real viewpoint invariant visual servoing by recurrent control", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Tomas Simon", "Hanbyul Joo", "Iain Matthews", "Yaser Sheikh" ], "title": "Hand keypoint detection in single images using multiview bootstrapping", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Srinath Sridhar", "Franziska Mueller", "Antti Oulasvirta", "Christian Theobalt" ], "title": "Fast and robust hand tracking using detection-guided optimization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Yoshinari Takegawa", "Tsutomu Terada", "Shojiro Nishio" ], "title": "Design and implementation of a real-time fingering detection system for piano performance", "venue": "In ICMC,", "year": 2006 }, { "authors": [ "Jonathan Tompson", "Murphy Stein", "Yann Lecun", "Ken Perlin" ], "title": "Real-time continuous pose recovery of human hands using convolutional networks", "venue": "ACM Transactions on Graphics,", "year": 2014 }, { "authors": [ "Paul Viola", "Michael Jones" ], "title": "Rapid object detection using a boosted cascade of simple features", "venue": null, "year": 2001 }, { "authors": [ "Shih-En Wei", "Varun Ramakrishna", "Takeo Kanade", "Yaser Sheikh" ], "title": "Convolutional pose machines", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Yuichiro Yonebayashi", "Hirokazu Kameoka", "Shigeki Sagayama" ], "title": "Automatic decision of piano fingering based on a hidden markov models", "venue": "In IJCAI,", "year": 2007 }, { "authors": [ "Shanxin Yuan", "Qi Ye", "Bjorn Stenger", "Siddhant Jain", "Tae-Kyun Kim" ], "title": "Bighand2. 2m benchmark: Hand pose dataset and state of the art analysis", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Yuanfeng Zhu", "Ajay Sundar Ramakrishnan", "Bernd Hamann", "Michael Neff" ], "title": "A system for automatic animation of piano performances", "venue": "Computer Animation and Virtual Worlds,", "year": 2013 } ]
[ { "heading": "1 INTRODUCTION", "text": "Learning to play the piano is a hard task taking years to master. One of the challenging aspects when learning a new piece is the fingering choice in which to play each note. While beginner booklets contain many fingering suggestions, advanced pieces often contain none or a select few. Automatic prediction of PIANO-FINGERING can be a useful addition to new piano learners, to ease the learning process of new pieces. As manually labeling fingering for different sheet music is an exhausting and expensive task1, In practice previous work (Parncutt et al., 1997; Hart et al., 2000; Jacobs, 2001; Kasimi et al., 2007; Nakamura et al., 2019) used very few tagged pieces for evaluation, with minimal or no training data.\nIn this paper, we propose an automatic, low-cost method for detecting PIANO-FINGERING from piano playing performances captured on videos which allows training modern - data-hungry - neural networks. We introduce a novel pipeline that adapts and combines several deep learning methods which lead to an automatic labeled PIANO-FINGERING dataset. Our method can serve two purposes: (1) an automatic “transcript” method that detects PIANO-FINGERING from video and MIDI files, when these are available, and (2) serve as a dataset for training models and then generalize to new pieces.\nGiven a video and a MIDI file, our system produces a probability distribution over the fingers for each played. Running this process on large corpora of piano pieces played by different artists, yields a total of 90 automatically finger-tagged pieces (containing 155,107 notes in total) and results in the first public large scale PIANO-FINGERING dataset, which we name APFD. This dataset will grow over time, as more videos are uploaded to YouTube. We provide empirical evidence that APFD is valuable, both by evaluating a model trained on it over manually labeled videos, as well as its usefulness by fine-tuning the model on a manually created dataset, which achieves state-of-the-art results.\nThe process of extracting PIANO-FINGERING from videos alone is a hard task as it needs to detect keyboard presses, which are often subtle even for the human eye. We, therefore, turn to MIDI files to obtain this information. The extraction steps are as follows: We begin by locating the keyboard and identify each key on the keyboard (§3.2). Then, we identify the playing hands on top of the keyboard\n1Nakamura et al. (2019) privately reported labeling time of 3-12 seconds per note.\n(§3.3), and detect the fingers given the hands bounding boxes (§3.4). Next, we align between the MIDI file and its corresponding video (§3.6) and finally assign for every pressed note, the finger which was most likely used to play it (§3.5). Albeit the expectation from steps like hand detection and pose estimation, which were extensively studied in the computer-vision literature, we find that in practice, state-of-the-art models do not excel in these tasks for our scenario. We therefore address these weaknesses by fine-tuning an object detection model §3.3 on a new dataset we introduce and train a CycleGAN (Zhu et al., 2017) to address the different lighting scenarios with the pose estimation model §3.4." 
}, { "heading": "2 BACKGROUND", "text": "PIANO-FINGERING was previously studied in multiple disciplines, such as music theory and computer animation (Parncutt et al., 1997; Hart et al., 2000; Jacobs, 2001; Kasimi et al., 2007; Zhu et al., 2013; Nakamura et al., 2019).\nThe fingering prediction task is formalized as follows: Given a sequence of notes, associate each note with a finger from the set {1, 2, 3, 4, 5} × {L,R}. This is subject to constraints such as the positions of each hand, anatomical plausibility of transitioning between two fingers, the hands’ size, etc. Each fingering sequence has a cost, which is derived from a transition cost between two fingers.\nEarly work modeled fingering prediction as a search problem, where the objective is to find the optimal fingering sequence to play all the notes with. A naive approach to finding the best sequence is to exhaustively evaluate all possible transitions between one note to another which is not computationally feasible. By defining a transition matrix corresponding to the probability or “difficulty” of transitioning from one note to another - one can calculate a cost function, which defines the predicted sequence likelihood. Using a search algorithm on top of the transitions allows finding a globally optimal solution. This solution is not practical as well, due to the exponential complexity, and therefore heuristics or pruning are employed to reduce the space complexity. The transition matrix can be manually defined by heuristics or personal estimation (Parncutt et al., 1997; Hart et al., 2000), or instead, not relying on a pre-defined set of rules, and use a Hidden Markov Model (HMM) to learn the transitions (Yonebayashi et al., 2007; Nakamura et al., 2014). In practice, (Yonebayashi et al., 2007) leaves the parameter learning to future work, and instead they manually fine-tune the transition matrix.\nOn top of the transition matrix, practitioners suggested using dynamic programming algorithms to solve the search (Hart et al., 2000). Another option to solve the huge search space is to use a\nsearch algorithm such as Tabu search (Glover & Laguna, 1998) or variable neighborhood search (Mladenović & Hansen, 1997), to find a global plausible solution (Balliauw et al., 2015; 2017). These works are either limited by the defined transition rules, or by making different assumptions to facilitate the search space. Such assumptions come in the form of limiting the predictions to a single hand, limiting the evaluation pieces to contain no chords, rests or substantial lengths during which player can take their hand off the keyboard. Furthermore, all of these works have very small evaluation sets, which in practice makes it hard to compare different approaches, and does not allow to use more advanced models such as neural networks.\nIn this work, we continue the transition of search-based methods that optimize a set of constraints with learning methods that try to imitate human behavior by the use of large datasets. In practice, these methods require lots of training data to achieve good performance, and fingering labeling is particularly expensive and hard to obtain. One way to automatically gather rich fingering data with full hand pose estimation is by using motion capture (MOCAP) gloves when playing the piano. Zhu et al. 
Zhu et al. (2013) suggest a rule-based and data-based hybrid method, initially estimating fingering decisions using a Directed Acyclic Graph (DAG) based on rule-based comfort constraints, which are then smoothed using data recorded in limited playing sessions with motion capture gloves. As MOCAP requires special equipment and may affect the comfort of the player, other work (Takegawa et al., 2006) tried to automatically detect piano fingering from video and MIDI files. The pianist's fingernails were covered with colorful markers, which were detected by a computer vision program. As occlusions can occur, they used a set of rules to correct the detected fingering. In practice, they implemented the system with a camera capturing only 2 octaves (out of 8) and performed a very limited evaluation. The rules they used are simple (such as restricting one finger per played note, or that two successive notes cannot be played with the same finger), but far from capturing real-world scenarios.\nPrevious methods for automatically collecting data (Takegawa et al., 2006; Zhu et al., 2013) were costly, as apart from the equipment needed while playing the piece, the data collectors had to pay the participating pianists. In our work, we rely solely on videos from YouTube, meaning costs remain minimal, with the ability to scale up to new videos.\nRecently, Nakamura et al. (2019) released a relatively large dataset of PIANO-FINGERING manually labeled by one to six annotators, consisting of 150 pieces with partially annotated scores (324 notes per piece on average), for a total of 48,726 notes matched with 100,044 tags from multiple annotators. This is the largest annotated PIANO-FINGERING corpus to date and a valuable resource for this field. The authors propose multiple methods for modeling the task of PIANO-FINGERING, including HMMs and neural networks, and report the best performance with an HMM-based model. In this work, we use their dataset as a gold dataset for comparison and adapt their model to compare with our automatically generated dataset." }, { "heading": "3 OUR APPROACH: EXTRACTING FINGERING FROM ONLINE VIDEOS", "text": "There is a genre of online videos in which people upload piano performances where both the piano and the hands are visible. On some channels, people include not only the video but also the MIDI file recorded while playing the piece. We propose to use machine learning techniques to extract fingering information from such videos, enabling the creation of a large dataset of pieces and their fingering information. This requires the orchestration and adaptation of several techniques, which we describe below.\nThe final output we produce is demonstrated in Figure 1, where we colored both the fingers and the played notes based on the pose-estimation model (§3.4) and the predicted fingers that played them (§3.5). Note that the ring fingers of both hands as well as the index finger of the left hand and the middle finger of the right hand do not press any note in this particular frame, but may play a note in others. We get the information about played notes from the MIDI events." }, { "heading": "3.1 DATA SOURCE", "text": "We extract videos from youtube.com, played by different piano players, from a specific channel containing both video and MIDI files. In these videos, the piano is filmed at a horizontal angle directly facing the keyboard, such that both the keyboard and the hands are visible (as can be seen in Figure 1).
In these videos, the piano is filmed in a horizontal angle\ndirectly to the keyboard, from which both the keyboard and hands are displayed (as can be seen in Figure 1).\nMIDI files A standard protocol for the interchange of musical information between musical instruments, synthesizers, and computers Musical Instrument Digital Interface (MIDI) is a standard format for the interchange of musical information between electronic musical instruments. It consists of a sequence of events describing actions to carry out, when, and allows for additional attributes. In the setup of piano recording, it records what note was played in what time for how long and its pressure strength (velocity). We only use videos that come along with a MIDI file, and use it as the source for the played notes and their timestamp." }, { "heading": "3.2 KEYBOARD AND BOUNDARIES DETECTION", "text": "To allow a correct fingering assignment, we first have to find the keyboard and the bounding boxes of the keys. We detect the keyboard as the largest continuous bright area in the video and identify key boundaries using standard image processing techniques, taking into account the expected number of keys and their predictable location and clear boundaries. For robustness and in order to handle the interfering hands that periodically hide parts of the piano, we combine information from multiple random frames by averaging the predictions from each frame." }, { "heading": "3.3 HAND DETECTION", "text": "A straightforward approach for getting fingers locations in an image is to use a pose estimation model directly on the entire image. In practice, common methods for full-body pose estimation such as OpenPose (Cao et al., 2017) containing hand pose estimation (Simon et al., 2017), make assumptions about the wrist and elbow locations to automatically approximate the hands’ locations. In the case of piano playing, the elbow does not appear in the videos, therefore these systems don’t work. We instead, turn to a pipeline approach where we start by detecting the hands, cropping them, and passing the cropped frames to a pose estimation model that expects the hand to be in the middle of the frame.\nObject Detection (Viola & Jones, 2001; Redmon et al., 2016; Lin et al., 2017a;b), and specifically Hand Detection (Simon et al., 2017; Kölsch & Turk, 2004; Sridhar et al., 2015) are well studied subjects. However, out of the published work providing source code, the code was either complicated to run (e.g. versioning, mobile-only builds, supporting specific GPU, etc.), containing private datasets, or only detecting hands with no distinction between left and right, which is important in our case.\nWe, therefore, created a small dataset with random frames from different videos, corresponding to 476 hands in total evenly split between left and right2. We then fine-tuned a pre-trained object detection model (Inception v2 (Ioffe & Szegedy, 2015), based on Faster R-CNN (Ren et al., 2015), trained on COCO challenge (Lin et al., 2014)) on our new dataset. The fine-tuned model works reasonably well and some hand detection bounding boxes are presented in Figure 1. We release this new dataset and the trained model alongside the rest of the resources developed in this work.\nHaving a working hand-detection model, we perform hand detection on every frame in the video. If more than two hands are detected, we take the 2 highest probability defections as the correct bounding boxes. 
Returning to hand detection: if two hands are detected with the same label ("left-left" or "right-right"), we discard the model's labels and instead assign the label "left" to the leftmost bounding box and "right" to the other, which is the most common position of hands on the piano (a sketch of this post-processing follows below)." }, { "heading": "3.4 FINGER POSE ESTIMATION", "text": "Having the bounding box of each hand is not enough, as in order to assign fingers to notes we need the hand's pose. How can we detect the fingers that pressed the different keys? We turn to pose estimation models, a well-studied subject in computer vision, and use standard models (Wei et al., 2016).\nUsing an off-the-shelf pose estimation model turned out to often fail in our scenario. Some failure examples are presented in Figure 2c, where the first pose is estimated correctly, but the rest either have wrong finger positions or shorter or broken fingers.\n2The data was labeled by the first author using labelImg.\nThe videos we use contain visual effects (e.g., LED lights are turned on every time a key is pressed), and as such the pose-estimation models exhibit two failure modes: (1) when the LED colors are warm (red, orange, yellow), the model sees the light as an extension of the finger, and as such poorly estimates where the keypoints are for each finger; (2) when the LED colors are cool (green, blue, white), the over-saturation seems to overwhelm the model, and it mistakenly estimates the hand's pose as much shorter, considering the lit parts of the piano as part of the background. Some examples of these errors are presented in Figure 2c. Furthermore, the videos are usually very dark, high-contrast, and blurry due to motion blur, which standard datasets (Tompson et al., 2014; Yuan et al., 2017; Simon et al., 2017) and models trained on top of them rarely encounter.\nGiven the observation that pose estimation works well on well-lit images, how can we adapt the pose estimation model to other lighting conditions? This scenario is similar to sim2real work (Sadeghi et al., 2018; Gupta & Booher; Gamrian & Goldberg, 2018), where one wants to transfer a model from simulations to the real world. These works learn a mapping function G1 : T → S that transfers instances xi from the target domain T (the real world) into the source domain S (the simulation), where the mapping is usually achieved by employing a CycleGAN (Zhu et al., 2017). Then, models which are trained on the source domain are applied to the transformed target domain G1(xi) and manage to generalize on the target domain. In our setup, we seek a mapping G2 : S → T that transforms the source domain (i.e., the well-lit videos) into the target domain (i.e., the challenging lighting scenarios). After obtaining the transformation function G2, we apply the pose estimation model f on the source domain, use the transformation separately, and align the predictions to the new representation. This setup yields a performance boost, as we only use the transformation function offline, before training, and avoid applying it for every prediction. We also benefit from better generalization, as we keep good performance on the source domain and gain substantial performance on the target domain.\nWe manually inspect videos and assign each to a group (well-lit or poorly lit). Then, we automatically detect and crop hands from random frames, resulting in 21,747 well-lit hands and 12,832 poorly lit hands. 
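Below is a minimal sketch of the hand-detection post-processing described above (keeping the two most confident detections and fixing duplicate left/right labels). The (label, score, box) tuple format and the function name are illustrative assumptions, not the actual implementation.

```python
def select_hands(detections):
    """Keep the two most confident hand detections and fix duplicate labels.

    `detections`: list of (label, score, box) with label in {"left", "right"}
    and box = (x_min, y_min, x_max, y_max).
    """
    top2 = sorted(detections, key=lambda d: d[1], reverse=True)[:2]
    if len(top2) == 2 and top2[0][0] == top2[1][0]:
        # Same label twice ("left-left"/"right-right"): discard the labels and
        # relabel by position, the leftmost box becoming the left hand.
        left, right = sorted(top2, key=lambda d: d[2][0])
        top2 = [("left", left[1], left[2]), ("right", right[1], right[2])]
    return top2
```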
With these two sets of cropped hands, we then trained a CycleGAN for multiple epochs and chose 15 training checkpoints that produced different lighting scenarios (some examples can be seen in Figure 2). We then fine-tune a pose-estimation model on the original, well-lit frames and on the 15 transformed versions of those frames. This procedure results in a model that is robust to different lighting scenarios, as we show in Figures 2b and 2d, which demonstrate its performance under different lighting conditions." }, { "heading": "3.5 PRESSED FINGER ESTIMATION", "text": "Given that we know which notes were pressed in any given frame (see §3.6 below), there is still uncertainty as to which finger pressed them. This uncertainty comes either from imperfect pose estimation or from multiple fingers located on top of a single note. We model the finger press estimation by calculating the probability of a specific finger having been used, given the hand's pose and the pressed note:\n$$\\arg\\max_{i,j} P(f_i, h_j \\mid n_k)$$\nwhere $i \\in [1, 5]$ indexes the 5 fingers, $h_j \\in \\{h_l, h_r\\}$ stands for the hand being used (left or right), and $n_k \\in [1, 88]$ corresponds to the played key. We chose to model the pressed key as a Gaussian distribution $\\mathcal{N}(\\mu, \\sigma^2)$, where $\\mu$ and $\\sigma$ are defined as $\\sigma(n_k) = X_{n_k+1} - X_{n_k}$ and $\\mu(n_k) = X_{n_k} + 0.5\\,\\sigma(n_k)$, so that $\\mu$ is the center of the key on the x axis and $\\sigma$ is its width (here $X_{n_k}$ denotes the left edge of key $n_k$). The score of each finger given a note in a specific frame is defined as the Gaussian density evaluated at the finger's x position:\n$$g(f_i, h_j \\mid n_k, frame) = \\mathcal{N}(X_{f_i,h_j \\mid frame} \\mid \\mu(n_k), \\sigma(n_k)^2)$$\nThe probability of a given finger having played a note, given the note and a frame, is obtained by normalizing $g$ over all fingers:\n$$p(f_i, h_j \\mid n_k, frame) = \\frac{g(f_i, h_j \\mid n_k, frame)}{\\sum_{n=1}^{5} \\sum_{m \\in \\{l,r\\}} g(f_n, h_m \\mid n_k, frame)}$$\nAs most key presses last more than one frame, we make use of multiple frames to overcome some of the errors from previous steps and to produce a more accurate prediction. For this reason, we aggregate the frames spanned by a key press. We treat the first frame as the main signal point and assign each successive frame an exponentially declining weight, since finger changes can occur in later frames:\n$$p(f_i, h_j \\mid n_k, frame_{k_1}, \\ldots, frame_{k_n}) = \\frac{\\sum_{l=1}^{n} 0.5^l \\, p(f_i, h_j \\mid n_k, frame_{k_l})}{\\sum_{l=1}^{n} 0.5^l}$$\nFinally, we normalize the weighted sum of probabilities to obtain a probability distribution aggregated over all frames.\nIn our dataset, we release all probabilities for each played note, along with the maximum-likelihood finger estimation. We define the "confidence" score of the extraction of a single piece as the product of the highest finger probability of each note. Figure 3 shows the precision and recall of the predictions based on a cutoff for the highest probability of the note. We see a positive correlation between confidence threshold and precision, and a negative correlation between confidence and recall, meaning we can get relatively high precision for a small number of notes, or relatively low precision for a high number of notes." }, { "heading": "3.6 VIDEO AND MIDI ALIGNMENT", "text": "We consider the MIDI and video files to be complementary, as they were recorded simultaneously. The MIDI files are the source for which keys were pressed, at what time, and for how long. The videos are the source for the locations of the piano, the hands, and the fingers. 
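Before detailing the alignment, here is a minimal NumPy sketch of the pressed-finger estimation of §3.5. The input names (fingertip x-coordinates, key left edges) are illustrative assumptions; the formulas follow the text above, and the Gaussian's constant factor is dropped since it cancels in the normalization. We assume key_left_edges has one extra entry past the last key.

```python
import numpy as np

def finger_probs(finger_xs, key_left_edges, k):
    """P(finger | pressed key k) for a single frame.

    `finger_xs`: x-coordinates of the 10 fingertips (5 per hand) from the
    pose estimator; `key_left_edges[k]`: left edge X_k of key k on the x axis.
    """
    sigma = key_left_edges[k + 1] - key_left_edges[k]   # key width
    mu = key_left_edges[k] + 0.5 * sigma                # key center
    # unnormalized Gaussian score of each fingertip relative to the key
    g = np.exp(-0.5 * ((np.asarray(finger_xs) - mu) / sigma) ** 2)
    return g / g.sum()                                  # normalize over fingers

def aggregate_frames(per_frame_probs):
    """Exponentially weighted (0.5^l) average over the frames of a key press."""
    w = 0.5 ** np.arange(1, len(per_frame_probs) + 1)
    return (w[:, None] * np.asarray(per_frame_probs)).sum(axis=0) / w.sum()
```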
As noted, these two sources are not synchronized, but as they depict the same piano performance, a perfect alignment exists (up to the video frame-rate resolution).\nWe extract the audio track from the video and treat the first audio peak as the beginning of the piece, clipping the prior part of the video and aligning it with the first event from the MIDI file. In practice, we determine this starting point as the first point in the signal where the signal amplitude is higher than a fifth of the mean absolute value of the entire signal.\nThis heuristic achieves a reasonable alignment, but we observe some alignment mismatch of 80-200 ms. We tackle the misalignment by using the final system confidence (Section 3.5) as a signal: every piece gets a final score after running the whole process, reflecting the system's confidence in the predicted notes. We look for an alignment that maximizes the system confidence over the entire piece:\n$$alignment(MIDI, Video) = \\arg\\max_i \\; score(MIDI_{t_0}, Video_{t_i})$$\nwhere $MIDI_{t_0}$ is the starting time of the MIDI file and $Video_{t_i}$ is the alignment time of the video; $Video_{t_0}$ is obtained by the heuristic alignment described in the previous paragraph. We use the confidence score as a proxy for the alignment precision and search for the alignment that maximizes the confidence score of the system. More specifically, given the initial offset from the audio-MIDI alignment, we take a window of 1 second in frames (usually 25) on each side and compute the score of the final system on the entire piece. We choose the offset that results in the best confidence score as the alignment offset." }, { "heading": "3.7 THE RESULTING DATASET: APFD", "text": "We follow the methods described in this section and use them to label 90 piano pieces from 42 different composers, with 155,107 notes in total. On average, each piece contains 1,723 notes." }, { "heading": "4 RESULTS", "text": "In this section, we present multiple evaluations of our overall system. We begin by evaluating the entire process, assessing how the overall system performs at predicting the pressed fingering. Next, we use the dataset we collected and train a PIANO-FINGERING model. We fine-tune this model on a previously manually annotated dataset for PIANO-FINGERING and show that using our data achieves better performance.\nAs piano pieces are usually played by two hands, we avoid modeling each hand separately; instead, we exploit their symmetry and simply flip one hand's notes, mapping them onto the piano scale, following previous practice (Nakamura et al., 2019).\nFor evaluation, we use the match rate between the prediction and the ground truth. For cases where there is a single ground truth, this is equivalent to measuring accuracy. When more than one labeling is available, we simply average the accuracies with respect to each labeling.3" }, { "heading": "4.1 FINGER PRESS ESTIMATION EVALUATION", "text": "As the pose estimation is one of the major components directly affecting our system's performance, we isolate this part in order to estimate the gain from fine-tuning with the CycleGAN (Section 3.4).\nWe manually annotated five random pieces from our dataset by marking the pressing finger for each played note in the video. Then, using our system (§3.5), we estimate which finger was used for each key. We use the confidence score produced by the model as a threshold to keep or discard the model's key predictions, and report precision, recall, and F1 scores at multiple thresholds. 
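A sketch of the threshold sweep just described; the note-level inputs and the reading of recall as the fraction of all notes that are kept and correct are our assumptions for illustration.

```python
import numpy as np

def pr_at_threshold(confidences, correct, t):
    """Precision/recall/F1 when keeping only predictions with confidence >= t.

    `confidences[i]`: highest finger probability for note i;
    `correct[i]`: 1 if the predicted finger matches the manual annotation.
    """
    confidences, correct = np.asarray(confidences), np.asarray(correct)
    kept = confidences >= t
    if not kept.any():
        return float("nan"), 0.0, 0.0
    precision = correct[kept].mean()
    recall = correct[kept].sum() / len(correct)  # kept-and-correct over all notes
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```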
Moreover, we compare the scores of these pieces with and without using the CycleGAN. We do so for the five annotated pieces and report the results in Table 1. When considering a high confidence score (>90%), both the pre-trained and fine-tuned models correctly mark all considered notes (which constitute 34-36% of the data). However, when considering decreasing confidences, the fine-tuned model manages to achieve higher precision and higher recall, contributing to an overall higher F1 score. With no confidence threshold (i.e., using all fingering predictions), the pre-trained model achieves 93% F1, while the fine-tuned one achieves 97% F1, a 57% error reduction." }, { "heading": "4.2 AUTOMATIC PIANO FINGERING PREDICTION", "text": "\n3This matches the general match rate evaluation metric in (Nakamura et al., 2019).\nIn order to assess the value of APFD, we seek to show its usefulness on the end task: Automatic Piano Fingering. To this end, we train a standard sequence tagging neural model using our dataset, evaluating on the subset of the videos we manually annotated. Then, we fine-tune this model on PIG (Nakamura et al., 2019), a manually labeled dataset, on which we achieve better results than training on that dataset alone.\nWe model PIANO-FINGERING as a sequence labeling task where, given a sequence of notes $n_1, n_2, \\ldots, n_N$, we need to predict a sequence of fingerings $y_1, y_2, \\ldots, y_N$, where $y_i \\in \\{1, 2, 3, 4, 5\\}$ corresponds to the 5 fingers of one hand. We employ a standard sequence tagging technique: we embed each note and run a BiLSTM on top of the embeddings. On every contextualized note we then use a Multi-Layer Perceptron (MLP) to predict the label. The model is trained to minimize the cross-entropy loss. This is the same model used in (Nakamura et al., 2019), referred to as DNN (LSTM); a model sketch is given at the end of §4.2.1." }, { "heading": "4.2.1 PIANO FINGERING MODEL", "text": "Nakamura et al. (2019) did not use a development set; therefore, in this work, we leave out 1 piece from the training set and use it as a development set. Our dataset, APFD, is composed of 90 pieces, which we split into 75/10 for the training and development sets, respectively, and we use the 5 manually annotated pieces as a test set. We note that the development set is silver data (automatically annotated) and probably contains mistakes. The results are summarized in Table 1. We run the same architecture as (Nakamura et al., 2019) with some different hyperparameters and achieve 71.4%/64.1% on our and PIG's test sets, respectively. To evaluate the usefulness of our data relative to PIG's data, we take the model trained on our silver data and fine-tune it on PIG. This results in 66.8% accuracy, 2.3% above the previous state-of-the-art, which was achieved by an HMM (Nakamura et al., 2019). We attribute this gain in performance to our dataset, which both increases the number of training examples and allows training bigger neural models, which excel with more training examples. We also experiment in the opposite direction and fine-tune the model trained on PIG with our data, which results in 73.6% accuracy, better than the 73.2% accuracy achieved by training on our data alone.
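A minimal PyTorch sketch of the DNN (LSTM)-style tagger described in §4.2. The paper reuses the architecture of Nakamura et al. (2019) with different hyperparameters; the sizes below (embedding 64, hidden 128) are illustrative assumptions, not the reported configuration.

```python
import torch
import torch.nn as nn

class FingeringTagger(nn.Module):
    """Embed each note, contextualize with a BiLSTM, classify with an MLP."""

    def __init__(self, n_pitches=88, emb=64, hidden=128, n_fingers=5):
        super().__init__()
        self.embed = nn.Embedding(n_pitches, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_fingers))

    def forward(self, notes):            # notes: (batch, seq_len) pitch indices
        h, _ = self.lstm(self.embed(notes))
        return self.mlp(h)               # (batch, seq_len, n_fingers) logits

# toy training step: cross-entropy between logits and gold fingers
model = FingeringTagger()
notes = torch.randint(0, 88, (2, 30))
gold = torch.randint(0, 5, (2, 30))
loss = nn.CrossEntropyLoss()(model(notes).flatten(0, 1), gold.flatten())
loss.backward()
```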
We show that this dataset, although noisy, is valuable, by training a neural network model on it and fine-tuning on a gold dataset, where we achieve state-of-the-art results. In future work, we intend to improve the data collection by improving the pose-estimation model, better handling high-speed movements and the proximity of the hands, which often cause errors in pose estimation. Furthermore, we intend to design improved neural models that can take previous fingering predictions into account, in order to obtain better global fingering transitions." }, { "heading": "A DATASET SAMPLES", "text": "For every video our system extracts PIANO-FINGERING from, it also outputs the video overlaid with the estimated piano keys, an indication of which notes are played and which fingers are being used to play them (the key's color), up to two bounding boxes for the two hands (light blue for the left hand, light green for the right hand), and the pose estimation for each hand.\nWe include the output videos for all of the pieces we manually annotated to visually demonstrate our system and its accuracy in different playing situations. The videos were uploaded to a new, anonymous YouTube channel, and each contains a link to the original video in the description.\nRiver Flows in You: https://youtu.be/Gfs1UWQhr5Q\nFaded: https://youtu.be/LU2ibOW6z7U\nMoonlight Sonata 1st Movement: https://youtu.be/wp8j239fs9o\nRondo Alla Turca: https://youtu.be/KqTaPfoIuuE\nNocturne in E Flat Major (Op. 9 No. 2): https://youtu.be/xXHUUzTa5vU" } ]
2019
AT YOUR FINGERTIPS: AUTOMATIC PIANO FINGERING DETECTION
SP:f78edf237bfd944156163801e210e08fd16f8625
[ "This paper aims at revealing the relationship between the quality of deep representations and the attack susceptibility of deep classification models. To this end, they propose the zero-shot test to investigate the \"quality\" of learned representations for unknown classes. Specifically, they leverage two kinds of quality metrics on data of unknown classes. The first one is based on clustering named Davies-Bouldin Index which measures the compactness of intra-cluster. The second one is based on the difference of soft-label histogram distributions with/without unknown classes during training, which may describe the generalization for unknown classes or bias towards known classes of learned features. Finally, with these two metrics, they rank the quality of different models and compare such ranking results with the attack robustness obtained by different attack techniques on CIFAR-10 dataset.", "This paper proposes to evaluate the robustness of the neural networks by extrapolating to the unseen classes. However, the authors only include evaluation for non-robust trained models, without considering the robust trained model, such as Madry et al. [1]. The conclusion is not convincing that the authors studied the robustness using only non-robust models, because it is well known that the accuracy for attacking non-robust model can be 100% (for CIFAR-10). It is useful if the authors study whether their method can be used to measure the robustness of the robust trained models." ]
Neural networks have been shown vulnerable to adversarial samples. Slightly perturbed input images are able to change the classification of accurate models, showing that the representations learned are not as good as previously thought. To aid the development of better neural networks, it would be important to evaluate to what extent current neural networks' representations capture the existing features. Here we propose a way to evaluate the representation quality of neural networks using a novel type of zero-shot test, entitled Raw Zero-Shot. The main idea lies in the fact that some features are present in unknown classes and that unknown classes can be defined as a combination of previously learned features. To evaluate the soft-labels of unknown classes, two metrics are proposed. One is based on clustering validation techniques (Davies-Bouldin Index) and the other is based on the soft-label distance from a given correct soft-label. Experiments show that such metrics are in accordance with the robustness to adversarial attacks and might serve as guidance for building better models, as well as be used in loss functions to create new types of neural networks. Interestingly, the results suggest that dynamic routing networks such as CapsNet have better representations, while current deeper DNNs are trading off representation quality for accuracy.
[]
[ { "authors": [ "C. Agarwal", "A. Nguyen", "D. Schonfeld" ], "title": "Improving robustness to adversarial examples by encouraging discriminative features", "venue": "IEEE International Conference on Image Processing (ICIP),", "year": 2019 }, { "authors": [ "Zeynep Akata", "Scott Reed", "Daniel Walter", "Honglak Lee", "Bernt Schiele" ], "title": "Evaluation of output embeddings for fine-grained image classification", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Anish Athalye", "Ilya Sutskever" ], "title": "Synthesizing robust adversarial examples", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": null, "year": 2018 }, { "authors": [ "Abhijit Bendale", "Terrance E Boult" ], "title": "Towards open set deep networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Maxime Bucher", "Stéphane Herbin", "Frédéric Jurie" ], "title": "Improving semantic embedding consistency by metric learning for zero-shot classiffication", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Yanwei Fu", "Yongxin Yang", "Tim Hospedales", "Tao Xiang", "Shaogang Gong" ], "title": "Transductive multi-label zero-shot learning", "venue": "arXiv preprint arXiv:1503.07790,", "year": 2015 }, { "authors": [ "Justin Gilmer", "Nicolas Ford", "Nicholas Carlini", "Ekin Cubuk" ], "title": "Adversarial examples are a natural consequence of test error in noise", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens van der Maaten" ], "title": "Countering adversarial images using input transformations", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Ruitong Huang", "Bing Xu", "Dale Schuurmans", "Csaba Szepesvári" ], "title": "Learning with a strong adversary", "venue": "arXiv preprint arXiv:1511.03034,", "year": 2015 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Christoph H Lampert", "Hannes Nickisch", "Stefan Harmeling" ], "title": "Learning to detect unseen object classes by between-class attribute transfer", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Xingjun Ma", "Bo Li", "Yisen Wang", "Sarah M Erfani", "Sudanthi Wijewickrema", 
"Grant Schoenebeck", "Dawn Song", "Michael E Houle", "James Bailey" ], "title": "Characterizing adversarial subspaces using local intrinsic dimensionality", "venue": "arXiv preprint arXiv:1801.02613,", "year": 2018 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Omar Fawzi", "Pascal Frossard" ], "title": "Universal adversarial perturbations", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Anh Nguyen", "Jason Yosinski", "Jeff Clune" ], "title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Mohammad Norouzi", "Tomas Mikolov", "Samy Bengio", "Yoram Singer", "Jonathon Shlens", "Andrea Frome", "Greg S Corrado", "Jeffrey Dean" ], "title": "Zero-shot learning by convex combination of semantic embeddings", "venue": "arXiv preprint arXiv:1312.5650,", "year": 2013 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Xi Wu", "Somesh Jha", "Ananthram Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "venue": "In 2016 IEEE Symposium on Security and Privacy (SP),", "year": 2016 }, { "authors": [ "Sara Sabour", "Nicholas Frosst", "Geoffrey E Hinton" ], "title": "Dynamic routing between capsules", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Mahmood Sharif", "Sruti Bhagavatula", "Lujo Bauer", "Michael K Reiter" ], "title": "Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition", "venue": "In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2016 }, { "authors": [ "Yang Song", "Taesup Kim", "Sebastian Nowozin", "Stefano Ermon", "Nate Kushman" ], "title": "Pixeldefend: Leveraging generative models to understand and defend against adversarial examples", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "JT Springenberg", "A Dosovitskiy", "T Brox", "M Riedmiller" ], "title": "Striving for simplicity: The all convolutional net", "venue": "In ICLR (workshop track),", "year": 2015 }, { "authors": [ "Jiawei Su", "Danilo Vasconcellos Vargas", "Sakurai Kouichi" ], "title": "One pixel attack for fooling deep neural networks", "venue": "arXiv preprint arXiv:1710.08864,", "year": 2017 }, { "authors": [ "Christian et al. Szegedy" ], "title": "Intriguing properties of neural networks", "venue": "In In ICLR. 
", "year": 2014 }, { "authors": [ "Joshua B Tenenbaum", "Vin De Silva", "John C Langford" ], "title": "A global geometric framework for nonlinear dimensionality reduction", "venue": null, "year": 2000 }, { "authors": [ "Laura Thesing", "Vegard Antun", "Anders C Hansen" ], "title": "What do AI algorithms actually learn? On false structures in deep learning", "venue": null, "year": 2019 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jonathan Uesato", "Brendan O'Donoghue", "Pushmeet Kohli", "Aaron Oord" ], "title": "Adversarial risk and the dangers of evaluating against weak attacks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Danilo Vasconcellos Vargas", "Jiawei Su" ], "title": "Understanding the one-pixel attack: Propagation maps and locality analysis", "venue": "arXiv preprint arXiv:1902.02947,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Adversarial samples are slightly perturbed inputs that can make neural networks misclassify. They are carefully crafted by searching for variations in the input that, for example, could decrease the soft-labels of the correct class. Since they were discovered some years ago (Szegedy, 2014), the number of adversarial samples have grown in both number and types. Random noise was shown to be recognized with high confidence by neural networks (Nguyen et al., 2015), universal perturbations, that can be added to almost any image to generate an adversarial sample, were shown to exist (Moosavi-Dezfooli et al., 2017), and the addition of crafted patches was shown to cause networks to misclassify (Brown et al., 2017). Only one pixel is enough to make networks misclassify (Su et al., 2017). Such attacks can also be easily transferred to real-world scenarios (Kurakin et al., 2016),(Athalye & Sutskever, 2018), which confers a big issue as well as a security risk for current deep neural networks’ applications.\nAlbeit the existence of many defences, there is not any known learning algorithm or procedure that can defend against adversarial attacks consistently. Many works have tried to defend by hiding or modifying the gradients to make neural networks harder to attack. However, a recent paper shows that most of these defences fall into the class of obfuscated gradients which have their shortcomings (e.g., they can be easily bypassed by transferable attacks) (Athalye et al., 2018). Additionally, the use of an augmented dataset with adversarial samples (named adversarial training) is perhaps one of the most successful approaches to construct robust neural networks (Goodfellow et al., 2014),(Huang et al., 2015), (Madry et al., 2018). However, it is still vulnerable to attacks and has a strong bias in the type of adversarial samples used in training (Tramèr et al., 2018).\nThis shows that a deeper understanding of the issues is needed to enable more consistent defences to be created. Few works focused on understanding the reason behind such lack of robustness. In (Goodfellow et al., 2014), it is argued that Deep Neural Networks’s (DNN) linearity are one of the main reasons. Recent investigations reveal that attacks are changing where the algorithm is paying attention (Vargas & Su, 2019), other experiments show that deep learning neural networks learn false\nstructures that are easier to learn rather than the ones expected (Thesing et al., 2019) and an accuracy and robustness trade-off for models were shown to exist (Tsipras et al., 2019).\nIn this paper, we propose a methodology of how to evaluate the representation of machine learning methods. Based on these metrics, we reveal a link between deep representations’ quality and attack susceptibility. Specifically, we propose a test called Raw Zero-Shot and two metrics to evaluate DNN’s representations." }, { "heading": "1.1 RECENT ADVANCES IN ATTACKS AND DEFENSES", "text": "DNNs were shown vulnerable to many types of attacks. For example, the output high confidence results to noise images (Nguyen et al., 2015), universal perturbations in which a single perturbation can be added to almost any input to create an adversarial sample are possible (Moosavi-Dezfooli et al., 2017), the addition of image patches can also make them misclassify (Brown et al., 2017). Moreover, the vulnerability can be exploited even with a single pixel, i.e., changing a single pixel is often enough to make a DNNs misclassify (Su et al., 2017). 
Most of these attacks can be transformed into real-world attacks by merely printing the adversarial samples (Kurakin et al., 2016). Moreover, crafted glasses (Sharif et al., 2016) or even general 3D adversarial objects (Athalye & Sutskever, 2018) can be used as attacks.\nAlthough many defensive systems were proposed to tackle the current problems, there is still no consistent solution available. Defensive distillation, in which a smaller neural network squeezes the content learned by the original DNN, was proposed (Papernot et al., 2016). However, it was shown not to be robust enough (Carlini & Wagner, 2017). Adversarial training was also proposed as a defence, in which adversarial samples are used to augment the training dataset (Goodfellow et al., 2014; Huang et al., 2015; Madry et al., 2018). With adversarial training, DNNs increase slightly in robustness, but with a bias towards the adversarial samples used, and they remain vulnerable to attacks in general (Tramèr et al., 2018). There are many recent variations of defenses whose objective is to hide the gradients (obfuscated gradients) (Ma et al., 2018; Guo et al., 2018; Song et al., 2018). However, they can be bypassed by various types of attacks (such as attacks not using gradients, transfer of adversarial samples, etc.) (Athalye et al., 2018; Uesato et al., 2018).\nA couple of works have tried to understand the reason behind this lack of robustness. To cite some, in (Goodfellow et al., 2014) it is argued that the main reason may lie in DNNs' lack of non-linearity. Another work argues that the perturbations cause a change in the saliency of images, which makes the model switch its attention to another part of the image (Vargas & Su, 2019). False structures that are easier to learn were also shown to be related to the problem (Thesing et al., 2019). Moreover, in (Tsipras et al., 2019), an accuracy-robustness trade-off was shown to exist." }, { "heading": "1.2 ZERO-SHOT LEARNING", "text": "Zero-Shot learning is a method used to estimate unknown classes which do not appear in the training data. The motivation of Zero-Shot learning is to transfer knowledge from training classes to unknown classes. Existing methods approach the problem by estimating unknown classes from a manually defined attribute vector. Attribute vectors are annotated for both known and unknown classes, and for each class, whether an attribute, such as "colour" or "shape", belongs to the class or not is represented by 1 or 0. For example, in (Lampert et al., 2009) the authors proposed the Direct Attribute Prediction (DAP) model, which learns a parameter for estimating each attribute from the target data; unknown classes are then estimated from the attributes predicted with these parameters. Based on this research, other zero-shot learning methods have been proposed which use an embedded representation generated with a natural language processing algorithm instead of a manually created attribute vector (Zhang & Saligrama, 2016; Fu et al., 2015; Norouzi et al., 2013; Akata et al., 2015; Bucher et al., 2016).\nIn (Zhang & Saligrama, 2015), a different approach to estimating unknown classes is proposed. This method constructs, for an unknown class, a histogram of the distribution over known classes. In this approach, it is assumed that two unknown classes are the same if the histograms generated in the target domain and the source domain are similar. 
This perspective is similar to our approach, because our method also represents an unknown class as a distribution over known classes. However, our objective is not to estimate the unknown class, and we do not use a source domain. Our objective here is to analyze DNNs' representations by using this distribution." }, { "heading": "2 ON THE LINKS BETWEEN ROBUSTNESS EVALUATION AND REPRESENTATION", "text": "In the canonical classification setting, the goal of a classifier is to achieve low expected loss:\n$$\\mathbb{E}_{(x,y)\\sim D}\\,[L(x, y; \\theta)] \\quad (1)$$\nRobustness against adversarial attacks is a slightly different setting. To achieve high robustness in this setting, a classifier should have a low adversarial loss under noise $\\delta \\in \\Delta$:1\n$$\\mathbb{E}_{(x,y)\\sim D}\\,[L(x+\\delta, y; \\theta)]. \\quad (2)$$\nConsidering the Mean Squared Error (MSE), we have:\n$$\\mathbb{E}_{(x,y)\\sim D}[L(x+\\delta, y; \\theta)] = \\mathbb{E}_{(x,y)\\sim D}[(f(x+\\delta) - h(x+\\delta))^2] + \\mathbb{E}_{(x,y)\\sim D}[(h(x+\\delta) - \\hat{y}(x+\\delta))^2],$$\nwhere $h(x) = \\mathbb{E}[h_D(x)]$ is the expected behavior of the prediction when averaged over many datasets, $f(x)$ is the ground truth, and $\\hat{y}(x) = h_D(x)$ is the output after learning on a given dataset $D$. For robustness to increase, adversarial training requires that datasets have many noisy samples, i.e., $x+\\delta \\in D$. However, the more noise is added to the images, the closer $D$ comes to the set of all possible images $\\mathbb{R}^{M \\times N}$:\n$$\\lim_{\\Delta \\to \\infty} D = \\mathbb{R}^{M \\times N} \\quad (3)$$\nHowever, $f(x)$ is undefined2 for $D \\in \\mathbb{R}^{M \\times N}$ in which $y \\notin C$ for the set of known classes $C$. Even a small amount of noise may be enough to cause $y \\notin C$ and thus leave $f(x)$ undefined. Would it be possible to evaluate the robustness and/or the quality of a model without a well-defined $y$?\nTo answer this question, we take into account an ideal representation $z$ and the representation learned by the model $\\hat{z}$:\n$$\\mathbb{E}[(f(x+\\delta; z) - h(x+\\delta; \\hat{z}))^2] + \\mathbb{E}[(h(x+\\delta; \\hat{z}) - \\hat{y}(x+\\delta; \\hat{z}))^2]$$\nInterestingly, although $y$ is undefined, $z$ represents the learned features and is well defined for any input. Moreover, by considering learned classes to be clustered in $z$ space, unsupervised learning evaluation can be used to evaluate $z$ even without a well-defined $y$. We use here a well-known clustering analysis index to evaluate clusters in $z$ by their intra-cluster distance. In $z$, it is also possible to evaluate the representation of known and unknown classes, which should share some features. Moreover, we hypothesize here that unknown classes should evaluate $z$ with less bias, because a direct map from input to output does not exist. Any projection of the input onto any of the feature maps or the output layer could be used as $z$. To take the entire projection into account, we here take $z$ to be the final projection of the input onto the classes, i.e., $z$ is the soft-label array $e$." }, { "heading": "3 RAW ZERO-SHOT", "text": "In this paper, we propose to evaluate the learned representation by conducting experiments over the soft-labels of images of unknown classes. This is based on the hypothesis that if a model is capable of learning useful features, an unknown class would also trigger some of these features inside the model.\n1Here we take the error in noise to be the adversarial loss instead of the worst-case error; for a discussion of the relationship between error in noise and adversarial samples, please refer to (Gilmer et al., 2019).\n2Alternatively, f(x) could be defined for any noise if an additional unknown class is defined, such as with an OpenMax layer (Bendale & Boult, 2016). 
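As a brief aside before continuing with the Raw Zero-Shot definition: the MSE decomposition in §2 above is the standard bias-variance decomposition. A short derivation sketch follows, with the arguments $(x+\delta)$ suppressed and using the definition $h = \mathbb{E}[\hat{y}]$ from the text:

```latex
\begin{align*}
\mathbb{E}\big[(f - \hat{y})^2\big]
  &= \mathbb{E}\big[\big((f - h) + (h - \hat{y})\big)^2\big] \\
  &= (f - h)^2 + \mathbb{E}\big[(h - \hat{y})^2\big]
     + 2\,(f - h)\,\mathbb{E}[h - \hat{y}]
\end{align*}
% The cross term vanishes because \mathbb{E}[h - \hat{y}] = h - \mathbb{E}[\hat{y}] = 0,
% leaving exactly the two terms of the decomposition given in the text.
```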
We call this type of test over unknown classes, without any other information, Raw Zero-Shot (Figure 1).\nRaw Zero-Shot is a supervised learning test in which only $n-1$ of the $n$ classes are shown to the classifier during training. The classifier also has only $n-1$ possible outputs. During testing, only unknown classes are presented to the classifier. The soft-labels outputted for the given unknown class are recorded, and the process is repeated for all $n$ classes, removing a different class each time.\nTo evaluate the representation quality, metrics computed over the soft-labels are used. These metrics are based on different hypotheses of what defines a feature or a class. In the same way that there are different types of robustness, there are also different types of representation quality. Therefore, the metrics are somewhat complementary, each highlighting a different aspect of the whole. The following subsections define two of them." }, { "heading": "3.1 DAVIES-BOULDIN METRIC - CLUSTERING HYPOTHESIS", "text": "The soft-labels of a classifier compose a space in which a given image is classified as a weighted vector with respect to the previously learned classes. Considering that a cluster in this space would constitute a class, we can use clustering validation techniques to evaluate the representation (Figure 2).\nHere we choose, for simplicity, one of the most used metrics in internal cluster validation, the Davies-Bouldin Index (DBI). DBI is defined as follows:\n$$DBI = \\left( \\frac{1}{n_e} \\sum_{j=1}^{n_e} |e_j - c_n|^2 \\right)^{1/2}, \\quad (4)$$\nin which $c_n$ is the centroid of the cluster, $e_j$ is one soft-label and $n_e$ is the number of samples." }, { "heading": "3.2 AMALGAM METRIC - AMALGAM HYPOTHESIS", "text": "If DNNs can learn the features present in the classes, it would be reasonable to consider that the soft-labels also describe a given image as a combination of the previously learned classes. This is also true when an image contains an unknown class. Similar to a vector space in linear algebra, the soft-labels can be combined to describe unknown objects in this space. This is analogous to how children describe previously unseen objects as a combination of previously seen objects. Differently from the previous metric, here we are interested in the exact values of the soft-labels. However, what would constitute the correct soft-labels for a given unknown class needs to be determined.\nTo calculate the correct soft-label of a given unknown class (the amalgam proportion) automatically, we use here the assumption that accurate classifiers should already output a good approximation of the amalgam proportion. Therefore, if a classifier is trained on all $n$ classes, its soft-labels over the remaining $n-1$ classes form the amalgam proportion (Figure 2 illustrates the concept). Consequently, the Amalgam Metric (AM) is defined as:\n$$h'_i = \\sum_{j=1}^{n_e} e'_j, \\quad h_i = \\sum_{j=1}^{n_e} e_j, \\qquad AM = \\frac{1}{n} \\sum_{i=1}^{n} \\frac{\\| h'_i - h_i \\|_1}{n-1}, \\quad (5)$$\nin which $e'$ is the normalized (such that it sums to one) soft-label from the classifier trained over $n$ classes and $e$ is the soft-label from the classifier trained over $n-1$ classes." }, { "heading": "4 RAW ZERO-SHOT EXPERIMENTS", "text": "Here, we conduct Raw Zero-Shot experiments to evaluate the representation of DNNs. 
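Before the experimental details, a minimal NumPy sketch of the two metrics as reconstructed in Equations 4 and 5. The function names are our assumptions, and the outer averaging over the n held-out classes is left to the caller.

```python
import numpy as np

def dbi(soft_labels):
    """Eq. 4: root mean squared distance of unknown-class soft-labels to
    their centroid (smaller means a tighter, better-defined cluster)."""
    e = np.asarray(soft_labels)            # shape (n_e, n-1)
    c = e.mean(axis=0)                     # cluster centroid c_n
    return np.sqrt(np.mean(np.sum((e - c) ** 2, axis=1)))

def amalgam_term(e_full, e_loo):
    """Per-class term of Eq. 5: ||h'_i - h_i||_1 / (n-1).

    `e_full`: soft-labels of the n-class model, restricted to the n-1 known
    classes and renormalized to sum to one; `e_loo`: soft-labels of the model
    trained without the held-out class. AM averages this term over all classes.
    """
    h_full = np.asarray(e_full).sum(axis=0)   # histogram h'_i
    h_loo = np.asarray(e_loo).sum(axis=0)     # histogram h_i
    return np.abs(h_full - h_loo).sum() / len(h_full)
```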
To obtain results over a wide range of architectures, we chose to evaluate CapsNet (a recently proposed, completely different architecture based on dynamic routing and capsules) (Sabour et al., 2017), ResNet (a state-of-the-art architecture based on skip connections) (He et al., 2016), Network in Network (NIN) (an architecture which uses micro neural networks instead of linear filters) (Lin et al., 2013), All Convolutional Network (AllConv) (an architecture without max pooling and fully connected layers) (Springenberg et al., 2015), and LeNet (a simpler architecture which is also a historical landmark) (LeCun et al., 1998). All the experiments are run over the CIFAR dataset, using a training dataset with all the samples of one specific class removed. This process is repeated for all classes, removing the samples of a different class each time.\nTo analyze the correlation between representation metrics and robustness against adversarial attacks, we conducted adversarial attacks on all the architectures tested using the most well-known algorithms, such as the Carlini & Wagner attack (Carlini & Wagner, 2017), the Fast Gradient Method (FGM) (Goodfellow et al., 2014), the Basic Iterative Method (BIM) (Kurakin et al., 2016), DeepFool (Moosavi-Dezfooli et al., 2016), and the Projected Gradient Descent Method (PGDM) (Madry et al., 2018) (Table 1).\nFor all tests, ε is fixed to the corresponding value given in the table. However, different methods have different meanings for ε: (a) for pixel attacks, ε is the maximum number of pixels to be changed; (b) for threshold attacks, ε is the maximum amount of change allowed per pixel; (c) for FGM, BIM and PGDM, ε is the attack step size (input variation); and (d) for DeepFool, ε is the overshoot parameter.\nTable 2 shows the ranking based on the attack accuracy and the required perturbation (Table 1). In general, CapsNet is shown to be the most robust; AllConv and DenseNet follow with solid placements, while the remaining networks vary depending on the perspective used for the analysis (i.e., higher accuracy or lower L2). These robustness rankings will be used in the next sections to verify the relationship between robustness against adversarial samples and metrics for evaluating representation quality." }, { "heading": "4.1 EXPERIMENTS ON DBI METRIC", "text": "Table 3 shows the results with the DBI metric (the smaller the better) and the respective ranking of each neural network. According to this metric, CapsNet possesses the best representation of all networks tested. LeNet is considered the second-best neural network regarding representation, followed by AllConv, NIN, and then the deeper neural networks. The DBI metric matches exceptionally well with both the accuracy- and L2-based robustness rankings against adversarial samples. Most differences in Hamming distance lie in the exact places where the accuracy and L2 rankings themselves differ. Notice that the DBI metric does not use anything related to attacks and still arrives at similar rankings. To further demonstrate the correlation between DBI and adversarial attacks, the Pearson correlation of the DBI for each network is shown in Table 4. This table suggests that DBI and adversarial attacks have a statistically significant correlation.\nThe fact that LeNet and other relatively simple networks achieve high representation quality that is at odds with their accuracy may seem extremely unlikely. However, as discussed in (Tsipras et al., 2019), accurate models can trade off robustness for accuracy. 
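A sketch of the leave-one-out data construction described at the start of this section, plus the kind of correlation check summarized in Table 4. The numeric arrays below are hypothetical placeholders for illustration, not the paper's results; note also that after removing class c, the remaining labels would be re-indexed to 0..n-2 before training an (n-1)-way classifier.

```python
import numpy as np
from scipy.stats import pearsonr

def raw_zero_shot_splits(images, labels, n_classes=10):
    """Yield, for each class c: the training data without c and the images of c."""
    for c in range(n_classes):
        keep = labels != c
        # remaining labels still need re-indexing before training the classifier
        yield (images[keep], labels[keep]), images[labels == c]

# hypothetical per-network scores, for illustration only (cf. Table 4)
dbi_scores = np.array([0.8, 1.2, 1.5, 1.9, 2.3])
attack_accuracy = np.array([0.55, 0.70, 0.80, 0.90, 0.95])
r, p = pearsonr(dbi_scores, attack_accuracy)   # correlation and p-value
```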
Returning to this trade-off: DBI suggests that it happens because the representation quality has worsened. Interestingly, LeNet and other simple networks are also easier to attack (low accuracy rank) but need more perturbation to achieve the same attack accuracy (high L2 rank). Therefore, LeNet and other simple networks might be easier to attack because the search space is less complicated (less obfuscation (Athalye et al., 2018)). However, this does not mean they are less robust. Alternatively, as DBI suggests, LeNet and other simple networks might have achieved relatively good representations, but without high accuracy.\nTo enable visualization of this metric, we plotted in Figure 3 a two-dimensional projection of all the points in the decision space of unknown classes, done while preserving the high-dimensional distances between the points. Here we use Isomap (Tenenbaum et al., 2000) to achieve this effect. It can be easily observed that CapsNet's results for unknown classes are more clustered, and thus form a better-defined cluster than those of other architectures." }, { "heading": "4.2 EXPERIMENTS ON AMALGAM METRIC", "text": "In this section, the AM of all the networks is evaluated, which is based on the similarity of soft-labels to those of networks trained on all classes. The results shown in Table 3 reveal almost the same representation ranking as the robustness ranking related to L2. Interestingly, although DBI and AM differ widely in concept and calculation procedure, the rankings are both similar and close to the L2 ranking. This further suggests that both metrics agree on what constitutes good representation quality and can be used to evaluate representations in newer methods. A visualization, as well as the Pearson correlation of the metric, is included in the supplementary material." }, { "heading": "4.3 PARTS OF A WHOLE", "text": "In (Agarwal et al., 2019), adding a loss to force features to be close to the feature centroid was shown to be beneficial against adversarial attacks. This is consistent with the proposed DBI metric, demonstrating that both the soft-label space and other feature spaces benefit from projections to nearby positions. At the same time, we further support the existence of a trade-off between accuracy and robustness for deeper DNNs, first pointed out in (Tsipras et al., 2019). Other types of networks, such as CapsNet, seem to avoid it to some extent. Therefore, the trade-off is shown to vary with the architecture and computational dynamics of a model. Lastly, we hypothesize that a representation bias may hold, in which the learned features are invariant only to the dataset and not to unseen classes." }, { "heading": "5 CONCLUSIONS", "text": "Here we proposed the Raw Zero-Shot method to evaluate the representation of classifiers. In order to score the soft-labels, two metrics were formally defined, each based on a different hypothesis of representation quality. Results suggest that the representation evaluations of both metrics (DBI and AM) are linked with the robustness of neural networks. In other words, easily attacked neural networks have lower representation scores. Interestingly, LeNet scores well in both metrics, albeit being the least accurate. LeNet is followed by AllConv and NIN, which are less complicated/deep than other models, which suggests that deeper architectures might be trading off representation quality for accuracy. 
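As an aside, the two-dimensional projections used in Figure 3 (and in Figure 4 of Appendix B) can be sketched with scikit-learn; the input shape and default neighborhood parameters are assumptions.

```python
import numpy as np
from sklearn.manifold import Isomap, TSNE

def project(soft_labels, method="isomap"):
    """2-D projection of the unknown-class soft-label points."""
    X = np.asarray(soft_labels)
    if method == "isomap":   # distance-preserving projection (cf. Figure 3)
        return Isomap(n_components=2).fit_transform(X)
    return TSNE(n_components=2).fit_transform(X)  # neighbour-focused (cf. Figure 4)
```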
Returning to our conclusions: the results shown here further support the claim that there is a trade-off between accuracy and robustness in current deep learning (Tsipras et al., 2019).\nThus, the proposed Raw Zero-Shot was able to evaluate the representation quality of state-of-the-art DNNs and to show their shortcomings with respect to adversarial attacks, explaining many of the current problems. It also opens up new possibilities for both the evaluation (i.e., as a quality assessment) and the development (e.g., as a loss function) of neural networks." }, { "heading": "A THE POSSIBILITY OF A REPRESENTATION BIAS", "text": "The experiments show that representations may not work well for unknown classes. Many of these classes, however, share similar representations, such as dog and cat, or truck and car. Thus, here we formulate a possible interpretation of the results.\nThe objective of a supervised learning algorithm is, arguably, to map the input to the output in such a way that the learned decision boundary reflects the real one. To achieve this, it is known that when dealing with complex problems, learning algorithms first need to learn a set of invariant features that are present throughout the classes, so that their recognition becomes robust against variations in the dataset.\nHuman beings, however, learn a set of invariant features that is not only able to solve current tasks or recognize current classes: we learn features that can describe most, if not all, unseen classes and unknown tasks. Thus, we define representation bias as the bias towards invariant features that describe the currently seen classes or tasks but fail to describe unknown classes and tasks." }, { "heading": "B EXTENDED ANALYSIS OF DBI METRIC", "text": "Figure 4 shows a visualization of DBI's results with t-Distributed Stochastic Neighbour Embedding (t-SNE) (Maaten & Hinton, 2008). DBI results are visualized using a projection into two dimensions while focusing on neighbour distances. The idea for having this visualisation is to investigate whether
Recall, however, that the ranking based on Amalgam is very similar to the raking obtained with DBI metric.\nD = | h’i − hi |, where\nh’i = ne∑ j=1 e’j and hi = ne∑ j=1 ej (6)\nThe Amalgam Metric showed that both CapsNet and AllConv have the best scores which is in accordance with their top robustness score. Figure 6 shows a visualization of equation 6 which is part of the main equation of Amalgam Metric. On analysing further the phenomena about absolute difference between h’ and h in the Amalgam Metric or D in equation 6. It can be noted from the figure that for most labels of CapsNet and AllConv the difference is relatively low than the other architectures. This contributes to have CapsNet and AllConv to have best scores. Further investigations can be carried out to analyse the effect of a label in adversarial attack based on this figure. This can also provide insight on the labels which are robust to adversarial attacks. A further study can also be carried out to analyse the characteristics of the neural network’s representation which makes a label more robust than other labels." } ]
2019
null
SP:3072194753b9f5af63f5d4c9a06b6d67f39e6b0b
[ "The paper studies the label noise problem with the motivation of without estimating the flip rate or transition matrix. This is an interesting direction for dealing with label noise. Most of the previous studies need either estimate the transition matrix or put restrictions on it, e.g., to be symmetric. A very related work to this paper: L_{DMI}: A Novel Information-theoretic Loss Function for Training Deep Nets Robust to Label Noise, where no restrictions have been made on the class-dependent transition matrix and the proposed method does not need to estimate the transition matrix. The authors may need to discuss the paper.", "This paper studies the problem of learning classifiers from noisy data without specifying the noise rates. Inspired by the literature of peer prediction, the authors propose peer loss. First, a scoring function is introduced, minimizing which we can elicit the Bayes optimal classifier f*. Then the authors use the setting of CA to induces a scoring matrix, and then the peer loss. Moreover, this paper explores the theoretical properties of peer loss when p=0.5. In particular, the authors propose \\alpha weighted peer loss to provide strong theoretical guarantees of the proposed ERM framework. The calibration and generalization abilities are also discussed in section 4.3. Finally, empirical studies show that the propose peer loss indeed remedies the difficulty of determining the noise rates in noisy label learning." ]
Learning with noisy labels is a common problem in supervised learning. Existing approaches require practitioners to specify noise rates, i.e., a set of parameters controlling the severity of label noise in the problem. The specifications are either assumed to be given or estimated using additional approaches. In this work, we introduce a technique to learn from noisy labels that does not require a priori specification of the noise rates. In particular, we introduce a new family of loss functions that we name peer loss functions. Our approach then uses a standard empirical risk minimization (ERM) framework with peer loss functions. Peer loss functions associate each training sample with a certain form of "peer" samples, which evaluate a classifier's predictions jointly. We show that, under mild conditions, performing ERM with peer loss functions on the noisy dataset leads to the optimal or a near-optimal classifier, as if performing ERM over the clean training data, which we do not have access to. To the best of our knowledge, this is the first result on "learning with noisy labels without knowing noise rates" with theoretical guarantees. We pair our results with an extensive set of experiments, where we compare with state-of-the-art techniques for learning with noisy labels. Our results show that the peer loss function based method consistently outperforms the baseline benchmarks, as well as some recent new methods. Peer loss provides a way to simplify model development when facing potentially noisy training labels, and can be promoted as a robust candidate loss function in such situations.
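Based on the abstract's description (and the α-weighting mentioned in the review summary above), one plausible instantiation of a peer loss can be sketched as follows. The exact formula is not given here, so the random pairing scheme and the placement of the hypothetical alpha weight are assumptions.

```python
import torch
import torch.nn.functional as F

def peer_loss(model, x, y, alpha=1.0):
    """Sketch: standard loss on each (x_i, y_i) minus the loss obtained when a
    randomly drawn peer input is scored against an independently drawn peer
    label, discouraging blind agreement with (possibly noisy) labels."""
    n = x.size(0)
    i1 = torch.randint(n, (n,))          # peer inputs, sampled independently
    i2 = torch.randint(n, (n,))          # peer labels, sampled independently
    base = F.cross_entropy(model(x), y)
    peer = F.cross_entropy(model(x[i1]), y[i2])
    return base - alpha * peer           # alpha is a hypothetical weight
```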
[ { "affiliations": [], "name": "NOISY LA" }, { "affiliations": [], "name": "BELS WITHOUT" }, { "affiliations": [], "name": "KNOWING NOISE RATES" } ]
[ { "authors": [ "Peter L Bartlett", "Michael I Jordan", "Jon D McAuliffe" ], "title": "Convexity, classification, and risk bounds", "venue": "Journal of the American Statistical Association,", "year": 2006 }, { "authors": [ "Shai Ben-David", "Dávid Pál", "Shai Shalev-Shwartz" ], "title": "Agnostic online learning", "venue": "COLT", "year": 2009 }, { "authors": [ "Tom Bylander" ], "title": "Learning linear threshold functions in the presence of classification noise", "venue": "In Proceedings of the seventh annual conference on Computational learning theory,", "year": 1994 }, { "authors": [ "Nicolo Cesa-Bianchi", "Eli Dichterman", "Paul Fischer", "Eli Shamir", "Hans Ulrich Simon" ], "title": "Sampleefficient strategies for learning in the presence of noise", "venue": "Journal of the ACM (JACM),", "year": 1999 }, { "authors": [ "Nicolo Cesa-Bianchi", "Shai Shalev-Shwartz", "Ohad Shamir" ], "title": "Online learning of noisy data", "venue": "IEEE Transactions on Information Theory,", "year": 2011 }, { "authors": [ "Nontawat Charoenphakdee", "Jongyeong Lee", "Masashi Sugiyama" ], "title": "On symmetric losses for learning from corrupted labels", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Anirban Dasgupta", "Arpita Ghosh" ], "title": "Crowdsourced judgement elicitation with endogenous proficiency", "venue": "In Proceedings of the 22nd international conference on World Wide Web, pp. 319–330. International World Wide Web Conferences Steering Committee,", "year": 2013 }, { "authors": [ "Marthinus Christoffel Du Plessis", "Gang Niu", "Masashi Sugiyama" ], "title": "Clustering unclustered data: Unsupervised binary labeling of two datasets having different class balances", "venue": "In 2013 Conference on Technologies and Applications of Artificial Intelligence,", "year": 2013 }, { "authors": [ "Benoı̂t Frénay", "Michel Verleysen" ], "title": "Classification in the presence of label noise: a survey", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2014 }, { "authors": [ "Aritra Ghosh", "Naresh Manwani", "PS Sastry" ], "title": "Making risk minimization tolerant to label", "venue": "noise. Neurocomputing,", "year": 2015 }, { "authors": [ "Aritra Ghosh", "Himanshu Kumar", "PS Sastry" ], "title": "Robust loss functions under label noise for deep neural networks", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Tilmann Gneiting", "Adrian E. Raftery" ], "title": "Strictly proper scoring rules, prediction, and estimation", "venue": "Journal of the American Statistical Association,", "year": 2007 }, { "authors": [ "Bo Han", "Quanming Yao", "Xingrui Yu", "Gang Niu", "Miao Xu", "Weihua Hu", "Ivor Tsang", "Masashi Sugiyama" ], "title": "Co-teaching: Robust training of deep neural networks with extremely noisy labels", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Roni Khardon", "Gabriel Wachman" ], "title": "Noise tolerant variants of the perceptron algorithm", "venue": "J. Mach. Learn. 
Res.,", "year": 2007 }, { "authors": [ "Yuqing Kong", "Grant Schoenebeck" ], "title": "Water from two rocks: Maximizing the mutual information", "venue": "In Proceedings of the 2018 ACM Conference on Economics and Computation,", "year": 2018 }, { "authors": [ "Bing Liu", "Yang Dai", "Xiaoli Li", "Wee Sun Lee", "Philip S. Yu" ], "title": "Building text classifiers using positive and unlabeled examples", "venue": "In Proceedings of the Third IEEE International Conference on Data Mining,", "year": 2003 }, { "authors": [ "Tongliang Liu", "Dacheng Tao" ], "title": "Classification with noisy labels by importance reweighting", "venue": "IEEE Transactions on pattern analysis and machine intelligence,", "year": 2016 }, { "authors": [ "Yang Liu", "Yiling Chen" ], "title": "Machine Learning aided Peer Prediction", "venue": "ACM EC,", "year": 2017 }, { "authors": [ "Nan Lu", "Gang Niu", "Aditya K Menon", "Masashi Sugiyama" ], "title": "On the minimal supervision for training any binary classifier from only unlabeled data", "venue": "arXiv preprint arXiv:1808.10585,", "year": 2018 }, { "authors": [ "Naresh Manwani", "PS Sastry" ], "title": "Noise tolerance under risk minimization", "venue": "IEEE transactions on cybernetics,", "year": 2013 }, { "authors": [ "Aditya Menon", "Brendan Van Rooyen", "Cheng Soon Ong", "Bob Williamson" ], "title": "Learning from corrupted binary labels via class-probability estimation", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Nolan Miller", "Paul Resnick", "Richard Zeckhauser" ], "title": "Eliciting informative feedback: The peerprediction method", "venue": "Management Science,", "year": 2005 }, { "authors": [ "Nagarajan Natarajan", "Inderjit S Dhillon", "Pradeep K Ravikumar", "Ambuj Tewari" ], "title": "Learning with noisy labels", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Giorgio Patrini", "Alessandro Rozza", "Aditya Krishna Menon", "Richard Nock", "Lizhen Qu" ], "title": "Making deep neural networks robust to label noise: A loss correction approach", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "D. Prelec" ], "title": "A bayesian truth serum for subjective data", "venue": "Science, 306(5695):462–466,", "year": 2004 }, { "authors": [ "G. Radanovic", "B. Faltings" ], "title": "A robust bayesian truth serum for non-binary signals", "venue": "In Proceedings of the 27th AAAI Conference on Artificial Intelligence,", "year": 2013 }, { "authors": [ "Goran Radanovic", "Boi Faltings", "Radu Jurca" ], "title": "Incentives for effort in crowdsourcing using the peer truth serum", "venue": "ACM Transactions on Intelligent Systems and Technology (TIST),", "year": 2016 }, { "authors": [ "Clayton Scott" ], "title": "A rate of convergence for mixture proportion estimation, with application to learning from noisy labels", "venue": "In AISTATS,", "year": 2015 }, { "authors": [ "Clayton Scott", "Gilles Blanchard", "Gregory Handy", "Sara Pozzi", "Marek Flaska" ], "title": "Classification with asymmetric label noise: Consistency and maximal denoising", "venue": "In COLT, pp", "year": 2013 }, { "authors": [ "V. Shnayder", "A. Agarwal", "R. Frongillo", "D.C. 
Parkes" ], "title": "Informed Truthfulness in Multi-Task Peer Prediction", "venue": "ACM EC,", "year": 2016 }, { "authors": [ "Victor Shnayder", "Arpit Agarwal", "Rafael Frongillo", "David C Parkes" ], "title": "Informed truthfulness in multi-task peer prediction", "venue": "In Proceedings of the 2016 ACM Conference on Economics and Computation,", "year": 2016 }, { "authors": [ "Hwanjun Song", "Minseok Kim", "Jae-Gil Lee" ], "title": "Selfie: Refurbishing unclean samples for robust deep learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Guillaume Stempfel", "Liva Ralaivola" ], "title": "Learning svms from sloppily labeled data", "venue": "In International Conference on Artificial Neural Networks,", "year": 2009 }, { "authors": [ "Sainbayar Sukhbaatar", "Rob Fergus" ], "title": "Learning from noisy labels with deep neural networks", "venue": "arXiv preprint arXiv:1406.2080,", "year": 2014 }, { "authors": [ "Brendan Van Rooyen", "Aditya Menon", "Robert C Williamson" ], "title": "Learning with symmetric label noise: The importance of being unhinged", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Brendan van Rooyen", "Aditya Krishna Menon", "Robert C Williamson" ], "title": "An average classification algorithm", "venue": "arXiv preprint arXiv:1506.01520,", "year": 2015 }, { "authors": [ "J. Witkowski", "D. Parkes" ], "title": "A robust bayesian truth serum for small populations", "venue": "In Proceedings of the 26th AAAI Conference on Artificial Intelligence,", "year": 2012 }, { "authors": [ "Jens Witkowski", "Yoram Bachrach", "Peter Key", "David C. Parkes" ], "title": "Dwelling on the Negative: Incentivizing Effort in Peer Prediction", "venue": "In Proceedings of the 1st AAAI Conference on Human Computation and Crowdsourcing (HCOMP’13),", "year": 2013 }, { "authors": [ "Tong Xiao", "Tian Xia", "Yi Yang", "Chang Huang", "Xiaogang Wang" ], "title": "Learning from massive noisy labeled data for image classification", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Yilun Xu", "Peng Cao", "Yuqing Kong", "Yizhou Wang. L" ], "title": "dmi: An information-theoretic noise-robust loss function", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Zhilu Zhang", "Mert R. Sabuncu" ], "title": "Generalized cross entropy loss for training deep neural networks with noisy labels, 2018", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The quality of supervised learning models depends on the training data {(xn, yn)}Nn=1. In practice, label noise can arise due to a host of reasons. For instance, the observed labels ỹns may represent human observations of a ground truth label. In this case, human annotators may observe the label imperfectly due to differing degrees of expertise or measurement error, see e.g., medical examples such as labeling MRI images from patients. Many prior approaches to this problem in the machine learning literature aim to develop algorithms to learn models that are robust to label noise (Bylander, 1994; Cesa-Bianchi et al., 1999; 2011; Ben-David et al.; Scott et al., 2013; Natarajan et al., 2013; Scott, 2015). Typical approaches require a priori knowledge of noise rates, i.e., a set of parameters that control the severity of label noise. Working with unknown noise rates is difficult in practice: Often, one must estimate the noise rates from data, which may require additional data collection (Natarajan et al., 2013; Scott, 2015; Van Rooyen et al., 2015) (e.g., be a redundant set of noisy labels for each sample point, or a set of ground truth labels for tuning these parameters) and may introduce estimation error that can affect the final model in less predictable ways. Our main goal is to provide an alternative that does not require the specification of the noise rates, nor an additional estimation step for the noises. This target solution might help when the practitioner does not have access to reliable estimates of the noise rates (e.g., when the training data has limited size for the estimation tasks, or when the training data is already collected in a form that makes the estimation hard to perform).\nIn this paper, we introduce a new family of loss functions, peer loss functions, to empirical risk minimization (ERM), for a broad class of learning with noisy labels problems. Peer loss functions operate under different noise rates without requiring either a priori knowledge of the embedded noise rates, or an estimation procedure. This family of loss functions builds on approaches developed in the peer prediction literature (Miller et al., 2005; Dasgupta & Ghosh, 2013; Shnayder et al., 2016),\nwhich studies how to elicit information from self-interested agents without verification. Typical approaches in the peer prediction literature design scoring functions to score each reported data using another noisy reference answer, without accessing ground truth information. We borrow this idea and the associated scoring functions via making a connection through treating each classifier’s prediction as an agent’s private information to be elicited and evaluated, and the noisy label as an imperfect reference from a “noisy label agent”. The peer loss takes a form of evaluating classifiers’ prediction using noisy labels on both the targeted samples and a particular form of constructed “peer” samples. The evaluation on the constructed peer sample encodes implicitly the information about the noises as well as the underlying true labels, which helps us offset the effects of label noises. The peer sample evaluation returns us a favorable property that expected risk of peer loss turns to be an affine transformation of the true risk of the classifier defined on the clean distribution. In other words, peer loss is invariant to label noises when optimizing with it. This effect helps us get rid of the estimation of noise rates.\nThe main contributions of this work are:\n1. 
We propose a new family of loss functions that can easily be adapted to the existing ERM framework and that i) is robust to asymmetric label noise with formal theoretical guarantees and ii) requires no prior knowledge or estimation of the noise rates (no need for specifying noise rates). We believe the second feature is non-trivial progress and offers a promising solution to deploy in an unknown noisy training environment.

2. We present formal results showing that performing ERM with a peer loss function can recover an optimal, or a near-optimal, classifier f∗ as if performing ERM on the clean data (Theorems 2, 3, 4). We also provide an analysis of peer loss functions' risk guarantees (Theorems 5 and 7).

3. We present extensive experimental results to validate the usefulness of peer loss (Section 5 and Appendix). This result is encouraging, as it removes the long-standing requirement of learning the error rates of noise (or estimating transition matrices, as used in many related papers) before many of the existing methods can be applied. We also provide preliminary results on how peer loss generalizes to multi-class classification problems.

4. We will contribute to the community by publishing our code and implementation." }, { "heading": "1.1 RELATED WORK", "text": "Learning from Noisy Labels Our work fits within a stream of research on learning with noisy labels. A large stream of research on this topic works with the random classification noise (RCN) model, where observed labels are flipped independently with probability $e \in [0, \frac{1}{2}]$ (Bylander, 1994; Cesa-Bianchi et al., 1999; 2011; Ben-David et al., 2009). Recently, learning with asymmetric noisy data (also referred to as class-conditional random classification noise (CCN)) for binary classification problems has been rigorously studied in (Stempfel & Ralaivola, 2009; Scott et al., 2013; Natarajan et al., 2013; Scott, 2015; Van Rooyen et al., 2015; Menon et al., 2015). For a more thorough survey of classical results on learning with noisy data, please refer to (Frénay & Verleysen, 2014).

Symmetric loss For RCN, where the noise parameters are symmetric, there exist works showing that symmetric loss functions (Manwani & Sastry, 2013; Ghosh et al., 2015; 2017; Van Rooyen et al., 2015) are robust to the underlying noise without specifying the noise rates. It was also shown that, under certain conditions, the proposed loss functions are able to handle asymmetric noise. Our focus departs from this line of work: we will exclusively focus on the asymmetric noise setting and study the possibility of an approach that can ignore the knowledge of noise rates.

Follow-up works (Du Plessis et al., 2013; van Rooyen et al., 2015; Menon et al., 2015; Charoenphakdee et al., 2019) have looked into leveraging symmetric conditions and the 0-1 loss with asymmetric noise, and with more evaluation metrics, such as the balanced error rate and AUROC. In particular, experimental evidence on the importance of symmetricity when learning with noisy labels is reported in (Charoenphakdee et al., 2019).

More recent works More recent developments include an importance re-weighting algorithm (Liu & Tao, 2016), noisy deep neural network learning settings (Sukhbaatar & Fergus, 2014; Han et al., 2018; Song et al., 2019), learning from massive noisy data for image classification (Xiao et al., 2015), a robust cross entropy loss for neural networks (Zhang & Sabuncu, 2018), and loss correction (Patrini et al., 2017), among many others.
Loss or sample correction has also been studied in the context of learning with unlabeled data with weak supervisions (Lu et al., 2018). Most of above works either lacks theoretical guarantee of the proposed method against asymmetric noise rates ((Sukhbaatar & Fergus, 2014; Zhang & Sabuncu, 2018)), or requires estimating the noise rate (or transition matrix between noisy and true labels, (Liu & Tao, 2016; Xiao et al., 2015; Patrini et al., 2017; Lu et al., 2018)). A good number of the recent works can be viewed as derivatives or extention of the unbiased surrogate loss function idea introduced in (Natarajan et al., 2013), therefore they would naturally require the knowledge of the noise rates or transition matrix. We do provide thorough comparisons between peer loss and the unbiased surrogate loss methods.\nMostly relevant to us is a recent work (Xu et al., 2019) that proposes an information theoretical loss (an idea adapted from an earlier theoretical contribution (Kong & Schoenebeck, 2018)) that is also robust to asymmetric noises rate. We aimed for a simple-to-optimize loss function that can easily adapt to existing ERM solutions. (Xu et al., 2019) involves estimating a joint distribution matrix between classifiers and noisy labels, and then invokes computing a certain information theoretical measure based on this matrix. Therefore, its sample complexity requirement and the sensitivity to noises in this estimation are not entirely clear to us (not provided in the paper either). We do provide calibration guarantees and generalization bounds. We provide conditions when the loss functions are convex. In general, we do think computationally peer loss functions are easy to optimize with, in comparing to information theoretical measures. Experiments comparing with (Xu et al., 2019) are also given in Section 5.\nPeer Prediction Our work also builds on the literature for peer prediction (Prelec, 2004; Miller et al., 2005; Witkowski & Parkes, 2012; Radanovic & Faltings, 2013; Witkowski et al., 2013; Dasgupta & Ghosh, 2013; Shnayder et al., 2016; Liu & Chen, 2017). (Miller et al., 2005) established that strictly proper scoring rule (Gneiting & Raftery, 2007) could be adopted to elicit truthful reports from self-interested agents. Follow-up works that have been done to relax the assumptions imposed (Witkowski & Parkes, 2012; Radanovic & Faltings, 2013; Witkowski et al., 2013; Radanovic et al., 2016; Liu & Chen, 2017). Most relevant to us is (Dasgupta & Ghosh, 2013; Shnayder et al., 2016) where a correlated agreement (CA) type of mechanism was proposed. CA evaluates a report’s correlations with another reference agent - its specific form inspired our peer loss." }, { "heading": "2 PRELIMINARIES", "text": "Notations and preliminaries: For positive integer n, denote by [n] := {1, 2, ..., n}. Suppose (X,Y ) ∈ X × Y are drawn from a joint distribution D, with their marginal distributions denoted as PX ,PY respectively. We assume X ⊆ Rd, and Y = {−1,+1}, that is we consider a binary classification problem. Denote by p := P(Y = +1) ∈ (0, 1). There are N training samples (x1, y1), ..., (xN , yN ) drawn i.i.d. from D. Instead of observing yns, the learner can only collect a noisy set of training labels ỹns, generated according to yns and a certain error rate model, that is we observe a dataset {(xn, ỹn)}Nn=1. 
" }, { "heading": "2.1 LEARNING WITH NOISY LABELS", "text": "Typical methods for learning with noisy labels include developing bias-removing surrogate loss functions to learn from noisy data (Natarajan et al., 2013). For instance, Natarajan et al. (2013) tackle this problem by defining an “unbiased” surrogate loss function over $\ell$ to help “remove” the noise, when $e_{-1} + e_{+1} < 1$: $\tilde{\ell}(t, y) := \frac{(1 - e_{-y}) \cdot \ell(t, y) - e_{y} \cdot \ell(t, -y)}{1 - e_{-1} - e_{+1}}, \forall t, y$. $\tilde{\ell}$ is identified such that when a prediction is evaluated against a noisy label using this surrogate loss function, the prediction is, in expectation, as if it were evaluated against the ground-truth label using $\ell$. Hence the loss of the prediction is “unbiased”, that is, for all predictions $t$, $\mathbb{E}_{\tilde{Y}|y}[\tilde{\ell}(t, \tilde{Y})] = \ell(t, y)$ [Lemma 1, (Natarajan et al., 2013)].

One important note to make is that most, if not all, existing solutions require knowledge of the error rates $e_{-1}, e_{+1}$. Previous works either assumed this knowledge or needed additional clean labels or redundant noisy labels to estimate them. This has become the bottleneck of applying these otherwise powerful techniques in practice. Our work is also motivated by the desire to remove this limitation.
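As a concrete illustration of the surrogate construction above, here is a minimal sketch with the logistic loss standing in for $\ell$ (the choice of logistic loss is our assumption). Note that it takes the true noise rates as inputs, which is precisely the requirement peer loss removes.

```python
import numpy as np

def logistic_loss(t, y):
    # l(t, y) = log(1 + exp(-y t)); log1p for numerical stability
    return np.log1p(np.exp(-y * t))

def unbiased_surrogate_loss(t, y, e_plus, e_minus):
    """Bias-removing surrogate of Natarajan et al. (2013):
    E over the noisy label given y of l~(t, noisy) equals l(t, y)."""
    e_y = np.where(y == 1, e_plus, e_minus)        # e_y
    e_neg_y = np.where(y == 1, e_minus, e_plus)    # e_{-y}
    return ((1 - e_neg_y) * logistic_loss(t, y)
            - e_y * logistic_loss(t, -y)) / (1 - e_plus - e_minus)

# sanity check of unbiasedness at y = +1
t, e_p, e_m = 0.7, 0.4, 0.3
expected = ((1 - e_p) * unbiased_surrogate_loss(t, np.array(1), e_p, e_m)
            + e_p * unbiased_surrogate_loss(t, np.array(-1), e_p, e_m))
print(np.isclose(expected, logistic_loss(t, 1)))   # True
```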
" }, { "heading": "2.2 PEER PREDICTION: INFORMATION ELICITATION WITHOUT VERIFICATION", "text": "Peer prediction is a technique developed to truthfully elicit information when there is no ground truth verification. Suppose we are interested in eliciting private observations about a binary event $y \in \{-1, +1\}$ generated according to a random variable $Y$. There are $K$ agents indexed by $[K]$. Each of them holds a noisy observation of $y$, denoted as $y(i) \in \{-1, +1\}, i \in [K]$. We would like to elicit the $y(i)$'s, but they are completely private and we will not observe $y$ to evaluate the agents' reports. Denote by $r(i)$ the reported data from agent $i$. It is entirely possible that $r(i) \neq y(i)$ if agents are not compensated properly for their information. Results in peer prediction have proposed scoring or reward functions that evaluate an agent's report using the reports of other peer agents. For example, a peer prediction mechanism may reward agent $i$ for her report $r(i)$ using $S(r(i), r(j))$, where $r(j)$ is the report of a randomly selected reference agent $j \in [K] \setminus \{i\}$. The scoring function $S$ is designed so that truth-telling is a strict Bayesian Nash Equilibrium (implying other agents truthfully report their $y(j)$), that is, $\forall i$: $\mathbb{E}_{y(j)}[S(y(i), y(j)) | y(i)] > \mathbb{E}_{y(j)}[S(r(i), y(j)) | y(i)], \forall r(i) \neq y(i)$.

Correlated Agreement (Shnayder et al., 2016; Dasgupta & Ghosh, 2013) (CA) is a recently established peer prediction mechanism for a multi-task setting; we provide other examples of peer prediction functions in the Appendix. CA is also the core and the focus of our subsequent sections on developing peer prediction based loss functions. This mechanism builds on a $\Delta$ matrix that captures the stochastic correlation between the two sources of predictions $y(i)$ and $y(j)$. Denote the mapping function $g(1) = -1, g(2) = +1$; $\Delta \in \mathbb{R}^{2 \times 2}$ is then defined as a square matrix with entries $\Delta(k, l) = \mathbb{P}(y(i) = g(k), y(j) = g(l)) - \mathbb{P}(y(i) = g(k)) \cdot \mathbb{P}(y(j) = g(l)), k, l = 1, 2$. The intuition is that each $(k, l)$ entry of $\Delta$ captures the marginal correlation between the two predictions. $M \in \mathbb{R}^{2 \times 2}$ is defined as the sign matrix of $\Delta$: $M := \text{Sgn}(\Delta)$, where $\text{Sgn}(x) = 1$ if $x > 0$ and $\text{Sgn}(x) = 0$ otherwise. Define the following score matrix $M_S : \{-1, +1\} \times \{-1, +1\} \rightarrow \{0, 1\}$: $M_S(y, y') := M(g^{-1}(y), g^{-1}(y'))$, (1) where $g^{-1}$ is the inverse of $g$. CA requires each agent $i$ to perform multiple tasks: denote agent $i$'s observations for the $N$ tasks as $y_1(i), ..., y_N(i)$. Ultimately, the scoring function $S(\cdot)$ for each task $k$ that is shared between $i, j$ is defined as follows: randomly draw two other tasks $k^p_1 \neq k^p_2 \neq k$, and set $S(y_k(i), y_k(j)) := M_S(y_k(i), y_k(j)) - M_S(y_{k^p_1}(i), y_{k^p_2}(j))$. A key difference between the first and second $M_S$ terms is that the second term is defined on two independent peer tasks $k^p_1, k^p_2$ (as the reference answers). It was established in (Shnayder et al., 2016) that CA is truthful and proper (Theorem 5.2, Shnayder et al. (2016)); to be precise, it is an informed truthfulness, and we refer interested readers to (Shnayder et al., 2016) for the detailed differences. In particular, if $y(j)$ is categorical w.r.t. $y(i)$, i.e., $\mathbb{P}(y(j) = y' | y(i) = y) < \mathbb{P}(y(j) = y'), \forall i, j \in [K], y' \neq y$, then $S(\cdot)$ is strictly truthful (Theorem 4.4, Shnayder et al. (2016)).
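A small simulation may help unpack the CA scoring rule above. The sketch below is our illustration, with $M_S$ taken to be the agreement indicator (i.e., $\text{Sgn}(\Delta) = I$) and made-up agent accuracies; it scores one agent's binary reports against a reference agent over shared tasks.

```python
import numpy as np

def ca_score(reports_i, reports_j, k, rng):
    """CA score for agent i on task k: agreement on the shared task minus
    agreement on two randomly drawn, independent peer tasks k1 != k2 != k."""
    n_tasks = len(reports_i)
    candidates = [t for t in range(n_tasks) if t != k]
    k1, k2 = rng.choice(candidates, size=2, replace=False)
    agree = lambda a, b: float(a == b)        # M_S(a, b) when Sgn(Delta) = I
    return agree(reports_i[k], reports_j[k]) - agree(reports_i[k1], reports_j[k2])

rng = np.random.default_rng(0)
truth = np.where(rng.random(500) < 0.5, 1, -1)
agent_i = np.where(rng.random(500) < 0.8, truth, -truth)  # 80%-accurate reports
agent_j = np.where(rng.random(500) < 0.7, truth, -truth)  # noisier reference agent
scores = [ca_score(agent_i, agent_j, k, rng) for k in range(500)]
print("mean CA score:", np.mean(scores))  # > 0: informative reports are rewarded
```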
" }, { "heading": "3 LEARNING WITH NOISY DATA: A PEER PREDICTION APPROACH", "text": "In this section, we show that peer prediction scoring functions, when specified properly, adopt the Bayes optimal classifier as their maximizer (or minimizer, for the corresponding loss form)." }, { "heading": "3.1 LEARNING WITH NOISY DATA AS AN ELICITATION PROBLEM", "text": "We first state our problem of learning with noisy labels as a peer prediction problem. The connection is made by first rephrasing the two data sources, the classifier and the noisy labels, from the agents' perspective. For a task $y \in \{-1, +1\}$, say $+1$ for example, denote the noisy labels $\tilde{Y}$ as $r(X), X \sim \mathbb{P}_{X|Y=1}$. In general, $r(X)$ can be interpreted as the agent that observes $\tilde{y}_1, ..., \tilde{y}_N$ for a set of randomly drawn feature vectors $x_1, ..., x_N$: $\tilde{y}_n \sim r(X)$. Suppose the agent's observations are defined as follows (similar to the definition of $e_{+1}, e_{-1}$): $\mathbb{P}(r(X) = -1 | Y = +1) = e_{+1}$, $\mathbb{P}(r(X) = +1 | Y = -1) = e_{-1}$. Denote another agent whose observations “mimic” the Bayes optimal classifier $f^*$, and denote this optimal classifier agent as $r^*(X) := f^*(X)$: $\mathbb{P}_X(r^*(X) = -1 | Y = +1) = e^*_{+1}$, $\mathbb{P}_X(r^*(X) = +1 | Y = -1) = e^*_{-1}$. Suppose we would like to elicit predictions from the optimal classifier agent $r^*$, while the reports from the noisy label agent $r$ serve as the reference reports. Both $r$ and $r^*$ are randomly assigned a task $x$, and each of them observes a signal $r(x)$ and $r^*(x)$, respectively. Denote the report from agent $r^*$ as $\tilde{r}^*$. A scoring function $S : \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$ is said to induce strict truthfulness if the following holds: $\mathbb{E}_X[S(r^*(X), r(X))] > \mathbb{E}_X[S(\tilde{r}^*, r(X))], \forall \tilde{r}^* \neq r^*(X)$. Taking the negative of $S(\cdot)$ (changing a reward score one aims to maximize into a loss to minimize), we also have $\mathbb{E}_X[-S(r^*(X), r(X))] < \mathbb{E}_X[-S(\tilde{r}^*, r(X))], \forall \tilde{r}^* \neq r^*(X)$, implying that when taking $-S(\cdot)$ as the loss function, minimizing $-S(\cdot)$ will return the Bayes optimal classifier $f^*$. Our idea can be summarized using Fig. 1." }, { "heading": "3.2 “PROPER” PEER PREDICTION FUNCTION INDUCED BAYES OPTIMAL CLASSIFIER", "text": "When there is no ambiguity, we shorthand $r(X), r^*(X)$ as $r, r^*$, keeping in mind that $r, r^*$ encode the randomness in $X$. Suppose $S(\cdot)$ is able to elicit the Bayes optimal classifier $f^*$ (agent $r^*$) using $r$; then we have the following theorem:

Theorem 1. $f^* = \arg\min_f \mathbb{E}_{(X, \tilde{Y}) \sim \tilde{\mathcal{D}}}[-S(f(X), r)]$.

The proof proceeds by showing that any non-Bayes-optimal classifier corresponds to a misreporting strategy, thus establishing its non-optimality. We emphasize that it is not overly restrictive to have a strictly truthful peer prediction scoring function $S$; we provide discussions in the Appendix.

Theorem 1 provides a conceptual connection and can serve as an anchor point when connecting a peer prediction scoring function to the problem of learning with noisy labels. So far we have not discussed a specific form of loss function constructed using ideas from peer prediction, and we have not mentioned the requirement of knowing the noise rates. We provide the details of a particular peer loss in the next section and explain its independence of the noise rates.
" }, { "heading": "4 PEER LOSS FUNCTION", "text": "We now present peer loss, a family of loss functions inspired by a particular peer prediction mechanism, the correlated agreement (CA) mechanism presented in Section 2.2. We are going to show that peer loss is able to induce the minimizer of a concept class $\mathcal{F}$ under a broad set of non-restrictive conditions. In this section, we do not restrict attention to Bayes optimal classifiers, nor do we impose any restrictions on the loss functions' elicitation power." }, { "heading": "4.1 PREPARATION: EXPLAINING CA IN OUR CLASSIFICATION PROBLEM", "text": "To give a gentle start, we restate the setting of CA for our classification problem.

$\Delta$ and scoring matrix First recall that $\Delta \in \mathbb{R}^{2 \times 2}$ is a square matrix with entries defined between $r^*$ (the $f^*$) and $r$ (i.e., the noisy labels $\tilde{Y}$): $\Delta(k, l) = \mathbb{P}(f^*(X) = g(k), \tilde{Y} = g(l)) - \mathbb{P}(f^*(X) = g(k)) \cdot \mathbb{P}(\tilde{Y} = g(l)), k, l = 1, 2$. Recall that $g(\cdot)$ is simply the mapping function $g(1) = -1, g(2) = +1$. $\Delta$ characterizes the “marginal” correlations between the optimal classifier's predictions and the noisy label $\tilde{Y}$. Then the scoring matrix $M$, the sign matrix of $\Delta$, $M := \text{Sgn}(\Delta)$, is computed.

Example 1. Consider a binary class label case: $\mathbb{P}(Y = -1) = 0.4, \mathbb{P}(Y = +1) = 0.6$; the noise rates in the labels are $e_{-1} = 0.3, e_{+1} = 0.4$ and $e^*_{-1} = 0.2, e^*_{+1} = 0.3$. Then we have $\Delta(1, 1) = 0.036$, $\Delta(1, 2) = -0.036$, $\Delta(2, 1) = -0.036$, $\Delta(2, 2) = 0.036$, and:

$$\Delta = \begin{bmatrix} 0.036 & -0.036 \\ -0.036 & 0.036 \end{bmatrix} \Rightarrow M = \text{Sgn}(\Delta) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$
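Example 1's entries can be checked mechanically. The short sketch below (ours) enumerates the joint distribution of $(r^*, \tilde{Y})$ using their conditional independence given $Y$ and reproduces the $\Delta$ and $M$ above.

```python
import numpy as np

p = 0.6                      # P(Y = +1), from Example 1
e = {1: 0.4, -1: 0.3}        # noise rates e_{+1}, e_{-1} of the noisy labels
e_star = {1: 0.3, -1: 0.2}   # error rates of the optimal-classifier agent r*

def cond(rates, y, out):
    # P(report = out | Y = y) under class-conditional flipping
    return rates[y] if out != y else 1 - rates[y]

labels = [-1, 1]
prior = {1: p, -1: 1 - p}
# joint P(r* = a, Y~ = b): r* and Y~ are conditionally independent given Y
joint = {(a, b): sum(prior[y] * cond(e_star, y, a) * cond(e, y, b)
                     for y in labels)
         for a in labels for b in labels}
m_star = {a: sum(joint[(a, b)] for b in labels) for a in labels}   # P(r* = a)
m_noisy = {b: sum(joint[(a, b)] for a in labels) for b in labels}  # P(Y~ = b)

delta = np.array([[joint[(a, b)] - m_star[a] * m_noisy[b]
                   for b in labels] for a in labels])
print(np.round(delta, 3))          # [[ 0.036 -0.036] [-0.036  0.036]]
print((delta > 0).astype(int))     # Sgn(Delta) = identity, as in Lemma 2
```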
Peer samples For each sample $(x_i, \tilde{y}_i)$, randomly draw two other samples $(x_{i^p_1}, \tilde{y}_{i^p_1}), (x_{i^p_2}, \tilde{y}_{i^p_2})$ such that $i^p_1 \neq i^p_2$ and $i^p_1, i^p_2 \neq i$. We name $(x_{i^p_1}, \tilde{y}_{i^p_1}), (x_{i^p_2}, \tilde{y}_{i^p_2})$ sample $i$'s peer samples. After pairing $x_{i^p_1}$ with $\tilde{y}_{i^p_2}$ (two independent tasks), the scoring function $S(\cdot)$ for each sample point $x_i$ is defined as $S(f(x_i), \tilde{y}_i) = M_S(f(x_i), \tilde{y}_i) - M_S(f(x_{i^p_1}), \tilde{y}_{i^p_2})$. Recall that $M_S(\cdot)$ is the sign score matrix defined for $\Delta$ (Eqn. (1)). Define the loss function $\tilde{\ell}(\cdot)$ as the negative of $S(\cdot)$:

(Generic Peer Loss) $\tilde{\ell}(f(x_i), \tilde{y}_i) := (1 - M_S(f(x_i), \tilde{y}_i)) - (1 - M_S(f(x_{i^p_1}), \tilde{y}_{i^p_2}))$. (2)

The first term above evaluates the classifier's prediction on $x_i$ using the noisy label $\tilde{y}_i$, while the second “peer” term, defined on the two independent tasks $i^p_1, i^p_2$, “punishes” the classifier for overly agreeing with the noisy labels. We will see this effect more clearly below. According to Theorem 1, minimizing $\tilde{\ell}(\cdot)$ will find the Bayes optimal classifier if $\tilde{Y}$ and $f^*$ are categorical, which is easily satisfied:

Lemma 1. When $e_{-1} + e_{+1} < 1$ and $e^*_{-1} + e^*_{+1} < 1$, $r$ and $r^*$ ($\tilde{Y}$ and $f^*$) are categorical.

$e^*_{-1} + e^*_{+1} < 1$ means that the optimal classifier is at least informative (Liu & Chen, 2017); if this does not hold, we can flip the classifier's output to obtain one that is." }, { "heading": "4.2 PEER LOSS", "text": "We need to know $\text{Sgn}(\Delta)$ in order to specify $M_S$ and $\tilde{\ell}$, which requires certain information about $f^*$ and $\tilde{Y}$. We show that for the cases the literature is broadly interested in, $\text{Sgn}(\Delta)$ is simply the identity matrix (under the same condition as stated in Lemma 1):

Lemma 2. If $e_{-1} + e_{+1} < 1$ and $e^*_{-1} + e^*_{+1} < 1$, then $\text{Sgn}(\Delta) = I_{2 \times 2}$, i.e., the identity matrix.

This states that for the diagonal entries $\Delta(k, k), k = 1, 2$, $f^*$ and $\tilde{Y}$ are positively correlated, so the marginal correlation is positive, while for the off-diagonal entries they are negatively correlated.

Peer loss When $\text{Sgn}(\Delta) = I_{2 \times 2}$, $M_S(y, y') = 1$ if $y = y'$ and $0$ otherwise. $\tilde{\ell}(\cdot)$ as defined in Eqn. (2) then reduces to the following form:

$\mathbb{1}_{\text{peer}}(f(x_i), \tilde{y}_i) = \mathbb{1}(f(x_i), \tilde{y}_i) - \mathbb{1}(f(x_{i^p_1}), \tilde{y}_{i^p_2})$ (3)

To see this, note for instance that $1 - M_S(f(x_i) = +1, \tilde{y}_i = +1) = 1 - M(2, 2) = 1 - 1 = 0 = \mathbb{1}(f(x_i) = +1, \tilde{y}_i = +1)$. Replacing $\mathbb{1}(\cdot)$ with any generic loss $\ell(\cdot)$, we define:

(Peer Loss): $\ell_{\text{peer}}(f(x_i), \tilde{y}_i) = \ell(f(x_i), \tilde{y}_i) - \ell(f(x_{i^p_1}), \tilde{y}_{i^p_2})$ (4)

We name the above loss peer loss. This strikingly simple form of $\ell_{\text{peer}}(f(x_i), \tilde{y}_i)$ implies that knowing that $e_{-1} + e_{+1} < 1$ and $e^*_{-1} + e^*_{+1} < 1$ hold is all we need to specify $\ell_{\text{peer}}$.

Why do we not need the knowledge of noise rates explicitly? Both of the terms $\mathbb{1}(f(x_i), \tilde{y}_i)$ and $\mathbb{1}(f(x_{i^p_1}), \tilde{y}_{i^p_2})$ encode the knowledge of the noise rates implicitly. The carefully constructed form presented in Eqn. (3) makes peer loss invariant to noise (Lemma 3, a property we explain later). For a preview: if we take the expectation of $\mathbb{1}_{\text{peer}}(f(x_i) = +1, \tilde{y}_i = +1)$, we have $\mathbb{E}[\mathbb{1}_{\text{peer}}(f(x_i) = +1, \tilde{y}_i = +1)] = \mathbb{P}(f(X) = +1, \tilde{Y} = +1) - \mathbb{P}(f(X) = +1) \cdot \mathbb{P}(\tilde{Y} = +1)$, the marginal correlation between $f$ and $\tilde{Y}$, which is exactly what the entries of a $\Delta$ matrix defined between $f$ and $\tilde{Y}$ capture! The second term above is a product of marginals because of the independence of the peer samples $i^p_1, i^p_2$. Using the sign of $\Delta$ is all we need to recover this information measure in expectation. In other words, both the joint and the marginal distribution terms encode the noise rate information in an implicit way. Later we show that this measure is invariant under label noise, which gives peer loss its invariance to label noise and the ability to drop the requirement of knowing the noise rates. We instantiate this argument formally with Lemma 3 and establish a link between the above measure and the true risk of a classifier on the clean distribution. The rest of the presentation focuses on $\ell_{\text{peer}}$ (Eqn. (4)); $\ell_{\text{peer}}$ recovers $\mathbb{1}_{\text{peer}}$ by replacing $\ell$ with $\mathbb{1}$.

ERM with peer loss $\hat{f}^*_{\ell_{\text{peer}}} = \arg\min_{f \in \mathcal{F}} \hat{R}_{\ell_{\text{peer}}, \tilde{D}}(f) = \arg\min_{f \in \mathcal{F}} \frac{1}{N} \sum_{n=1}^{N} \ell_{\text{peer}}(f(x_n), \tilde{y}_n)$. Note again that the definition of $\ell_{\text{peer}}$ does not require knowledge of either $e_{+1}, e_{-1}$ or $e^*_{+1}, e^*_{-1}$.
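The following sketch performs the ERM above with a logistic base loss via plain gradient descent. It is our illustration, not the authors' released code; the peer indices are drawn as two independent shuffles, which matches the random pairing of Eqn. (4) up to rare index collisions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def train_peer_logistic(X, y_tilde, lr=0.1, epochs=500, seed=0):
    """Minimize (1/N) sum_n [ l(f(x_n), y~_n) - l(f(x_{n1}), y~_{n2}) ]
    with l the logistic loss and f(x) = w.x; no noise rates needed."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        i1, i2 = rng.permutation(n), rng.permutation(n)
        # grad of log(1 + exp(-y w.x)) w.r.t. w is -sigmoid(-y w.x) * y * x
        g_base = -(sigmoid(-y_tilde * (X @ w)) * y_tilde) @ X / n
        g_peer = -(sigmoid(-y_tilde[i2] * (X[i1] @ w)) * y_tilde[i2]) @ X[i1] / n
        w -= lr * (g_base - g_peer)   # the peer term enters with a minus sign
    return w
```

Resampling the peer indices at every step keeps the second term an estimate of the product-of-marginals term discussed above.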
" }, { "heading": "4.3 PROPERTY OF PEER LOSS", "text": "We now present a key property of peer loss: its risk over the noisy labels is simply an affine transformation of its true risk on clean data. We denote by $\mathbb{E}_{\mathcal{D}}[\ell_{\text{peer}}(f(X), Y)]$ the expected peer loss of $f$ when $(X, Y)$, as well as its peer samples, are drawn i.i.d. from distribution $\mathcal{D}$.

Lemma 3. $\mathbb{E}_{\tilde{\mathcal{D}}}[\ell_{\text{peer}}(f(X), \tilde{Y})] = (1 - e_{-1} - e_{+1}) \cdot \mathbb{E}_{\mathcal{D}}[\ell_{\text{peer}}(f(X), Y)]$.

The above lemma states that peer loss is invariant to label noise in expectation; we have also observed this effect empirically in our experiments. Therefore, minimizing it over noisy labels is equivalent to minimizing over the true distribution. The theorems below establish the connection between $\mathbb{E}_{\mathcal{D}}[\ell_{\text{peer}}(f(X), Y)]$, the expected peer loss over clean data, and the true risk. Denote $\tilde{f}^*_{\mathbb{1}_{\text{peer}}} = \arg\min_{f \in \mathcal{F}} R_{\mathbb{1}_{\text{peer}}, \tilde{\mathcal{D}}}(f)$. With Lemma 3, we can easily prove the following:

Theorem 2. [Optimality guarantee with equal prior] When $p = 0.5$, $\tilde{f}^*_{\mathbb{1}_{\text{peer}}} \in \arg\min_{f \in \mathcal{F}} R_{\mathcal{D}}(f)$.

The above theorem states that for a class-balanced dataset with $p = 0.5$, peer loss induces the same minimizer as the one that minimizes the 0-1 loss on the clean data. Removing the constraint of $\mathcal{F}$, i.e., $\tilde{f}^*_{\mathbb{1}_{\text{peer}}} = \arg\min_f R_{\mathbb{1}_{\text{peer}}, \tilde{\mathcal{D}}}(f)$, implies $\tilde{f}^*_{\mathbb{1}_{\text{peer}}} = f^*$. In practice we can balance the dataset such that $p \rightarrow 0.5$. When $p \neq 0.5$, denote $\Delta_p = \mathbb{P}(Y = +1) - \mathbb{P}(Y = -1)$; we have the following theorem:

Theorem 3. [Approximate optimality guarantee with unequal prior] When $p \neq 0.5$, suppose the following conditions hold: (1) $e_{-1}, e_{+1} < 0.5$; (2) $(1 - e) \cdot e_{-1} + e \cdot e_{+1} > e$; (3) $(1 - e) \cdot e_{+1} + e \cdot e_{-1} > e$, where $e := \frac{1}{2} - \frac{\epsilon}{|\Delta_p|}$. Then $|R_{\mathcal{D}}(\tilde{f}^*_{\mathbb{1}_{\text{peer}}}) - \min_{f \in \mathcal{F}} R_{\mathcal{D}}(f)| \leq 2\epsilon(\bar{\ell} - \underline{\ell}), \forall \epsilon \leq |\Delta_p|/2$, if $\ell$ is bounded, with $\bar{\ell}, \underline{\ell}$ denoting its max and min.

Condition (1) is a well-adopted assumption in the literature on learning with noisy labels. When $e_{+1}, e_{-1} > e$, conditions (2) and (3) hold: $(1 - e) \cdot e_{-1} + e \cdot e_{+1} > (1 - e) \cdot e + e \cdot e = e$ and $(1 - e) \cdot e_{+1} + e \cdot e_{-1} > (1 - e) \cdot e + e \cdot e = e$. When $|\Delta_p|$ is small, i.e., $p$ is closer to 0.5, this condition becomes weaker, as we can afford a small $\epsilon$ but also a small $e$.

Multi-class extension Our results in this section largely generalize to the multi-class setting. Suppose we have $K$ label classes, denoted $\{1, 2, ..., K\}$. We denote by $Q$ a transition matrix that characterizes the relationship between the noisy label $\tilde{Y}$ and the true label $Y$. The $(i, j)$ entry of $Q$ is defined as $Q_{ij} = \mathbb{P}(\tilde{Y} = j | Y = i)$, and we write $Q_{ij} = q_{ij}$. For many classes of noise matrices, the $M(\cdot)$ matrix is simply a diagonal matrix. Consider the following case: suppose the noisy labels have a uniform probability of flipping to any wrong class, that is, $q_{ij} = q_{ik}$ for all $j \neq k \neq i$. This condition allows us to define $K$ new quantities $e_i = q_{ij}$ for all $i \neq j$, with $q_{ii} = 1 - \sum_{j \neq i} e_j$. We show that $M(\cdot)$ is a diagonal matrix when $\sum_{j=1}^{K} e_j < 1$, a condition similar to $e_{-1} + e_{+1} < 1$. Adapting our proof of Lemma 3, we also have (derivation provided in the Appendix) $\mathbb{E}_{\tilde{\mathcal{D}}}[\mathbb{1}_{\text{peer}}(f(X), \tilde{Y})] = (1 - \sum_{j=1}^{K} e_j) \cdot \mathbb{E}_{\mathcal{D}}[\mathbb{1}_{\text{peer}}(f(X), Y)]$. The above again helps us reach the conclusion that minimizing peer loss leads to the same minimizer as on the clean data. We provide experiment results for peer loss with multi-class labels in Section 5." }, { "heading": "4.4 α-WEIGHTED PEER LOSS", "text": "We take a further look at the case of $p \neq 0.5$. Denote $R_{+1}(f) = \mathbb{P}(f(X) = -1 | Y = +1)$ and $R_{-1}(f) = \mathbb{P}(f(X) = +1 | Y = -1)$. It is easy to prove:

Lemma 4. Minimizing $\mathbb{E}[\mathbb{1}_{\text{peer}}(f(X), \tilde{Y})]$ is equivalent to minimizing $R_{-1}(f) + R_{+1}(f)$.

However, minimizing the true risk $R_{\mathcal{D}}(f)$ is equivalent to minimizing $p \cdot R_{+1}(f) + (1 - p) \cdot R_{-1}(f)$, a weighted sum of $R_{+1}(f)$ and $R_{-1}(f)$. This observation, and the failure to reproduce the strong theoretical guarantee when $p \neq 0.5$, motivated us to study an α-weighted version of peer loss to make it robust to the case $p \neq 0.5$. We propose the following α-weighted peer loss, adding a weight $\alpha \geq 0$ to the second, peer, term:

(α-Peer Loss): $\ell_{\alpha\text{-peer}}(f(x_i), \tilde{y}_i) = \ell(f(x_i), \tilde{y}_i) - \alpha \cdot \ell(f(x_{i^p_1}), \tilde{y}_{i^p_2})$ (5)

Denote by $\mathbb{1}_{\alpha\text{-peer}}$ the loss $\ell_{\alpha\text{-peer}}$ with $\ell$ replaced by $\mathbb{1}$, by $\tilde{f}^*_{\mathbb{1}_{\alpha\text{-peer}}} = \arg\min_{f \in \mathcal{F}} R_{\mathbb{1}_{\alpha\text{-peer}}, \tilde{\mathcal{D}}}(f)$ the optimal classifier under $\mathbb{1}_{\alpha\text{-peer}}$, and $\Delta_{\tilde{p}} = \mathbb{P}(\tilde{Y} = +1) - \mathbb{P}(\tilde{Y} = -1)$. Then we have:

Theorem 4. Let $\alpha = 1 - (1 - e_{-1} - e_{+1}) \cdot \frac{\Delta_p}{\Delta_{\tilde{p}}}$. Then $\tilde{f}^*_{\mathbb{1}_{\alpha\text{-peer}}} \in \arg\min_{f \in \mathcal{F}} R_{\mathcal{D}}(f)$.

Denote $\alpha^* := 1 - (1 - e_{-1} - e_{+1}) \cdot \frac{\Delta_p}{\Delta_{\tilde{p}}}$. Several remarks follow: (1) When $p = 0.5$, we have $\alpha^* = 1$, recovering the earlier definition of $\ell_{\text{peer}}$. (2) When $e_{-1} = e_{+1}$, $\alpha^* = 0$, recovering $\ell$ for the clean learning setting. (3) When the signs of $\mathbb{P}(Y = +1) - \mathbb{P}(Y = -1)$ and $\mathbb{P}(\tilde{Y} = +1) - \mathbb{P}(\tilde{Y} = -1)$ are the same, $\alpha^* < 1$; otherwise $\alpha^* > 1$. In other words, when the noise reverses the relative order of $\mathbb{P}(Y = +1)$ and $\mathbb{P}(Y = -1)$, $\alpha^* > 1$, and vice versa. (4) Knowing $\alpha^*$ requires certain knowledge of $e_{+1}, e_{-1}$ when $p \neq 0.5$. Though we do not claim this knowledge, this result implies that tuning $\alpha$ (using validation data) may improve performance.

Theorems 2 and 4 imply that performing ERM with $\mathbb{1}_{\alpha^*\text{-peer}}$, i.e., $\hat{f}^*_{\mathbb{1}_{\alpha^*\text{-peer}}} = \arg\min_f \hat{R}_{\mathbb{1}_{\alpha^*\text{-peer}}, \tilde{D}}(f)$, leads to a classifier converging to $f^*$:

Theorem 5. With probability at least $1 - \delta$, $R_{\mathcal{D}}(\hat{f}^*_{\mathbb{1}_{\alpha^*\text{-peer}}}) - R^* \leq \frac{2(1 + \alpha^*)}{1 - e_{-1} - e_{+1}} \sqrt{\frac{\log 2/\delta}{2N}}$.
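For intuition, the closed form of Theorem 4 is easy to evaluate. The sketch below is illustrative only, since $\alpha^*$ depends on the true noise rates, which are unknown in practice (our experiments instead tune $\alpha$ on a noisy validation set); it reproduces remarks (1) and (2).

```python
def alpha_star(p, e_plus, e_minus):
    """alpha* = 1 - (1 - e_{-1} - e_{+1}) * Delta_p / Delta_p~ (Theorem 4)."""
    delta_p = 2 * p - 1                               # P(Y=+1) - P(Y=-1)
    p_noisy = p * (1 - e_plus) + (1 - p) * e_minus    # P(Y~ = +1)
    delta_p_noisy = 2 * p_noisy - 1
    return 1 - (1 - e_minus - e_plus) * delta_p / delta_p_noisy

print(alpha_star(p=0.5, e_plus=0.4, e_minus=0.3))  # 1.0: plain peer loss
print(alpha_star(p=0.6, e_plus=0.2, e_minus=0.2))  # 0.0: symmetric noise
```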
(2) α < 1,max{e+1, e−1} < 0.5, and `′′(t, y) = `′′(t,−y).\n(1) states that f∗` not only achieves the smallest risk over (X,Y ) but also performs the worst on the “opposite” distribution with flipped labels (X,−Y ). (2) `′′(t, y) = `′′(t,−y) is satisfied by some common loss function, such as square losses and logistic losses, as noted in (Natarajan et al., 2013),\nUnder the calibration condition, and denote the corresponding calibration function for `α-peer as Ψ`α-peer . Denote by f̂ ∗ `α-peer = arg minf∈F R̂`α-peer,D̃(f) := 1 N ∑N n=1 `α-peer(f(xn), ỹn). We have the following generalization bound: Theorem 7. The following generalization bound holds for `α∗-peer with probability at least 1− δ:\nRD(f̂ ∗ `α∗ -peer )−R∗ ≤ 1 1− e−1 − e+1 ·Ψ−1`α∗ -peer ( min f∈F R`α∗ -peer,D̃ (f)−min f R`α∗ -peer,D̃ (f)\n+ 2(1 + α∗)L · <(F) + 2 √ log 4/δ\n2N\n( 1 + (1 + α∗)(¯̀− `) )) ,\nwhere <(F) is Rademacher complexity of F . Convexity In experiments, we use neural networks which are more robust to non-convex loss functions. We provide sufficient conditions for R`α-peer,D̃(f) to be convex in Appendix (Lemma 8)." }, { "heading": "5 EXPERIMENTS", "text": "We implemented a two-layer ReLU Multi-Layer Perceptron (MLP) for classification tasks on 10 UCI Benchmarks and applied our peer loss to update their parameters. We show the robustness of peer loss with increasing rates of label noises on 10 real-world datasets. We compare the performance of our peer loss based method with surrogate loss method (Natarajan et al., 2013) (unbiased loss correction with known error rates), symmetric loss method (Ghosh et al., 2015), DMI (Xu et al., 2019), C-SVM (Liu et al., 2003) and PAM (Khardon & Wachman, 2007), which are state-of-the-art methods for dealing with random binary-classification noises, as well as a neural network solution with binary cross entropy loss (NN). We use a cross-validation set to tune the parameters specific to the algorithms. For surrogate loss, we use the true error rates e−1 and e+1 instead of learning them on the validation set. Thus, surrogate loss could be considered a favored and advantaged baseline method. Accuracy of a classification algorithm is defined as the fraction of examples in the test set classified correctly with respect to the clean and true label. For given noise rates e+1 and e−1, labels of the training data are flipped accordingly.\nA subset of the experiment results are shown in Table 1. A full table with all details can be found in Appendix. Equalized Prior means that we pre-sample the dataset to guarantee p = 0.5. For this case we used `peer without α (or rather α = 1 as in `α-peer). For p 6= 0.5, we use validation dataset (using noisy labels) to tune α. Our method is competitive across all datasets and is even able to outperform the surrogate loss method with access to the true error rates in a number of datasets, as well as symmetric loss functions (which does not require the knowledge of noise rates when error rates are symmetric) and the recently proposed information theoretical loss (Xu et al., 2019). Fig. 2 shows that our method can prevent over-fitting when facing noisy labels.\nPreliminary results on multi-class classification We now provide some preliminary results on CIFAR-10 in Table 2. We followed the setup in (Xu et al., 2019) and used ResNet (He et al., 2016) as the underlying optimization solution. However, different from (Xu et al., 2019) whose noise only exists between specific class pairs, our noise is universal. 
A subset of the experiment results is shown in Table 1; a full table with all details can be found in the Appendix. Equalized Prior means that we pre-sample the dataset to guarantee $p = 0.5$; for this case we use $\ell_{\text{peer}}$ without α (or rather $\alpha = 1$ as in $\ell_{\alpha\text{-peer}}$). For $p \neq 0.5$, we use a validation dataset (with noisy labels) to tune α. Our method is competitive across all datasets and is even able to outperform the surrogate loss method, with its access to the true error rates, on a number of datasets, as well as the symmetric loss functions (which do not require knowledge of noise rates when error rates are symmetric) and the recently proposed information-theoretic loss (Xu et al., 2019). Fig. 2 shows that our method can prevent over-fitting when facing noisy labels.

Preliminary results on multi-class classification We now provide some preliminary results on CIFAR-10 in Table 2. We followed the setup in (Xu et al., 2019) and used ResNet (He et al., 2016) as the underlying optimization solution. However, different from (Xu et al., 2019), whose noise only exists between specific class pairs, our noise is universal: for each class, we flip the label to any other label with probability $\epsilon/9$, where $\epsilon$ is the error rate and 9 is the number of other classes. We show that peer loss is competitive against Cross Entropy and DMI (Xu et al., 2019). More results and complete details are available in the Appendix.

Conclusion This paper introduces peer loss, a family of loss functions that enables training a classifier over noisy labels without using explicit knowledge of the noise rates of the labels. We provide both theoretical justifications and extensive experimental evidence." }, { "heading": "ILLUSTRATION OF OUR IMPLEMENTATION OF PEER LOSS", "text": "" }, { "heading": "OTHER PEER PREDICTION FUNCTIONS", "text": "Other notable examples include the quadratic and logarithmic scoring functions, defined as follows:

Example 2. Quadratic scoring function: $S(r(i), r(j)) := 2\mathbb{P}(y(j) = r(j) | y(i) = r(i)) - \sum_{s \in \{-1, +1\}} \mathbb{P}(y(j) = s | y(i) = r(i))^2$.

Example 3. Logarithmic scoring function: $S(r(i), r(j)) := \log \mathbb{P}(y(j) = r(j) | y(i) = r(i))$.

We know the following is true:

Lemma 5 (Miller et al. (2005)). $S$ as defined in Examples 2 & 3 induces strict truthfulness when $y(i)$ and $y(j)$ are stochastically relevant,

with stochastic relevance defined as follows:

Definition 2. $y(i)$ and $y(j)$ are stochastically relevant if $\exists s \in \{-1, +1\}$ s.t. $\mathbb{P}(y(j) = s | y(i) = +1) \neq \mathbb{P}(y(j) = s | y(i) = -1)$.

Similarly, we conclude that when $r$ and $r^*$ are stochastically relevant, the correlated agreement scoring rule, the quadratic scoring rule, and the logarithmic scoring rule are strictly truthful. This stochastic relevance condition essentially states that the optimal classifier is statistically different from the noisy data source $r$ on some signals. Stochastic relevance is further satisfied in the binary classification setting when $e^*_{-1} + e^*_{+1} \neq 1$, under the assumption that $e_{-1} + e_{+1} < 1$, as similarly imposed in the learning with noisy labels literature (Scott et al., 2013; Natarajan et al., 2013; Scott, 2015).

Lemma 6. $r$ and $r^*$ are stochastically relevant if and only if $e^*_{-1} + e^*_{+1} \neq 1$.

Proof. Since $r^*$ can be written as a function of $X$ and $Y$, by conditional independence between $r$ and $X$ (conditional on $Y$) and the chain rule,

$$\mathbb{P}(r^* = -1, r = +1) = \mathbb{P}(Y = +1)(1 - e_{+1}) e^*_{+1} + \mathbb{P}(Y = -1) e_{-1} (1 - e^*_{-1}).$$

Since

$$\mathbb{P}(r = +1) = \mathbb{P}(Y = +1)(1 - e_{+1}) + \mathbb{P}(Y = -1) e_{-1}, \quad \mathbb{P}(r^* = +1) = \mathbb{P}(Y = +1)(1 - e^*_{+1}) + \mathbb{P}(Y = -1) e^*_{-1},$$

we have

$$\mathbb{P}(r^* = +1, r = -1) - \mathbb{P}(r^* = +1)\mathbb{P}(r = -1) = -\mathbb{P}(Y = +1)\mathbb{P}(Y = -1)(1 - e_{+1} - e_{-1})(1 - e^*_{+1} - e^*_{-1}). \quad (6)$$

For the binary signal case, the condition for stochastic relevance writes as follows:

$$\mathbb{P}(r = +1 | r^* = +1) \neq \mathbb{P}(r = +1 | r^* = -1) \Leftrightarrow \frac{\mathbb{P}(r = +1, r^* = +1)}{\mathbb{P}(r^* = +1)} \neq \frac{\mathbb{P}(r = +1, r^* = -1)}{\mathbb{P}(r^* = -1)} \Leftrightarrow \mathbb{P}(r = +1, r^* = -1) \neq \mathbb{P}(r = +1) \cdot \mathbb{P}(r^* = -1) \Leftrightarrow e^*_{-1} + e^*_{+1} \neq 1,$$

where the last step is a consequence of Eqn. (6)." }, { "heading": "PROOF FOR THEOREM 1", "text": "Proof. It is equivalent to prove $f^* = \arg\max_f \mathbb{E}_{(X, \tilde{Y}) \sim \tilde{\mathcal{D}}}[S(f(X), r)]$. First, that $S(\cdot)$ is able to elicit the Bayes optimal classifier $f^*$ ($r^*$) using $r$ implies that

$$\mathbb{E}_{\tilde{\mathcal{D}} | Y = +1}[S(r^*, r)] > \mathbb{E}_{\tilde{\mathcal{D}} | Y = +1}[S(\tilde{r}^*, r)], \quad \mathbb{E}_{\tilde{\mathcal{D}} | Y = -1}[S(r^*, r)] > \mathbb{E}_{\tilde{\mathcal{D}} | Y = -1}[S(\tilde{r}^*, r)], \quad \forall \tilde{r}^* \neq r^*.$$

First note that the expected score of a classifier over the data distribution further writes as $\mathbb{E}_{\tilde{\mathcal{D}}}[S(f(X), r)] = p \cdot \mathbb{E}_{\tilde{\mathcal{D}} | Y = +1}[S(f(X), r)] + (1 - p) \cdot \mathbb{E}_{\tilde{\mathcal{D}} | Y = -1}[S(f(X), r)]$. Denote by $f'$ a sub-optimal classifier that disagrees with $f^*$ on the set $X^+_{dis} = \{x | Y = +1 : f'(x) \neq f^*(x)\}$.
By sub-optimality of f ′ we know that := PX(X ∈ X+dis) > 0, as a zero measure X + dis does not affect its optimality. Construct the following reporting strategy that\nr̃∗ = { r∗, w.p. 1− −r∗, w.p.\nNot hard to check that ED̃|Y=+1 [ S(f ′(X), r) ] = ED̃|Y=+1 [ S(r̃∗, r) ] Yet we have the following fact that\nED̃|Y=+1 [ S(r̃∗, r) ] =(1− ) · ED̃|Y=+1 [ S(f∗(X), r)\n] + · ED̃|Y=+1 [ S(−f∗(X), r)\n] <ED̃|Y=+1 [ S(f∗(X), r) ] (7)\nwhere the inequality is due to strict truthfulness of S and the fact that > 0. We similarly conclude that\nED̃|Y=−1 [ S(r̃∗, r) ] < ED̃|Y=−1 [ S(f∗(X), r) ] (8)\nCombine Eqn. (7) and (8) we conclude the proof." }, { "heading": "PROOF FOR LEMMA 1", "text": "Proof. Being categorical means P(r = −y|r∗ = y) < P(r = −y), y ∈ {−1,+1}\nwhich further implies P(r = −y, r∗ = y) < P(r = −y)P(r∗ = y), y ∈ {−1,+1}\nand P(r = y, r∗ = y) > P(r = y)P(r∗ = y), y ∈ {−1,+1}.\nConsider the following fact P(r = +1, r∗ = +1)\n=P(Y = +1)P(r = +1, r∗ = +1|Y = +1) + P(Y = −1)P(r = +1, r∗ = +1|Y = −1) =P(Y = +1)P(r = +1|r∗ = +1, Y = +1) · P(r∗ = +1|Y = +1) +P(Y = −1)P(r = +1|r∗ = +1, Y = −1) · P(r∗ = +1|Y = −1)\nSince r∗ can be written as a function of X and Y , due to conditional independence between r and X (conditional on Y ) we have\nP(r = +1|r∗ = +1, Y = +1) = P(r = +1|Y = +1) = 1− e+1, P(r = +1|r∗ = +1, Y = −1) = P(r = +1|Y = −1) = e−1\nTherefore P(r = +1, r∗ = +1) = P(Y = +1)(1− e+1)(1− e∗+1) + P(Y = −1) · e−1 · e∗−1 We also have P(r = +1) = P(Y = +1)(1− e+1) + P(Y = −1) · e−1 P(r∗ = +1) = P(Y = +1)(1− e∗+1) + P(Y = −1) · e∗−1 Then we have P(r = +1, r∗ = +1)− P(r = +1)P(r∗ = +1)\n=P(Y = +1)P(Y = −1)(1− e+1 − e−1)(1− e∗+1 − e∗−1) >0\nwhen 1 > e∗+1 + e ∗ −1." }, { "heading": "PROOF FOR LEMMA 2", "text": "Proof. Again recall that P(r∗ = +1, r = +1) = P(Y = +1)(1− e+1)(1− e∗+1) + P(Y = −1)e−1 · e∗−1\nP(r = +1) = P(Y = +1)(1− e+1) + P(Y = −1) · e−1 P(r∗ = +1) = P(Y = +1)(1− e∗+1) + P(Y = −1) · e∗−1\nThen we have P(r∗ = +1, r = +1)− P(r∗ = +1)P(r = +1)\n=P(Y = +1)P(Y = −1)(1− e+1 − e−1)(1− e∗+1 − e∗−1) >0\nwhen 1 − e+1 − e−1 > 0, 1 − e∗+1 − e∗−1 > 0. Interestingly this coincides with the condition imposed in (Natarajan et al., 2013). Similarly we can prove that\nP(r∗ = +1, r = −1)− P(r∗ = +1)P(r = −1) =− P(Y = +1)P(Y = −1)(1− e+1 − e−1)(1− e∗+1 − e∗−1) <0\nThe other entries for P(r∗ = −1, r = −1) − P(r∗ = −1)P(r = −1) and P(r∗ = −1, r = +1) − P(r∗ = −1)P(r = +1) are symmetric. Therefore the sign matrix of above score matrix is exactly the diagonal matrix." }, { "heading": "PROOF FOR LEMMA 3", "text": "Proof. 
We denote by Xip1 , Ỹip2 the random variable corresponding to the peer samples xip1 , ỹip2 .\nFirst we have E[`peer(f(X), Ỹ )] = E[`(f(X), Ỹ )]− E[`(f(Xip1 ), Ỹip2 )] Consider the two terms on the RHS separately.\nE[`(f(X), Ỹ )] =EX,Y=−1 [ P(Ỹ = −1|Y = −1) · `(f(X),−1) + P(Ỹ = +1|Y = −1) · `(f(X),+1) ] + EX,Y=+1 [ P(Ỹ = +1|Y = +1) · `(f(X),+1) + P(Ỹ = −1|Y = +1) · `(f(X),−1)\n] =EX,Y=−1 [ (1− e−1) · `(f(X),−1) + e−1 · `(f(X),+1)\n] + EX,Y=+1 [ (1− e+1) · `(f(X),+1) + e+1 · `(f(X),−1)\n] =EX,Y=−1 [ (1− e−1 − e+1) · `(f(X),−1) + e+1 · `(f(X),−1) + e−1 · `(f(X),+1)\n] + EX,Y=+1 [ (1− e−1 − e+1) · `(f(X),+1) + e−1 · `(f(X),+1) + e+1 · `(f(X),−1)\n] =(1− e−1 − e+1) · EX,Y [ `(f(X), y) ] + EX [ e+1 · `(f(X),−1) + e−1 · `(f(X),+1)\n] And consider the second term:\nE[`(f(Xip1 ), Ỹip2 )]\n=EX [`(f(X),−1)] · P(Ỹ = −1) + EX [`(f(X),+1)] · P(Ỹ = +1) =EX [ (e+1p+ (1− e−1)(1− p)) · `(f(X),−1) + ((1− e+1)p+ e−1(1− p)) · `(f(X),+1) ] =EX [ (1− e−1 − e+1)(1− p) · `(f(X),−1) + (1− e−1 − e+1)p · `(f(X),+1)\n] + EX [ (e+1p+ e+1(1− p)) · `(f(X),−1) + (e−1(1− p) + e−1p) · `(f(X),+1)\n] =(1− e−1 − e+1) · EX [`(f(Xj), Ỹk)] + EX [ e+1 · `(f(X),−1) + e−1 · `(f(X),+1)\n] Thus, E[`peer(f(X), Ỹ )] = E[`(f(X), Ỹ )]− E[`(f(Xj), Ỹk)] = (1− e−1 − e+1) · E[`peer(f(X), Y )]\nMulti-class extension Notice the following facts: E[1(f(X), Ỹ )]− E[1(f(Xip1 ), Ỹip2 )] = P(1(f(X) = Ỹ ))− P(f(Xip1 ) = Ỹip2 )\nand K∑ k=1 P(Y = k)qjk = P(Y = j)(1− ∑ k 6=j ek) + (1− P(Y = j))ej = (1− ∑ k ek)P(Y = j) + ej\nP(1(f(X) = Ỹ ))\n= K∑ k=1 P(Y = k) K∑ j=1 P(f(X) = j|Y = k)qjk\n= K∑ j=1 K∑ k=1 P(f(X) = j|Y = k)P(Y = k)qjk\n= K∑ j=1 P(f(X) = j|Y = j)P(Y = j)(1− ∑ k 6=j ek) + K∑ j=1 ∑ k 6=j P(f(X) = j|Y = k)P(Y = k)ej\n= K∑ j=1 P(f(X) = j|Y = j)P(Y = j)(1− ∑ k 6=j ek) + K∑ j=1 ej (P(f(X) = j)− P(f(X) = j|Y = j)P(Y = j)) =(1− ∑ k ek) K∑ j=1 P(f(X) = j|Y = j)P(Y = j) + K∑ j=1 ejP(f(X) = j)\nNow consider the following\nP(f(Xip1 ) = Ỹip2 )\n= K∑ j=1 P(f(X) = j)P(Ỹ = j)\n= K∑ j=1 P(f(X) = j) K∑ k=1 P(Y = k)qjk\n= K∑ j=1 P(f(X) = j)\n( (1−\n∑ k ek)P(Y = j) + ej ) Therefore\nE[1(f(X), Ỹ )]− E[1(f(Xip1 ), Ỹip2 )]\n=P(1(f(X) = Ỹ ))− P(f(Xip1 ) = Ỹip2 )\n=(1− ∑ k ek) K∑ j=1 (P(f(X) = j|Y = j)P(Y = j)− P(f(X) = j)P(Y = j))\nFor clean labels we have\nE[1(f(X), Y )] = K∑ j=1 P(f(X) = j|Y = j)P(Y = j)\nFor the second term we have\nE[1(f(Xip1 ), Yip2 )] = K∑ j=1 P(f(X) = j)P(Y = j)\nTherefore\nE[1(f(X), Y )]− E[1(f(Xip1 ), Yip2 )]\n= K∑ j=1 P(Y = k) (P(f(X) = j|Y = j)P(Y = j)− P(f(X) = j)P(Y = j))\nWe finish the proof." }, { "heading": "PROOF FOR THEOREM 2", "text": "Proof. From Lemma 3 we know\nE[`peer(f(X), Ỹ )] =(1− e−1 − e+1) · E[`peer(f(X), Y )]\n=(1− e−1 − e+1) · ( E[`(f(X), Y )]− E[`(f(Xip1 ), Yip2 )] ) =(1− e−1 − e+1) · E [ `(f(X), Y )]− 0.5 · EX [`(f(X),−1)]− 0.5 · EX [`(f(X),+1)]\n) When ` is the 0-1 loss we have `(f(X),−1) + `(f(X),+1) = 1,∀x, and therefore\nE[`peer(f(X), Ỹ )] = (1− e−1 − e+1) · ( E[`(f(X), Y )]− 1 ) With above we proved f̃∗\n1peer ∈ arg minf∈F RD(f)." }, { "heading": "PROOF FOR THEOREM 3", "text": "Proof. Our proof is inspired by our argument for p = 0.5. We ask the following question: if it is possible to show that Ỹ corresponds an error-flipped distribution of another distribution Ŷ whose marginals p̃Y is close to or equal to 0.5. Observe the following: randomly flipping Y with probability e uniformly, we will have a new distribution of labels Ŷ that satisfies:\np̃Y := P(Ŷ = +1) = P(Y = +1) · (1− e) + P(Y = −1) · e = p(1− 2e) + e.\nDenote by the tolerance of p̃Y : = |p̃Y − 0.5|. 
When e sets to be: 1 − 2e = |∆p| , we have |p̃Y − 0.5| = . The next question we ask: is it possible to find parameters ê−1, ê+1:\nP(Ỹ = +1|Ŷ = −1) = ê−1, P(Ỹ = −1|Ŷ = +1) = ê+1 Note that\nP(Ỹ = −1|Y = +1) =P(Ỹ = −1|Ŷ = +1) · P(Ŷ = +1|Y = +1)\n+ P(Ỹ = −1|Ŷ = −1) · P(Ŷ = −1|Y = +1) =(1− e) · ê+1 + e · (1− ê−1)\nSimilarly P(Ỹ = +1|Y = −1) = (1 − e) · ê−1 + e · (1 − ê+1). Jointly we need the following equations to hold:\n(1− e) · ê+1 + e · (1− ê−1) = e+1 (1− e) · ê−1 + e · (1− ê+1) = e−1\nSolving above equations we have\nê−1 = (1− e) · e−1 + e · e+1 1− 2e − e\n1− 2e For a feasible solution to ê−1, ê+1, the conditions need to satisfy that (1) ê−1, ê+1 ≥ 0 and (2) ê−1 + ê+1 < 1. First of all, from (2) we have\ne · ( 1− (ê−1 + ê+1) ) = e−1 − ê−1\nThen a necessary condition for ê−1 + ê+1 < 1 is\ne−1 − ê−1 > 0⇔ e−1 < 1\n2 + e−1 2(1− 2e)\nThis condition holds as long as e−1, e+1 < 0.5. From ê−1, ê+1 ≥ 0 we have\n(1− e) · e−1 + e · e+1 > e, (1− e) · e+1 + e · e−1 > e (9)\nThis above jointly proves that R`α-peer,D̃(f) is equivalent to a peer loss defined over the noisy distribution of ŷ with error parameters ê−1, e+1.\nDenote by f∗F ∈ arg minf∈F RD(f). From the optimality of f̃∗1peer we have\nRD(f̃ ∗ 1peer )− p̃Y · EX [`(f̃∗1peer(X),+1)]− (1− p̃Y ) · EX [`(f̃ ∗ 1peer (X),+1)]\n≤ RD(f∗F )− p̃Y · EX [`(f∗F (X),+1)]− (1− p̃Y ) · EX [`(f∗F (X),+1)] (10)\nNote ∀f : ∣∣p̃Y · EX [`(f(X),+1)] + (1− p̃Y ) · EX [`(f(X),+1)] (11) − 0.5 · EX [`(f(X),+1)]− 0.5 · EX [`(f(X),−1)]\n∣∣ =|p̃Y − 0.5| ·\n∣∣EX [`(f(X),+1)]− EX [`(f(X),−1)]∣∣ ≤ (¯̀− `) (12)\nNotice that RD(f̃ ∗ 1peer )− p̃Y · EX [`(f̃∗1peer(X),+1)]− (1− p̃Y ) · EX [`(f̃ ∗ 1peer (X),+1)]\n≤ RD(f∗F )− p̃Y · EX [`(f∗F (X),+1)]− (1− p̃Y ) · EX [`(f∗F (X),+1)] ≤ RD(f∗F )− 0.5 · EX [`(f∗F (X),+1)]− 0.5 · EX [`(f∗F (X),+1)] + (¯̀− `) (13)\nCombining Eqn. 
(10, 12, 13) we have\nRD(f̃ ∗ 1peer )−RD(f∗F ) ≤p̃Y · EX [`(f(X),+1)] + (1− p̃Y ) · EX [`(f(X),+1)] − 0.5 · EX [`(f(X),+1)]− 0.5 · EX [`(f(X),−1)] + (¯̀− `) ≤2 (¯̀− `)" }, { "heading": "PROOF FOR LEMMA 4", "text": "" }, { "heading": "Proof.", "text": "E[1peer(f(X), Ỹ )] =(1− e−1 − e+1) · (P(f(X) = −1, Y = +1) + P(f(X) = +1, Y = −1)\n− P(f(X) = −1)P(Y = +1)− P(f(X) = +1)P(Y = −1)) =(1− e−1 − e+1) · (pR+1 + (1− p)R−1\n− p · P(f(X) = 1)− (1− p) · P(f(X) = −1)) =(1− e−1 − e+1) · (pR+1 + (1− p)R−1\n− p · ( pR+1 + (1− p)(1−R−1) ) − (1− p) · ( p(1−R+1) + (1− p)R−1) ) =2(1− e−1 − e+1) · p(1− p) · (R−1 +R+1 − 1)" }, { "heading": "PROOF FOR THEOREM 4", "text": "" }, { "heading": "Proof.", "text": "E[1α-peer(f(X), Ỹ )] =E[1(f(X), Ỹ )]− α · E[1(f(Xip1 ), Ỹip2 )]\n=E[1peer(f(X), Ỹ )] + (1− α) · E[1(f(Xip1 ), Ỹip2 )]− 1 =E[1peer(f(X), Ỹ )] + (1− α) · ( P(f(X) = −1) · P(Ỹ = −1) + P(f(X) = +1) · P(Ỹ = +1) ) − 1\n=E[1peer(f(X), Ỹ )] + (1− α) · (( p · (1−R+1) + (1− p) ·R−1 ) · P(Ỹ = −1)\n+ ( pR+1 + (1− p)(1−R−1) ) · P(Ỹ = +1) ) − 1\n=E[1peer(f(X), Ỹ )] + (1− α) · (P(Ỹ = +1)− P(Ỹ = −1)) · (pR+1 − (1− p)R−1) + C =2(1− e−1 − e+1) · p(1− p) · (R−1 +R+1 − 1)\n+ (1− α) · (P(Ỹ = +1)− P(Ỹ = −1)) · ( pR+1 − (1− p)R−1 ) + C\n=R+1 · ( 2(1− e−1 − e+1) · p(1− p) + (1− α)p · (P(Ỹ = +1)− P(Ỹ = −1)) )\n+R−1 · ( 2(1− e−1 − e+1) · p(1− p)− (1− α)(1− p) · (P(Ỹ = +1)− P(Ỹ = −1)) ) + C ′,\nwhere C,C ′ are constants: C = (1− α) · ( (1− p) · P(Ỹ = +1) + p · P(Ỹ = −1) ) − 1\nC ′ = C − 2(1− e−1 − e+1) · p(1− p)\nLet\np\n1− p = 2(1− e−1 − e+1) · p(1− p) + (1− α) · p · (P(Ỹ = +1)− P(Ỹ = −1)) 2(1− e−1 − e+1) · p(1− p)− (1− α) · (1− p) · (P(Ỹ = +1)− P(Ỹ = −1)) .\nthat α = 1− (1− e−1 − e+1) ·\n∆p ∆p̃ .\nwe obtain that\nE[1α-peer(f(X), Ỹ )] = (1− e−1 − e+1)E[1(f(X), Y )] + C ′, (14)\nconcluding our proof. The last equation Eqn.(14) also implies the following proposition:\nProposition 8. For any f, f ′, we have\nED̃[1α-peer(f(X), Ỹ )]−ED̃[1α-peer(f ′(X), Ỹ )] = (1−e−1−e+1)\n( E[1(f(X), Y )]−E[1(f ′(X), Y )] ) ." }, { "heading": "PROOF FOR THEOREM 5", "text": "Proof. ∀f , using Hoeffding’s inequality with probability at least 1− δ\n|R̂ 1α-peer,D̃ (f)−R 1α-peer,D̃(f)| ≤ √ log 2/δ\n2N (1α−peer − 1α−peer) ≤(1 + α) √ log 2/δ\n2N\nNote we also have the following:\nR 1α-peer,D̃ (f̂∗ 1α-peer )−R 1α-peer,D̃ (f∗ 1α-peer )\n≤R̂ 1α-peer,D̃ (f̂∗ 1α-peer )− R̂ 1α-peer,D̃ (f∗ 1α-peer ) + (R1α-peer,D(f̂ ∗ 1α-peer )− R̂ 1α-peer,D̃ (f̂∗ 1α-peer ))\n+ (R̂ 1α-peer,D̃ (f∗ 1α-peer )−R 1α-peer,D̃ (f∗ 1α-peer ))\n≤0 + 2 max f |R̂ 1α-peer,D̃ (f)−R 1α-peer,D̃(f)|\nNow we show\nRD(f̂ ∗ 1α∗ -peer )−R∗\n=RD(f̂ ∗ 1α∗ -peer )−RD(f∗1α∗ -peer) (Theorem 4)\n= 1 1− e−1 − e+1 ( R 1α∗ -peer,D̃ (f̂∗ 1α∗ -peer )−R 1α∗ -peer,D̃ (f∗ 1α∗ -peer ) ) (Proposition 8)\n≤ 2 1− e−1 − e+1 max f |R̂ 1α∗ -peer,D̃ (f)−R 1α∗ -peer,D̃ (f)|\n≤ 2(1 + α ∗)\n1− e−1 − e+1\n√ log 2/δ\n2N .\nWe conclude the proof." }, { "heading": "PROOF FOR THEOREM 6", "text": "Proof. We start with condition (1). 
From Lemma 3, E[`peer(f(X), Ỹ )] =(1− e−1 − e+1) · ( E[`(f(X), Y )]− 0.5 · E[`(f(X),−1)]− 0.5 · E[`(f(X),+1)] )\nThe above further derives as\nE[`peer(f(X), Ỹ )] =(1− e−1 − e+1) · ( E[`(f(X), Y )]− 0.5 · E[`(f(X), Y )]− 0.5 · E[`(f(X),−Y )] ) =\n1− e−1 − e+1 2\n· ( E[`(f(X), Y )]− E[`(f(X),−Y )] ) Denote by c := 21−e−1−e+1 we have\nE[`(f(X), Y )] = c · E[`peer(f(X), Ỹ )] + E[`(f(X),−Y )]\nThen\nE[`(f(X), Y )]− E[`(f∗` (X), Y )]− (E[`(f(X),−Y )]− E[`(f∗` (Y ),−Y ))] =c · (E[`peer(f(X), Ỹ )]− E[`peer(f∗` (X), Ỹ )]) ≤c · (E[`peer(f(X), Ỹ )]− E[`peer(f∗`peer(X), Ỹ )])\nFurther by our conditions we know\nE[`(f(X), Y )]−E[`(f∗` (X), Y )]− (E[`(f(X),−Y )]− E[`(f∗` (Y ),−Y ))] ≥ E[`(f(X), Y )]− E[`(f∗` (X), Y )].\nTherefore we have proved\nE[`peer(f(X), Ỹ )]− E[`peer(f∗`peer(X), Ỹ )] ≥ 1\nc\n( E[`(f(X), Y )]− E[`(f∗` (X), Y )] ) .\nSince `(·) is calibrated, and according to Proposition 8 and Theorem 2:\nED̃[1α-peer(f(X), Ỹ )]− ED̃[1α-peer(f ∗ ` (X), Ỹ )] =(1− e−1 − e+1) ( E[1(f(X), Y )]− E[1(f∗` (X), Y )] ) ≤(1− e−1 − e+1) ·Ψ−1` (E[`(f(X), Y )]− E[`(f ∗ ` (X), Y )]) ≤(1− e−1 − e+1) ·Ψ−1` (c · (E[`peer(f(X), Ỹ )]− E[`peer(f ∗ `peer(X), Ỹ )])).\nTherefore Ψ`peer(x) = 1 cΨ`( x 1−e−1−e+1 ). It’s straight-forward to verify that Ψ`peer(x) satisfies the conditions in Definition 1. We conclude the proof.\nNow we check condition (2). Again, from previously, we know the following holds for a certain p̂y = py(1− ey) + (1− py)e−y where p+1 = p, p−1 = 1− p:\nE[`α-peer(f(X), Ỹ )] =E[`(f(X), Ỹ )− α · `(f(X), Ỹk)]\n=E [ (1− eY )`(f(X), Y ) + eY `(f(X),−Y )− α · p̂Y `(f(X), Y )− α · (1− p̂Y )`(f(X),−Y ) ] =E [ (1− eY − αp̂Y )`(f(X), Y ) + (eY − α · (1− p̂Y ))`(f(X),−Y )\n] Let φ(f(X) · Y ) := `(f(X), Y ), we have\nE[`α-peer(f(X), Ỹ )) =E [ (1− eY − αp̂Y )φ(f(X) · Y ) + (eY − α · (1− p̂Y ))φ(−f(X) · Y ) ] :=E[ϕ(f(X) · Y )]\nWe first introduce a Theorem:\nTheorem 9 (Theorem 6, (Bartlett et al., 2006)). Let ϕ be convex. Then ϕ is classification-calibrated if and only if it is differentiable at 0 and ϕ′ < 0.\nWe now show that ϕ is convex:\nϕ′′(β) =(1− eY − αp̂Y ) · φ′′(β) + (eY − α · (1− p̂Y ))φ′′(−β) =(1− eY − αp̂Y ) · φ′′(β) + (eY − α · (1− p̂Y ))φ′′(β) =(1− eY − αp̂Y + eY − α · (1− p̂Y ))φ′′(β) =(1− α)φ′′(β) > 0\nwhen α < 1. The last inequality is due to the fact that ` is convex.\nSecondly we show the first derivative of ϕ is negative at 0: ϕ′(0) < 0:\nϕ′(0) =(1− eY − αp̂Y ) · φ′(0)− (eY − α · (1− p̂Y ))φ′(0) =(1− 2eY + α(1− 2p̂Y ))φ′(0) (15)\nNote that\np̂y = py(1− ey) + (1− py)e−y\nPlug back to Eqn. (15) we have\nϕ′(0) =(1− eY − αp̂Y ) · φ′(0)− (eY − α · (1− p̂Y ))φ′(0) = ( 1− 2eY + α(1− 2p̂Y ) ) φ′(0)\n= ( (1− αpy)(1− 2ey) + α(1− py)(1− e−y) ) φ′(0) (16)\nSince (1 − αpy)(1 − 2ey) + α(1 − py)(1 − e−y) > 0 and φ′(0) < 0 (due to calibration property of `, Theorem 6 of Bartlett et al. (2006)), we proved that ϕ′(0) < 0. Then based on Theorem 6 of Bartlett et al. (2006), we know ``α-peer is classification calibrated." }, { "heading": "PROOF FOR THEOREM 7", "text": "Proof. We first prove the following Rademacher complexity bound\nLemma 7. Let <(F) denote the Rademacher complexity of F . L denote the Lipschitz constant of `. 
Then, with probability at least 1 − δ,

max_{f∈F} |R̂_{ℓ_{α-peer}, D̃}(f) − R_{ℓ_{α-peer}, D̃}(f)| ≤ (1 + α)·L · ℜ(F) + sqrt( log(4/δ) / (2N) ) · ( 1 + max ℓ_{α-peer} − min ℓ_{α-peer} ).

Note we also have the following for all α:

R_{ℓ_{α-peer}, D̃}(f̂*_{ℓ_{α-peer}}) − R_{ℓ_{α-peer}, D̃}(f*_{ℓ_{α-peer}})
≤ R̂_{ℓ_{α-peer}, D̃}(f̂*_{ℓ_{α-peer}}) − R̂_{ℓ_{α-peer}, D̃}(f*_{ℓ_{α-peer}}) + ( R_{ℓ_{α-peer}, D̃}(f̂*_{ℓ_{α-peer}}) − R̂_{ℓ_{α-peer}, D̃}(f̂*_{ℓ_{α-peer}}) ) + ( R̂_{ℓ_{α-peer}, D̃}(f*_{ℓ_{α-peer}}) − R_{ℓ_{α-peer}, D̃}(f*_{ℓ_{α-peer}}) )
≤ 0 + 2 max_{f∈F} |R̂_{ℓ_{α-peer}, D̃}(f) − R_{ℓ_{α-peer}, D̃}(f)|.

Then, applying the calibration condition, we have

R_D(f̂*_{ℓ_{α*-peer}}) − R*
= ( 1 / (1 − e_{−1} − e_{+1}) ) · ( R_{1_{α*-peer}, D̃}(f̂*_{ℓ_{α*-peer}}) − R_{1_{α*-peer}, D̃}(f*) )   (Proposition 8)
= ( 1 / (1 − e_{−1} − e_{+1}) ) · ( R_{1_{α*-peer}, D̃}(f̂*_{ℓ_{α*-peer}}) − R_{1_{α*-peer}, D̃}(f̃*_{1_{α*-peer}}) )   (Theorem 3)
≤ ( 1 / (1 − e_{−1} − e_{+1}) ) · Ψ^{−1}_{ℓ_{α*-peer}}( min_{f∈F} R_{ℓ_{α*-peer}, D̃}(f) − min_f R_{ℓ_{α*-peer}, D̃}(f) + R_{ℓ_{α*-peer}, D̃}(f̂*_{ℓ_{α*-peer}}) − R_{ℓ_{α*-peer}, D̃}(f*_{ℓ_{α*-peer}}) )   (Calibration of 1_{α*-peer})
≤ ( 1 / (1 − e_{−1} − e_{+1}) ) · Ψ^{−1}_{ℓ_{α*-peer}}( min_{f∈F} R_{ℓ_{α*-peer}, D̃}(f) − min_f R_{ℓ_{α*-peer}, D̃}(f) + 2 max_{f∈F} |R̂_{ℓ_{α*-peer}, D̃}(f) − R_{ℓ_{α*-peer}, D̃}(f)| )
≤ ( 1 / (1 − e_{−1} − e_{+1}) ) · Ψ^{−1}_{ℓ_{α*-peer}}( min_{f∈F} R_{ℓ_{α*-peer}, D̃}(f) − min_f R_{ℓ_{α*-peer}, D̃}(f) + 2(1 + α*)·L · ℜ(F) + 2·sqrt( log(4/δ) / (2N) ) · ( 1 + max ℓ_{α*-peer} − min ℓ_{α*-peer} ) ),   (Lemma 7)

with probability at least 1 − δ." }, { "heading": "PROOF FOR LEMMA 7", "text": "Proof. Due to the random sampling, via Hoeffding's inequality we first have that there exists some p̂_{ỹ_n} ∈ (0, 1) such that, with probability at least 1 − δ,

| (1/N)·Σ_{n=1}^{N} ℓ_{α-peer}(f(x_n), ỹ_n) − (1/N)·Σ_{n=1}^{N} ( ℓ(f(x_n), ỹ_n) − α · p̂_{ỹ_n}·ℓ(f(x_n), ỹ_n) − α · (1 − p̂_{ỹ_n})·ℓ(f(x_n), −ỹ_n) ) | ≤ sqrt( log(2/δ) / (2N) ) · ( max ℓ_{α-peer} − min ℓ_{α-peer} ).

Define the following loss function:

ℓ̃(x_n, ỹ_n) := ℓ(f(x_n), ỹ_n) − α · p̂_{ỹ_n}·ℓ(f(x_n), ỹ_n) − α · (1 − p̂_{ỹ_n})·ℓ(f(x_n), −ỹ_n).

Via the Rademacher bound on the maximal deviation, we have, with probability at least 1 − δ,

max_{f∈F} |R̂_{ℓ̃, D̃}(f) − R_{ℓ̃, D̃}(f)| ≤ 2 · ℜ(ℓ̃ ∘ F) + sqrt( log(1/δ) / (2N) ).   (17)

Since ℓ is L-Lipschitz, due to the linear combination, ℓ̃ is (1 + α)L-Lipschitz. Based on the Lipschitz composition of Rademacher averages, we have

ℜ(ℓ̃ ∘ F) ≤ (1 + α)·L · ℜ(F).

Therefore, via the union bound, we know that with probability at least 1 − 2δ:

| (1/N)·Σ_{n=1}^{N} ℓ_{α-peer}(f(x_n), ỹ_n) − R_{ℓ_{α-peer}, D̃}(f) |
= | (1/N)·Σ_{n=1}^{N} ℓ_{α-peer}(f(x_n), ỹ_n) − R̂_{ℓ̃, D̃}(f) + R̂_{ℓ̃, D̃}(f) − R_{ℓ_{α-peer}, D̃}(f) |
≤ | (1/N)·Σ_{n=1}^{N} ℓ_{α-peer}(f(x_n), ỹ_n) − R̂_{ℓ̃, D̃}(f) | + | R̂_{ℓ̃, D̃}(f) − R_{ℓ_{α-peer}, D̃}(f) |
≤ sqrt( log(2/δ) / (2N) ) · ( max ℓ_{α-peer} − min ℓ_{α-peer} ) + | R̂_{ℓ̃, D̃}(f) − R_{ℓ̃, D̃}(f) |
≤ sqrt( log(2/δ) / (2N) ) · ( max ℓ_{α-peer} − min ℓ_{α-peer} ) + (1 + α)·L · ℜ(F) + sqrt( log(1/δ) / (2N) )
≤ (1 + α)·L · ℜ(F) + sqrt( log(2/δ) / (2N) ) · ( 1 + max ℓ_{α-peer} − min ℓ_{α-peer} ).

In the above, R_{ℓ_{α-peer}, D̃}(f) = R_{ℓ̃, D̃}(f) because ℓ_{α-peer} and ℓ̃ share the same expected risk by construction. Plugging in the fact that ℓ_{α-peer} is linear in ℓ and the easy consequence that

max ℓ_{α-peer} − min ℓ_{α-peer} ≤ (1 + α)·(ℓ_max − ℓ_min),

and letting δ := δ/2, we conclude the proof." }, { "heading": "PROOF FOR LEMMA 8", "text": "Nonetheless, despite the fact that ℓ_{α-peer}(·) is not convex in general, [Lemma 5, (Natarajan et al., 2013)] informs us that as long as R̂_{ℓ_{α-peer}, D̃}(f) is close to some convex function, mirror-gradient-type algorithms will converge to a small neighborhood of the optimal point when performing ERM with ℓ_{α-peer}. A natural candidate for this convex function is the expectation of R̂_{ℓ_{α-peer}, D̃}(f), as R̂_{ℓ_{α-peer}, D̃}(f) → R_{ℓ_{α-peer}, D̃}(f) when N → ∞.

Lemma 8. When α < 1, max{e_{+1}, e_{−1}} < 0.5, and ℓ′′(t, y) = ℓ′′(t, −y), R_{ℓ_{α-peer}, D̃}(f) is convex.

Proof. This was proved in the proof for Theorem 6, when proving the classification-calibration property of ℓ_{α-peer} under condition (2)." 
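To connect the ℓ_{α-peer} loss analyzed in the proofs above to an implementation, here is a minimal PyTorch sketch. It is our illustrative rendition, not the authors' code: the choice of cross-entropy as the base loss ℓ and the use of in-batch shuffling to draw the peer samples (X_{i_{p1}}, Ỹ_{i_{p2}}) are assumptions made only for this sketch.

```python
import torch
import torch.nn.functional as F

def alpha_peer_loss(logits, noisy_labels, alpha):
    # ell_{alpha-peer}(f(x_n), y~_n) = ell(f(x_n), y~_n) - alpha * ell(f(x_{i_p1}), y~_{i_p2}):
    # the peer term pairs a randomly drawn input with an independently drawn noisy label.
    base = F.cross_entropy(logits, noisy_labels)
    p1 = torch.randperm(logits.size(0))  # i_{p1}: random pairing of inputs
    p2 = torch.randperm(logits.size(0))  # i_{p2}: independent random pairing of labels
    peer = F.cross_entropy(logits[p1], noisy_labels[p2])
    return base - alpha * peer
```

With α = 1 this reduces to the plain peer loss; keeping α < 1 matches the condition under which Lemma 8 establishes convexity.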
}, { "heading": "EXPERIMENT", "text": "" }, { "heading": "IMPLEMENTATION DETAILS", "text": "We implemented neural networks (LeCun et al., 2015) for classification on 10 UCI Benchmarks and applied our peer loss to update their parameters. For surrogate loss, we use the true error rates e−1 and e+1 instead of learning them on the validation set. Thus, surrogate loss could be considered a favored and advantaged baseline method. On each benchmark, we use the same hyper-parameters for all neural network based methods. For C-SVM, we fix one of the weights to 1, and tune the other. For PAM, we tune the margin." }, { "heading": "RESULTS", "text": "The full experiment results are shown in Table.??. Equalized Prior indicates that in the corresponding experiments, we resample to make sure P(Y = +1) = P(Y = −1) and we fix α = 1 in these experiments. Our method is competitive in all the datasets and even able to outperform the surrogate\nloss method with access to the true error rates in most of them. C-SVM is also robust when error rates are symmetric, and is competitive in 8 datasets.\nFrom Figure.4, we can see our peer loss can prevent over-fitting, which is also part of the reason of its achieved high robustness across different datasets and error rates." } ]
2019
null
SP:da2ce3fdc90fc70d3f51e3e26fb844b4b1759af5
[ "The paper builds a privacy-preserving training framework within a Trusted Execution Environment (TEE) such as Intel SGX. The work is heavily inspired from Slalom, which does privacy-preserving inference in TEEs. The main drawbacks of Slalom when extending to training are (1) weight quantization needs to be dynamics as they change during training, and (2) pre-processing step of Slalom to compare u = f(r) isn't effective as the weights change, and running this within TEE is no better than running the full DNN within TEE. In addition, Goten also makes the weights private as opposed to Slalom. Overall, this is a very important contribution towards privacy preserving training and the paper takes a strong practical and implementation-focused approach by considering issues arising due to memory limitations in TEE and the performance implications of default Linux paging.", "The paper proposes a method for privacy-preserving training and evaluation of DNNs. The method is based on a combination of hardware support from a trusted execution enclave (Intel SGX) and an algorithm for offloading intensive computation to unsecure GPU devices and communicating with the trusted environment without losing security guarantees during communication. Compared to related work on a similar system (Slalom), the proposed system enables secure training in addition to inference. The approach is based on the use of additive secret sharing to relegate chunks of computation to independent GPU servers." ]
Before we can see worldwide collaborative efforts in training machine-learning models or widespread deployments of prediction-as-a-service, we need to devise an efficient privacy-preserving mechanism that guarantees the privacy of all stakeholders (data contributors, model owner, and queriers). Slalom (ICLR '19) preserves privacy only for prediction, by leveraging both a trusted environment (e.g., Intel SGX) and an untrusted GPU. The challenges for enabling private training are explicitly left open – its pre-computation technique does not hide the model weights and fails to support dynamic quantization corresponding to the large changes in weight magnitudes during training. Moreover, it is not a true outsourcing solution since the (offline) pre-computation for a job takes as much time as computing the job locally with SGX, i.e., it only works before all pre-computations are exhausted. We propose Goten, a privacy-preserving framework supporting both training and prediction. We tackle all the above challenges by proposing a secure outsourcing protocol which 1) supports dynamic quantization, 2) hides the model weights from the GPU, and 3) performs better than a pure-SGX solution even if we perform the pre-computation online. Our solution leverages a non-colluding assumption which is often employed by cryptographic solutions aiming for practical efficiency (IEEE SP '13, Usenix Security '17, PoPETs '19). We use three servers, which can be reduced to two if the pre-computation is done offline. Furthermore, we implement our tailor-made memory-aware measures for minimizing the overhead when the SGX memory limit is exceeded (cf., EuroSys '17, Usenix ATC '19). Compared to a pure-SGX solution, our experiments show that Goten can speed up linear-layer computations in VGG by up to 40×, and achieve an overall speedup of 8.64× on VGG11.
[]
[ { "authors": [ "Martı́n Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard", "Manjunath Kudlur", "Josh Levenberg", "Rajat Monga", "Sherry Moore", "Derek Gordon Murray", "Benoit Steiner", "Paul A. Tucker", "Vijay Vasudevan", "Pete Warden", "Martin Wicke", "Yuan Yu", "Xiaoqiang Zheng" ], "title": "Tensorflow: A system for large-scale machine learning", "venue": "In OSDI, pp. 265–283,", "year": 2016 }, { "authors": [ "Martı́n Abadi", "Andy Chu", "Ian J. Goodfellow", "H. Brendan McMahan", "Ilya Mironov", "Kunal Talwar", "Li Zhang" ], "title": "Deep learning with differential privacy", "venue": "In CCS,", "year": 2016 }, { "authors": [ "Ahmad Abdelfattah", "Azzam Haidar", "Stanimire Tomov", "Jack J. Dongarra" ], "title": "Performance, design, and autotuning of batched GEMM for GPUs", "venue": "In ISC,", "year": 2016 }, { "authors": [ "Sergei Arnautov", "Bohdan Trach", "Franz Gregor", "Thomas Knauth", "André Martin", "Christian Priebe", "Joshua Lind", "Divya Muthukumaran", "Dan O’Keeffe", "Mark Stillwell", "David Goltzsche", "David M. Eyers", "Rüdiger Kapitza", "Peter R. Pietzuch", "Christof Fetzer" ], "title": "SCONE: secure linux containers with intel SGX", "venue": "In OSDI,", "year": 2016 }, { "authors": [ "Raad Bahmani", "Manuel Barbosa", "Ferdinand Brasser", "Bernardo Portela", "Ahmad-Reza Sadeghi", "Guillaume Scerri", "Bogdan Warinschi" ], "title": "Secure multiparty computation from SGX", "venue": "In Financial Crypt.,", "year": 2017 }, { "authors": [ "Donald Beaver" ], "title": "Efficient multiparty protocols using circuit randomization", "venue": "In CRYPTO,", "year": 1991 }, { "authors": [ "Raphael Bost", "Raluca Ada Popa", "Stephen Tu", "Shafi Goldwasser" ], "title": "Machine learning classification over encrypted data", "venue": "In NDSS,", "year": 2015 }, { "authors": [ "Florian Bourse", "Michele Minelli", "Matthias Minihold", "Pascal Paillier" ], "title": "Fast homomorphic evaluation of deep discretized neural networks", "venue": "In CRYPTO,", "year": 2018 }, { "authors": [ "Ferdinand Brasser", "Urs Müller", "Alexandra Dmitrienko", "Kari Kostiainen", "Srdjan Capkun", "Ahmad-Reza Sadeghi" ], "title": "Software grand exposure: SGX cache attacks are practical", "venue": "In USENIX Workshop on Offensive Technologies,", "year": 2017 }, { "authors": [ "Stefan Brenner", "Colin Wulf", "David Goltzsche", "Nico Weichbrodt", "Matthias Lorenz", "Christof Fetzer", "Peter R. Pietzuch", "Rüdiger Kapitza" ], "title": "SecureKeeper: Confidential zookeeper using Intel SGX", "venue": "In Middleware,", "year": 2016 }, { "authors": [ "Paul Bunn", "Rafail Ostrovsky" ], "title": "Secure two-party k-means clustering", "venue": "In CCS, pp", "year": 2007 }, { "authors": [ "Somnath Chakrabarti" ], "title": "SGX memory oversubscription, 2017", "venue": "http://caslab.csl.yale", "year": 2017 }, { "authors": [ "Raymond Cheng", "Fan Zhang", "Jernej Kos", "Warren He", "Nicholas Hynes", "Noah M. Johnson", "Ari Juels", "Andrew Miller", "Dawn Song" ], "title": "Ekiden: A platform for confidentiality-preserving, trustworthy, and performant smart contracts", "venue": "In IEEE EuroS&P,", "year": 2019 }, { "authors": [ "Sherman S.M. Chow", "Jie-Han Lee", "Lakshminarayanan Subramanian" ], "title": "Two-party computation model for privacy-preserving queries over distributed databases", "venue": "In NDSS. 
ISOC,", "year": 2009 }, { "authors": [ "Daniel Demmler", "Thomas Schneider", "Michael Zohner" ], "title": "ABY - A framework for efficient mixedprotocol secure two-party computation", "venue": "In NDSS. ISOC,", "year": 2015 }, { "authors": [ "Cynthia Dwork" ], "title": "Differential privacy", "venue": "In ICALP, pp. 1–12. Springer,", "year": 2006 }, { "authors": [ "Cynthia Dwork", "Krishnaram Kenthapadi", "Frank McSherry", "Ilya Mironov", "Moni Naor" ], "title": "Our data, ourselves: Privacy via distributed noise generation", "venue": "In EUROCRYPT,", "year": 2006 }, { "authors": [ "Cynthia Dwork", "Frank McSherry", "Kobbi Nissim", "Adam D. Smith" ], "title": "Calibrating noise to sensitivity in private data analysis", "venue": "In TCC,", "year": 2006 }, { "authors": [ "C. Feng" ], "title": "SGX protected memory limit in SGX, 2017", "venue": "https://software.intel.com/ en-us/forums/intel-software-guard-extensions-intel-sgx/topic/", "year": 2017 }, { "authors": [ "Matt Fredrikson", "Somesh Jha", "Thomas Ristenpart" ], "title": "Model inversion attacks that exploit confidence information and basic countermeasures", "venue": "In CCS,", "year": 2015 }, { "authors": [ "Ran Gilad-Bachrach", "Nathan Dowlin", "Kim Laine", "Kristin E. Lauter", "Michael Naehrig", "John Wernsing" ], "title": "CryptoNets: Applying neural networks to encrypted data with high throughput and accuracy", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Ian J. Goodfellow", "Yoshua Bengio", "Aaron C. Courville" ], "title": "Deep Learning. Adaptive computation and machine learning", "venue": null, "year": 2016 }, { "authors": [ "Suyog Gupta", "Ankur Agrawal", "Kailash Gopalakrishnan", "Pritish Narayanan" ], "title": "Deep learning with limited numerical precision", "venue": "In ICML, pp", "year": 2015 }, { "authors": [ "Lucjan Hanzlik", "Yang Zhang", "Kathrin Grosse", "Ahmed Salem", "Max Augustin", "Michael Backes", "Mario Fritz" ], "title": "MLCapsule: Guarded offline deployment of machine learning as a service", "venue": "CoRR abs/1808.00590,", "year": 2018 }, { "authors": [ "Susan Hohenberger", "Anna Lysyanskaya" ], "title": "How to securely outsource cryptographic computations", "venue": "In TCC,", "year": 2005 }, { "authors": [ "Tyler Hunt", "Congzheng Song", "Reza Shokri", "Vitaly Shmatikov", "Emmett Witchel" ], "title": "Chiron: Privacy-preserving machine learning as a service", "venue": "CoRR abs/1803.05961,", "year": 2018 }, { "authors": [ "Geetha Jagannathan", "Rebecca N. Wright" ], "title": "Privacy-preserving distributed k-means clustering over arbitrarily partitioned data", "venue": "In SIGKDD,", "year": 2005 }, { "authors": [ "Yangqing Jia" ], "title": "Learning Semantic Image Representations at a Large Scale", "venue": "PhD thesis,", "year": 2014 }, { "authors": [ "Yangqing Jia", "Evan Shelhamer", "Jeff Donahue", "Sergey Karayev", "Jonathan Long", "Ross B. Girshick", "Sergio Guadarrama", "Trevor Darrell" ], "title": "Caffe: Convolutional architecture for fast feature embedding", "venue": "In ACM International Conference on Multimedia,", "year": 2014 }, { "authors": [ "Chiraag Juvekar", "Vinod Vaikuntanathan", "Anantha Chandrakasan" ], "title": "GAZELLE: A low latency framework for secure neural network inference", "venue": "In USENIX Security,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "Commun. 
ACM,", "year": 2017 }, { "authors": [ "Roland Kunkel", "Do Le Quoc", "Franz Gregor", "Sergei Arnautov", "Pramod Bhatotia", "Christof Fetzer" ], "title": "Tensorscone: A secure tensorflow framework using intel SGX", "venue": "CoRR abs/1902.04413,", "year": 2019 }, { "authors": [ "Jian Liu", "Mika Juuti", "Yao Lu", "N. Asokan" ], "title": "Oblivious neural network predictions via MiniONN transformations", "venue": "In CCS,", "year": 2017 }, { "authors": [ "Chi-Keung Luk", "Robert Cohn", "Robert Muth", "Harish Patil", "Artur Klauser", "Geoff Lowney", "Steven Wallace", "Vijay Janapa Reddi", "Kim Hazelwood" ], "title": "Pin: building customized program analysis tools with dynamic instrumentation", "venue": "In PLDI,", "year": 2005 }, { "authors": [ "Payman Mohassel", "Yupeng Zhang" ], "title": "SecureML: A system for scalable privacy-preserving machine learning", "venue": "In IEEE S&P,", "year": 2017 }, { "authors": [ "Valeria Nikolaenko", "Stratis Ioannidis", "Udi Weinsberg", "Marc Joye", "Nina Taft", "Dan Boneh" ], "title": "Privacy-preserving matrix factorization", "venue": "In CCS,", "year": 2013 }, { "authors": [ "Valeria Nikolaenko", "Udi Weinsberg", "Stratis Ioannidis", "Marc Joye", "Dan Boneh", "Nina Taft" ], "title": "Privacy-preserving ridge regression on hundreds of millions of records", "venue": "In IEEE S&P,", "year": 2013 }, { "authors": [ "Olga Ohrimenko", "Felix Schuster", "Cédric Fournet", "Aastha Mehta", "Sebastian Nowozin", "Kapil Vaswani", "Manuel Costa" ], "title": "Oblivious multi-party machine learning on trusted processors", "venue": "In USENIX Security,", "year": 2016 }, { "authors": [ "Meni Orenbach", "Pavel Lifshits", "Marina Minkin", "Mark Silberstein" ], "title": "Eleos: ExitLess OS services for SGX enclaves", "venue": "In EuroSys,", "year": 2017 }, { "authors": [ "Meni Orenbach", "Yan Michalevsky", "Christof Fetzer", "Mark Silberstein" ], "title": "Cosmix: A compilerbased system for secure memory instrumentation and execution in enclaves", "venue": "In USENIX ATC,", "year": 2019 }, { "authors": [ "Le Trieu Phong", "Yoshinori Aono", "Takuya Hayashi", "Lihua Wang", "Shiho Moriai" ], "title": "Privacypreserving deep learning via additively homomorphic encryption", "venue": "IEEE Trans. Information Forensics and Security,", "year": 2018 }, { "authors": [ "Fahad Shaon", "Murat Kantarcioglu", "Zhiqiang Lin", "Latifur Khan" ], "title": "SGX-BigMatrix: A practical encrypted data analytic framework with trusted processors", "venue": "In CCS,", "year": 2017 }, { "authors": [ "Reza Shokri", "Vitaly Shmatikov" ], "title": "Privacy-preserving deep learning", "venue": "In CCS, pp. 1310–1321", "year": 2015 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Rohit Sinha", "Manuel Costa", "Akash Lal", "Nuno P. Lopes", "Sriram K. Rajamani", "Sanjit A. Seshia", "Kapil Vaswani" ], "title": "A design and verification methodology for secure isolated regions", "venue": "In PLDI,", "year": 2016 }, { "authors": [ "Aleksandra B. Slavkovic", "Yuval Nardi", "Matthew M. Tibbits" ], "title": "Secure logistic regression of horizontally and vertically partitioned distributed databases", "venue": "In ICDM,", "year": 2007 }, { "authors": [ "Raymond K.H. Tai", "Jack P.K. Ma", "Yongjun Zhao", "Sherman S.M. 
Chow" ], "title": "Privacy-preserving decision trees evaluation via linear functions", "venue": "In ESORICS,", "year": 2017 }, { "authors": [ "Qiang Tang", "Husen Wang" ], "title": "Privacy-preserving hybrid recommender system", "venue": "In AsiaCCS-SCC,", "year": 2017 }, { "authors": [ "Shruti Tople", "Karan Grover", "Shweta Shinde", "Ranjita Bhagwan", "Ramachandran Ramjee" ], "title": "Privado: Practical and secure DNN inference", "venue": "CoRR abs/1810.00602,", "year": 2018 }, { "authors": [ "Florian Tramèr", "Dan Boneh" ], "title": "Slalom: Fast, verifiable and private execution of neural networks in trusted hardware", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Florian Tramèr", "Fan Zhang", "Ari Juels", "Michael K Reiter", "Thomas Ristenpart" ], "title": "Stealing machine learning models via prediction APIs", "venue": "In USENIX Security,", "year": 2016 }, { "authors": [ "Jaideep Vaidya", "Hwanjo Yu", "Xiaoqian Jiang" ], "title": "Privacy-preserving SVM classification", "venue": "Knowl. Inf. Syst.,", "year": 2008 }, { "authors": [ "Stavros Volos", "Kapil Vaswani", "Rodrigo Bruno" ], "title": "Graviton: Trusted execution environments on GPUs", "venue": "In OSDI,", "year": 2018 }, { "authors": [ "Sameer Wagh", "Divya Gupta", "Nishanth Chandran" ], "title": "SecureNN: 3-party secure computation for neural network training", "venue": "PoPETs,", "year": 2019 }, { "authors": [ "Boyang Wang", "Ming Li", "Sherman S.M. Chow", "Hui Li" ], "title": "A tale of two clouds: Computing on data encrypted under multiple keys", "venue": "In IEEE CNS,", "year": 2014 }, { "authors": [ "Nico Weichbrodt", "Pierre-Louis Aublin", "Rüdiger Kapitza" ], "title": "sgx-perf: A performance analysis tool for Intel SGX enclaves", "venue": "In Middleware,", "year": 2018 }, { "authors": [ "pher De Sa" ], "title": "SWALP: Stochastic weight averaging in low-precision training", "venue": null, "year": 1943 }, { "authors": [ "Gilad-Bachrach" ], "title": "CryptoNet. It exploits non-linear functions supported by leveled homomorphic encryption (LHE) and parallel computation to improve the efficiency of neural network evaluation. However, it only supports limited activation function (x2 or sigmoid(x)) and pooling function (average pooling). The experiment results of CryptoNet showed that it is roughly 1000× slower than running a similar neural network in plaintext", "venue": null, "year": 2016 }, { "authors": [ "2017 Zhang", "2017 Liu et al", "Juvekar" ], "title": "Subsequent works (Mohassel", "venue": null, "year": 2017 }, { "authors": [ "Gazelle (Juvekar" ], "title": "2018) is the state-of-the-art cryptographic approach in terms of latency. It performs much better than CryptoNet/MiniONN by delicately choosing the HE scheme with optimized parameters to fit the hardware architecture. Gazelle has much lower latency than MiniONN/SecureML as its plaintext space is at most 20 bits", "venue": "(Juvekar et al.,", "year": 2018 }, { "authors": [], "title": "2016) proposed data-oblivious machine learning algorithms using SGX for training and prediction. Their work also defends against some potential side-channel attacks using oblivious operations. 
However, their algorithms cannot handle any layer of size that exceeds the amount of usable memory (90MB) in an enclave", "venue": null, "year": 2016 }, { "authors": [ "Shaon" ], "title": "The memory limit has been a huge drawback of SGX", "venue": null, "year": 2018 }, { "authors": [ "Volos" ], "title": "Graviton, an architecture for supporting TEE on GPU with the help of SGX, which supports neural network computation in particular, with near-native performance compared to untrusted GPU. However, they assume that an attacker cannot physically steal information from the GPU cores, which is questionable because GPU cores, unlike SGX", "venue": null, "year": 2018 }, { "authors": [ "Bahmani" ], "title": "SGX-based framework for general-purpose secure multi-party computation. On one hand, our work can be viewed as realizing a specific functionality under their framework at a conceptual level. On the other hand, the general-purpose treatment does not take into account the characteristics of neural network computations", "venue": "More importantly,", "year": 2017 }, { "authors": [ "Kunkel" ], "title": "TensorSCONE to port another popular DNN framework TensorFlow to SCONE. Our baseline approach is similar to this framework, but we provide our implementation to public for benchmarking. Privado (Tople et al., 2018) allows a model owner to outsource privacy-preserving DNN inference to an SGX-enabled cloud server. It guarantees that even a powerful cloud who sees the SGX enclave", "venue": null, "year": 2018 }, { "authors": [ "Mohassel", "Zhang", "Liu et al", "Gilad-Bachrach" ], "title": "2016), we do not protect the hyper-parameters such as the learning rate, the number of layers, the size of each layer etc. These could be inferred by the querier by timing the interaction with the server or by the server from the memory access", "venue": "(Juvekar et al.,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "While deep neural networks (DNN) can produce predictive models with unparalleled performance, its training phase requires enormous data as input. A single data owner may not possess enough data to train a good DNN. Multiple data owners, say, financial institutions, may want to collaborate in training DNNs. Yet, they are often expected to protect the privacy of the data contributors. This discourages any collaborative training over global-scale data that is otherwise promising (Cheng et al., 2019). Moreover, to perform prediction using a trained model, queriers need to submit their own private data (e.g., medical history). Meanwhile, the model owners want to protect the confidentiality of the trained model in the prediction phase as well. The exposure of the (parameters of a) model (to queriers or a third-party cloud server) may reveal information about its training data (Fredrikson et al., 2015), deterring the participation of data contributors. Also, the model itself is of high commercial value. These concerns hinder the deployment of prediction as a service.\nAn increasingly popular approach to ensure privacy is using a trusted execution environment (TEE) (Cheng et al., 2019; Tramèr & Boneh, 2019) and in particular, trusted processors, e.g., Intel Software Guard Extension (SGX). When a data provider sends some private data to a server equipped with SGX, it can initialize an enclave to receive the data in a confidential and authenticated way and subsequently operate on them. Even the untrusted server, who physically owns the enclave, cannot read or tamper the data inside the enclave. This paper investigates the following questions: Can we support DNN training (and prediction) by using SGX and untrusted GPU while still preserving the privacy of all stakeholders? If so, how much speedup do we gain by using GPU?" }, { "heading": "1.1 OUR BASELINE APPROACH: CAFFESCONE", "text": "Arnautov et al. (2016) propose SCONE, a secure container mechanism that allows developers to directly run applications in an SGX enclave with almost zero code change1. We combine SCONE with Caffe (Jia et al., 2014), an efficient open-source DNN framework, to build our baseline privacypreserving DNN framework – CaffeSCONE. Beyond demonstrating what one can get by applying a generic solution that uses SGX (SCONE) for training (not supported by Slalom), our CaffeSCONE implementation enables more benchmarking for insight in possible improvements, which are eventually achieved by our main result (hence further optimizing it is not our goal). For one, we show (in Section 4.2) that this baseline approach greatly suffers when the enclave’s memory limit is reached. Specifically, it invokes a native paging mechanism to swap data in and out, which further requires en/decryption. Also, we found that using more threads and cores cannot improve performance." }, { "heading": "1.2 OUR PROPOSED FRAMEWORK: GOTEN", "text": "Secure Outsourcing to GPU By using SGX solely, CaffeSCONE is already orders of magnitude faster than the state-of-the-art cryptographic solutions (SecureML (Mohassel & Zhang, 2017), MiniONN (Liu et al., 2017), Gazelle (Juvekar et al., 2018), and DiNN (Bourse et al., 2018), while only SecureML supports training). Nevertheless, in general, CPU (with or without SGX) is not optimized for costly operations in DNN such as matrix multiplication. Using specialized hardware such as GPU for such computation is a common practice. 
However, SGX enclaves cannot directly leverage a GPU because their security guarantees are bounded within the CPU package and its protected memory. It is unclear how CaffeSCONE (and other works including TensorSCONE, Chiron (Hunt et al., 2018), and MLCapsule (Hanzlik et al., 2018)) can leverage a GPU without trusting it (or losing privacy).

The SGX+GPU mode of our framework, which we call Goten, enables an even more efficient approach. To the best of our knowledge, no existing work has ever explored this possibility for privacy-preserving training. A recent work, Slalom (Tramèr & Boneh, 2019), also uses a GPU, but it only offers prediction privacy. We follow the common practice in the cryptographic privacy-preserving training literature (SecureML, its subsequent work (Wagh et al., 2019), and other prior works (Nikolaenko et al., 2013a;b)), which employs non-colluding servers. Specifically, our framework uses three non-colluding GPU-enabled servers, two of them with a trusted processor. This setup appears to be necessary when the primary goal is to achieve privacy without heavyweight cryptographic tools. In practice, one can employ cloud service providers who are market competitors and value their reputations, or involve a government agency, especially in healthcare/financial settings.

Taking Full Advantage of the Servers We choose to exploit the server-aided setting fully and employ one additional server compared with SecureML. What this server does is "bootstrap" the triplets for secret sharing (Beaver, 1991) across the two servers, a bootstrap which SecureML assumes has been done in advance in an offline phase. Goten thus achieves a higher throughput without worrying that the offline preparation will be "exhausted" when the demand reaches its peak, which is also a hidden problem not addressed by Slalom. It also means Goten provides a "true" outsourcing solution – the time needed for securely outsourcing the job to the untrusted GPU is less than that for computing the job locally by the SGX plus any time needed for pre-computation. If desired, one may easily adapt our framework back to the two-server setting (see Section 2.2).

Dynamic Quantization Scheme We quantize the neural network parameters to a fixed-point number format for efficient cryptographic operations (cf. static quantization in Slalom). This process needs to be implemented carefully for the following reasons. First, the many matrix multiplications in a neural network may scale up the output values quickly, easily exceeding the numeric limit of the data type. Second, there are functions that map values to a small interval (e.g., softmax() and sigmoid()) which require high precision. To avoid these potential accuracy problems, we developed a data-type conversion scheme, again, for enjoying "the best of both worlds," i.e., the benefit of accurate floating-point operations on trusted processors and efficient fixed-point operations on GPUs. Our experiment (Section 4) confirms that our framework preserves high accuracy.

¹ TensorSCONE (Kunkel et al., 2019) employed SCONE with TensorFlow (Abadi et al., 2016a) (a DNN framework like the Caffe we used); unfortunately, it is not open source.

Memory-aware Implementation A naïve solution for overcoming the memory limit of SGX enclaves is to rely on the Linux paging provided by the Intel SGX SDK. However, it imposes a large performance overhead, ranging from 10× to 1000× compared to unprotected programs (Arnautov et al., 2016), for exiting the enclave mode and switching back after processing the untrusted memory.
Hence, in our framework, we take extra measures to reduce the memory footprint by looking into our specific DNN operations and handling any needed memory swapping within the enclave itself." }, { "heading": "1.3 TECHNICAL CONTRIBUTIONS", "text": "Using both SGX and GPU for privacy-preserving training may sound straightforward, but we stress that we tackled a number of issues. To better understand the obstacles, here we revisit how Slalom performs privacy-preserving prediction and why it fails to support training. The core idea of Slalom can be described in simple terms: first apply static quantization on an input x to be protected, then outsource the job of computing f(x + r) to the GPU by hiding x with a blinding factor r in Z_q (where q is a large prime). Since it focuses on linear layers, f is linear and hence f(x + r) = f(x) + f(r). When SGX gets back f(x + r), it performs "unblinding" using f(r) and obtains f(x). For such outsourcing to be possible, f(r) should be precomputed. As simple as it may seem, Slalom needs to minimize the following three kinds of overheads – (i) computations over Z_q performed by the untrusted GPU for the security of the blinding trick, (ii) the communication between the TEE and the untrusted GPU, and (iii) loading the precomputed unblinding factor f(r) into the TEE. Looking ahead, our outsourcing protocol faces even greater challenges regarding (i) and (ii). Slalom addresses (iii) by assumption – it was done in an offline stage before the TEE needs to process any query. If we just ask the SGX to compute it, computing f(r) is of the same complexity as f(x). Another way is to load it on the spot, which is again subject to the memory limit and incurs the unwanted communication overhead. More importantly, it is insecure to ask the untrusted environment to compute f(r).

There are five conceptual challenges that remain unsolved by Slalom regarding training. 1) Dynamic quantization: Slalom explicitly left it as one of the open challenges. 2) DNN weights are fixed at inference time, but not during training. This further complicates the dynamic quantization issue since the weights fluctuate. 3) The pre-computation technique does not apply to training. In more detail, the training function is actually parameterized by a publicly-known weight W, i.e., f_W(x) multiplies x with W. Moreover, the weight changes after (a batch of) operations are processed, which makes f_W(r) useless for a changed weight W′. 4) It is now apparent that Slalom does not protect the model weight W, which should be protected in private training (and "more private" prediction). This is also one of the open challenges left explicitly by Slalom. 5) The last one is a challenge unique to our solution in addressing the other challenges. In their usage, the TEE and GPU are co-located. However, in our setting, we need to propose an outsourcing solution which is efficient enough even though we are subject to an even higher communication overhead between the servers.

Goten is the first framework that preserves the privacy of not only the prediction queries but also the training data and model parameters using GPUs and a trusted environment. Our work achieves the highest efficiency of training and prediction in such a privacy setting. This is also the first work which performs extensive experimental investigations of this possibility. Concretely, in our case study on VGG, we can speed up linear layers by up to 40×, and improve the performance of VGG11 by 8.64×."
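To make the Slalom-style blinding recapped in Section 1.3 concrete, here is a toy sketch of one blinded linear layer over Z_q. It is our own illustration (not Slalom's code); the modulus value is reused from Section 3.3 purely for concreteness.

```python
import numpy as np

q = 2**21 - 9  # a large prime; this particular value is the one Goten adopts later

def blinded_linear(W, x, r, f_r):
    """TEE-side view of outsourcing f(x) = W @ x (mod q).
    r is the blinding factor; f_r = (W @ r) % q is the precomputed unblinding factor."""
    blinded_x = (x + r) % q          # safe to hand to the untrusted GPU
    y_blinded = (W @ blinded_x) % q  # done by the GPU; W is public in Slalom
    return (y_blinded - f_r) % q     # unblinding: f(x + r) - f(r) = f(x)

rng = np.random.default_rng(0)
W = rng.integers(0, q, size=(4, 8))
x = rng.integers(0, q, size=8)
r = rng.integers(0, q, size=8)
assert np.array_equal(blinded_linear(W, x, r, (W @ r) % q), (W @ x) % q)
```

Note how the sketch makes the training obstacles visible: f(r) must be precomputed for a fixed, public W, which is exactly what breaks once W is secret and keeps changing.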
}, { "heading": "2 SYSTEM MODEL", "text": "There are n mutually untrusted data providers who want to jointly train a DNN using their disjoint training data, but they are not willing to reveal their private data to others. They have already agreed on a specific DNN architecture. The corresponding code for the training algorithm is assumed to be genuine after manual or automated verification (Sinha et al., 2016). After training, a querier can obtain prediction results from the resulting DNN and the results are only revealed to the querier." }, { "heading": "2.1 CAFFESCONE", "text": "Fig. 1a shows the system architecture of CaffeSCONE. The server S initializes an enclave E with the specified training and prediction algorithms. The data providers C1, C2, . . . attest E and verify that it is running the intended algorithms. Then they establish a secure channel with E to send it\ntheir training data. E then trains the DNN with the attested algorithm. Once training is done, Ci sends queries to E, which then computes the prediction according to the trained model parameters." }, { "heading": "2.2 GOTEN", "text": "Goten uses GPU to accelerate the computations of the fully-connected and convolutional layers. We introduce two additional non-colluding servers. Fig. 1b illustrates the system architecture.\nServers S0, S1, and S2 are equipped with GPU and SGX-enabled processor. S0 and S1 initialize E0 and E1 respectively. All the enclaves are attested by the other enclave and data providers, then secure channels are built. S0 and S1 take care of DNN computations. S2 provides multiplication triplets for linear computation (independent of the model parameters or the training/prediction data).\nThe training and prediction phases are similar to those in (pure-SGX) CaffeSCONE but with two important differences. To avoid cumbersome data transfer between the servers, data providers only send their data to E0, which is then responsible for forwarding to other enclaves. We also design a new outsourcing protocol (from SGX to GPU) that significantly changes the way of matrix multiplication. We leverage the best of SGX (deriving randomness) and GPU (for batch processing). While we still employ a known trick that protects the secret using additive secret sharing, existing designs assume a general scenario and do not consider the characteristics of SGX and GPU.\nOur goal is to ensure that an adversary cannot learn anything other than the DNN specification and the data of compromised parties. In particular, the model parameters remain private. Any attacker that observes the communication between all servers cannot compromise privacy. An attacker can compromise any subset of the data providers and at most one of the servers, i.e., two servers cannot collude with each other. We allow the attacker to control all the software (including operating system and hypervisor) of the server, but we assume it cannot launch any hardware attack on SGX. Denialof-service or side-channel attacks are also out of the scope. See Appendix C for further discussions.\nCaffeSCONE further guarantees the correctness of both training and prediction. Goten does not provide it as we present it due to page limitation, but we can resort to the trick used by Slalom.\nReducing Non-colluding Servers Our design can be easily modified to use merely 2 servers with some preparation. Looking ahead, the duty of S2 is to produce two random matrices u, v, and the product z = u · v, and distribute these matrices to E0 and E1. 
These enclaves can instead prepare u, v, and z by themselves, so S2 is no longer needed. Similar tricks are also used by SecureML and MiniONN. Since matrix computation in enclaves is slower than that on GPUs, E0 and E1 should pre-compute these matrices before the training/prediction process to prevent stalling the GPUs. Additional storage and preparation are required for removing S2.

Moreover, the third server can also be a group of triplet providers which provide triplets in turn. In this case, these providers can amortize the computation requirement, so they do not necessarily need to be equipped with expensive GPUs or be well-connected with the first two servers." }, { "heading": "3 THE DESIGN OF GOTEN", "text": "" }, { "heading": "3.1 HIGH-LEVEL IDEA", "text": "Matrix multiplication and convolution occupy ≥ 90% of the computation time (see Appendix A.3). It is well known that GPUs can speed up the computation of linear transformations and convolutions by orders of magnitude. We thus outsource linear operations to GPUs, and prevent leaking information to the hosts of the (untrusted) GPUs via additive secret sharing. Still, the CPU needs to convert the data of linear layers into the format used by secret sharing, and then convert the result from the GPU back into the normal format for non-linear layers. We call these procedures pre-processing and post-processing of outsourcing linear operations. If they are not handled properly, the processing time could offset the performance gained from the GPU. In the following, we introduce our tricks for reducing the run-time of pre/post-processing, and present our modified secret-sharing protocol that improves performance.

Moreover, not only the computation in linear layers but also the pre/post-processing suffers from overheads due to paging. We apply memory-aware measures to reduce such overhead. The high-level idea is to let the enclave specify the pieces of memory it is going to use, and read and write that memory without triggering Linux's inefficient paging. This approach is also vital for performance." }, { "heading": "3.2 GPU-POWERED OPERATIONS VIA OUR OUTSOURCING PROTOCOL", "text": "A trivial approach to protect two operands a and b via SGX is to encrypt them to the enclave and ask it to multiply them directly. Yet, this cannot leverage the batch-processing advantage of GPUs and is inefficient for large-scale computation. We aim to design a protocol that leverages the SGX enclave to secure the unprotected computation environment of the GPU, without the enclave performing any expensive decryption beyond the bare minimum, i.e., two decryptions (for the two operands).

We start with the "bare minimum" operations which let the two enclaves E0 and E1 know the secrets a and b. The core design principle is to let the enclaves do what they are good at, i.e., generating cryptographic randomness and using it to one-time-pad some values. With the non-colluding assumption (required by the original protocol (Beaver, 1991)), we choose to exploit it fully and introduce one additional server to establish the triplets involved in computing u ⊗ v = z. The triplet generation can be performed by "the initiating client" offline in existing protocols (Mohassel & Zhang, 2017; Liu et al., 2017); thus, this server can be removed as discussed in Section 2.2.

Fig. 2 describes our protocol for outsourcing the linear operation c = a ⊗ b, where ⊗ can be convolution (so a and b are tensors) or matrix multiplication (for matrices a and b).
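To make the triplet-based outsourcing of Fig. 2 concrete, below is a minimal single-process sketch of the classic Beaver-style protocol it builds on (the textbook variant of Appendix A.4, not Goten's refined one); the seed-derived one-time pads and all network transfers are elided.

```python
import numpy as np

q = 2**21 - 9
rng = np.random.default_rng(1)

def rand_mat(shape):
    return rng.integers(0, q, size=shape)

def outsourced_matmul(a, b):
    (m, k), n = a.shape, b.shape[1]
    # Role of S2: sample the triplet u, v, z with z = u @ v, secret-shared to S0/S1.
    u, v = rand_mat((m, k)), rand_mat((k, n))
    z = (u @ v) % q
    u0, v0, z0 = rand_mat((m, k)), rand_mat((k, n)), rand_mat((m, n))
    u1, v1, z1 = (u - u0) % q, (v - v0) % q, (z - z0) % q
    # Masked operands; u and v act as one-time pads, so e and f leak nothing.
    e, f = (a - u) % q, (b - v) % q
    # Each untrusted GPU computes one share of c from the public e, f and its own shares.
    c0 = (e @ f + e @ v0 + u0 @ f + z0) % q
    c1 = (e @ v1 + u1 @ f + z1) % q
    # Reconstruction: c0 + c1 = e@f + e@v + u@f + z = (e+u)@(f+v) = a@b (mod q).
    return (c0 + c1) % q

a, b = rand_mat((3, 4)), rand_mat((4, 2))
assert np.array_equal(outsourced_matmul(a, b), (a @ b) % q)
```

Goten's refinement, described next, exploits the fact that both enclaves know a and b: they derive the same u and v from a shared seed, so e and f can be formed locally without the interaction this textbook version needs.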
Another important usage of the enclaves is to store the same seed for deriving the random factors across all the servers. This trick forms a confidential channel between two servers very efficiently, without AES or public-key encryption. For example, S2 sends z in the form of z − Rand(r_z) to E0 and E1 via insecure channels, which can be computed quickly. In other words, all instances of "→ Ei : var" in the figure refer to loading the variable(s) var to Ei directly without encryption.

The steps in line 3 of Fig. 2 appear to be working on many more values than the trivial approach of computing a ⊗ b. Our experiments in Section 4.2 confirm that the performance gain can be as large as 40×. Below, we discuss the changes we made over the original triplet-based protocol.

Parallelizable Pre-Processing without Communication Our protocol makes further improvements/refinements over the existing one (in Appendix A.4). Our goal is to compute a ⊗ b by operating over (e, f), a masked version of (a, b). In the original protocol, the shares (⟨a⟩0, ⟨b⟩0) and (⟨a⟩1, ⟨b⟩1) from the two parties (S0 and S1 here) must be masked independently by the corresponding one-time pads (⟨u⟩0, ⟨v⟩0) and (⟨u⟩1, ⟨v⟩1). After this step, they must interact to produce e and f. In our protocol, both enclaves know a and b, so they can use the same seed to derive the same one-time pads u and v (which are in, say, Z_q^m) and obtain e and f without any interaction. This saves half of the pre/post-processing and communication cost, and makes e and f no longer dependent on ⟨a⟩i and ⟨b⟩i. All the steps in line 3 of Fig. 2 can thus be done in parallel. We then further reduce the run-time of such pre-processing roughly by 3/4, i.e., it is 1/4 of the original. Moreover, E0 and E1 no longer need to interact until the last step for result reconstruction, so they can work in parallel.

Reducing Run-time of Share Reconstruction Unlike the original standalone protocol, where each party only needs to learn a share ⟨c⟩i of c but not c = a ⊗ b itself, it is necessary for our enclaves to know c because they need to perform the succeeding operations of the non-linear layers. (In some existing protocols, c is actually recovered "implicitly" via cryptographic means, say, within a garbled circuit.) A naïve way is to let Si encrypt its respective share to the other enclave E1−i. Again, we use the common seed to form a secure channel which lets Si one-time-pad its own share ⟨c⟩i into a ciphertext C1−i for E1−i via the key Ki→1−i derived from the seed. In total, we reduce the pre/post-processing time by roughly 87.5% and halve the communication cost.

Performance Gain for Linear Layers Our outsourcing protocol, while optimized, still imposes overhead in pre/post-processing and communication between the servers. It is instructive to confirm how much we gain. Beyond the obvious reliance on the relative performance of the GPU, the gain turns out to rely crucially on the shapes of the input and weight (specifically, the arithmetic intensity (cud, 2019)). Appendix D gives the theoretical analysis. Fig. 5a shows that convolution gains the expected speedup when the paging overhead is low." }, { "heading": "3.3 DATA TYPES AND DYNAMIC QUANTIZATION", "text": "The triplet trick we used operates over fixed-point numbers in Z_q, while common neural network frameworks operate over floating-point numbers ("floats").
Therefore, Goten has to accommodate the fixed-point setting so that it can attain superior performance as if using floats.

The Choice of Z_q GPUs are slow in modular arithmetic, and off-the-shelf optimized libraries do not support it. To work on Z_q integers, we thus represent them as floats, as Slalom (Tramèr & Boneh, 2019) does. This leaves us only 53 significant bits plus a sign bit to represent the integers in linear layers (where the rest of the (64 − 53 − 1) exponent bits are 0). To make sure the result of the matrix multiplication or tensor convolution a ⊗ b does not overflow, we need q² · k < 2^53, where k is the number of columns of matrix a, or k = C_in · f_w · f_w in convolution. To avoid overflow in Z_q, q should be large; but predicting the value of k beforehand is hard. We thus resort to the heuristics of testing different choices of q over common VGG networks. Based on our experiments, q = 2^21 − 9 is the largest value that does not overflow in almost all (≈ 100%) cases.

Challenges in Quantization To compute x ⊗_f w with floating-point multiplication ⊗_f, we need a quantization scheme to convert floats to fixed-point numbers and vice versa for linear layers. We first quantize x and w into x_Q = Q(x; θ_x) and w_Q = Q(w; θ_w), where θ_x and θ_w are the corresponding quantization parameters. We then use fixed-point multiplication ⊗_{Z_q} to compute y_Q = x_Q ⊗_{Z_q} w_Q, and derive the result by y = Q^{−1}(y_Q; θ_x, θ_w) ≈ x ⊗_f w.

Slalom only supports prediction. Knowing the model, it knows the value distribution of the model parameters. It can then derive the distribution of the input, output, and intermediate values. Picking a static scaling parameter that minimizes the error in prediction is thus relatively easy. In Slalom, Q(·; θ) is always parameterized by θ = 2^8 for all data (inputs and weights) and every computation. In short, static quantization may not pose a big problem in a prediction-only framework.

Dynamic Quantization for Training Slalom clearly states that quantization for training is a challenging problem. During training, the range of the gradient of the weight may change, and hence so may the output and the input of the successive layer. Knowing the value distribution prior to training is hard, so we cannot determine which parameters for Q are good enough to support training.

Beyond what Slalom did, we need dynamic quantization for training, meaning that it can adapt to the changes in the distribution of the model parameters, and hence of the intermediate values and gradients. The (de-)quantization process has to be efficient since it is part of the pre(/post)-processing of our GPU-powered scheme. An inefficient scheme would reduce or even offset the performance gain.

Our Choice SWALP (Yang et al., 2019) is a training scheme which works in a low-precision setting. The forward and backward computations are performed in low-precision fixed-point, but the weights are stored and updated in high-precision floats.

Suppose bit is the number of bits available for the low-precision computation, with a default value of 8. For both the weight and the input, SWALP first finds the maximum absolute value, and then calculates its exponent, i.e., computes exp = ⌊(log2 ∘ max ∘ abs)(data)⌋. Then, it scales up all the values by that exponent so that the new maximum value is roughly aligned to 2^{bit−2}, rounds them stochastically (Gupta et al., 2015), and clips all the values to [−2^{bit−1}, 2^{bit−1} − 1], i.e., data_Q = Q(data, exp) = clip(⌊data · 2^{−exp+bit−2}⌉).
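The quantizer just described can be sketched as follows; this is our PyTorch rendition for exposition (Goten's implementation is in C++), and the handling of an all-zero input is a simplification we make here. The matching scale-down of the product appears right below.

```python
import torch

def swalp_quantize(data, bit=8):
    # exp = floor(log2(max |data|)); scaling by 2^(-exp + bit - 2) aligns the
    # largest magnitude to roughly 2^(bit-2).
    exp = torch.floor(torch.log2(data.abs().max()))
    scaled = data * 2.0 ** (-exp + bit - 2)
    # Stochastic rounding: floor(x + uniform[0,1)) rounds up with probability frac(x).
    rounded = torch.floor(scaled + torch.rand_like(scaled))
    q = torch.clamp(rounded, -2 ** (bit - 1), 2 ** (bit - 1) - 1)
    return q, exp  # exp is kept so the product can be scaled back afterwards
```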
After the computation, the resulting values are scaled down accordingly, i.e., y = y_Q · 2^{exp_x + exp_w − 2·(bit−2)}.

Based on the existing SWALP experiments, its accuracy drops by less than 1% compared to training in full precision for VGG16, while the operands are only 8 bits. Also, finding the maximum absolute value and scaling the values up and down only require 3 linear scans. The scaling can be fused with other pre/post-processing too. Finally, this scheme matches our expectation that it is dynamic, because it samples the maximum value of the weight and input in every iteration. Section 4.2 shows that with this quantization scheme, Goten can train VGG11 to attain high accuracy efficiently." }, { "heading": "3.4 MEMORY-AWARE MEASURES", "text": "When the allocated memory in the enclave exceeds the 128MB limit, it incurs excessive overhead. Our memory-aware mechanism handles most operations in the enclave to mitigate this problem.

A naïve solution is Linux's paging, which is provided by the Intel SGX SDK. However, native paging is known to be inefficient. As reported in SCONE (Arnautov et al., 2016), memory access can be 10−1000× slower compared to the plaintext setting. Eleos (Orenbach et al., 2017) explains that triggering SGX native paging makes the CPU core exit the enclave mode, which is time-consuming. The more memory allocated, the more frequently such expensive operations are invoked.

To prevent these expensive operations, our memory-aware measures restrict the memory usage of the computations in SGX to minimize the chance of native paging. When Goten needs to allocate more than 128MB of memory, it directly encrypts the chunk of memory and evicts it to the untrusted zone, which, unlike native paging, does not leave the enclave mode. When it needs to use memory that is not in the enclave, it loads the chunk of memory into the enclave and decrypts it. Section 4.2 shows that our mechanism speeds up the computation of non-linear layers by 8.72×. For operations inside the enclave, we aim to minimize the memory accesses across the border between the trusted/untrusted zones. In particular, we fuse together operations that use the same set of memory, and handle batches independently in non-linear layers to prevent excessive use of memory.

Eleos (Orenbach et al., 2017) is another mechanism for mitigating the page-fault overhead. It allows the program to handle page faults without exiting the enclave. CoSMIX (Orenbach et al., 2019) further automates the instrumentation for this paging-handling mechanism. However, its implementation was released less than a month ago, so we have not yet compared against or integrated with it." }, { "heading": "4 EMPIRICAL EVALUATION", "text": "For Goten, its SGX part is written in C++ and compiled with Intel SGX SDK 2.5.101.50123, and we use PyTorch 1.2 (pyt, 2019) on Python 3.6.9 to marshal the network communication and the operations on GPU, which run with CUDA 9.0. The C++ code is compiled by GCC 7.4. Also, we reuse some code of Slalom (Tramèr & Boneh, 2019), including their code for cryptographically-secure random number generation and encryption/decryption, and their OS-call-free version of Eigen, a linear-algebra library. All the experiments were conducted at least 5 times, and we report the average of the results. We uploaded our source code to https://github.com/goten-team/Goten.
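The encryption/decryption code mentioned above serves the encrypt-evict/load-decrypt mechanism of Section 3.4. Below is a high-level Python sketch of that concept only – the real implementation is C++ inside the enclave, and the key handling and page granularity here are our simplifications.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class EnclavePager:
    """Enclave-managed paging: encrypt-and-evict instead of Linux's native paging."""
    def __init__(self):
        self.key = AESGCM.generate_key(bit_length=128)  # never leaves the enclave
        self.untrusted = {}  # stands in for memory in the untrusted zone

    def evict(self, page_id, plaintext):
        nonce = os.urandom(12)
        # Authenticated encryption: confidentiality plus integrity of the evicted page.
        self.untrusted[page_id] = (nonce, AESGCM(self.key).encrypt(nonce, plaintext, None))

    def load(self, page_id):
        nonce, ciphertext = self.untrusted[page_id]
        # Raises if the untrusted host tampered with the page.
        return AESGCM(self.key).decrypt(nonce, ciphertext, None)
```

Because the eviction happens inside the enclave's own logic, the CPU core never exits the enclave mode the way native paging forces it to.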
}, { "heading": "4.1 SETUP", "text": "SGX’s Simulation Mode and Hardware Mode Only limited models of Intel CPU are powered by SGX, which can run in the regular hardware mode and enjoy the SGX protection. Intel SGX SDK also provides simulation mode for testing purpose. Its code compilation is almost the same as hardware mode except that i) the program is not protected by SGX, which is fine for our purpose since the DNN training and prediction algorithms are publicly known, and ii) it does not use encryption, which does not affect our experimental timing figures because we handle most of our secret values via one-time pads. In particular, a ciphertext produced by one-time pad is as long as the plaintext it is encrypting, thus, it does not affect the most important overhead – paging.\nIn term of performance, the largest difference between these two mode is related to paging. When the allocated memory in enclaves exceeds its physical limit, the enclaves in hardware mode may suffer much larger overhead compare to native programs. In simulation mode, the overhead is little. Programs in hardware mode has negligible overhead as long as no paging is triggered. Specifically, according to the experimental results in Privado (Tople et al., 2018), the neural networks which do not trigger page-fault do not have any performance overhead.\nExperiemental Environment for CaffeSCONE and Goten We evaluate the performance of CaffeSCONE on a computer (which supports SGX hardware mode) equipped with Intel i7-7700 Kaby Lake Quad-cores 4.3GHz CPU and 16GB RAM, using Ubuntu 18.04. For reproducibility and for the ease of setting up the experiment, we evaluate the performance Goten on 3 Google Cloud VMs. We specify all VMs to equip CPU with Sky Lake, the latest microarchitecture that can be used for Google Cloud’s VM. Unfortunately, all CPUs on Google VMs do not support Intel SGX’s hardware mode. Also, all these machines are equipped with 32GB RAM and a Nvidia V100 GPU.\nCalibration on Experiment Results Given the constraint, our experiments on the environment we used for Goten would underestimate the performance of programs running in SGX simulation mode because the CPUs have lower clock rate and older microarchitecture compared to Intel i7-7700.\nTo make the comparison between these two frameworks fair, we calibrate Goten’s CPU runtime to CaffeSCONE’s CPU runtime. We measure the runtime of the non-linear layers in the two aforementioned environments. We found that the environment we used for Goten would overestimate the runtime on CaffeSCONE’s CPU. Hence, we decide to scale down the runtime of most timeconsuming non-linear layers in Goten according to the data collected. The scaling factor for ReLU is 0.96, for Batchnorm is 0.56, for Maxpool is 0.85.\nSince the runtime in linear layers is related to the transfer between CPU and GPU and over the network, it is hard to calibrate the runtime of CPU solely. Also, our data showed that the pre/postprocessing CPU time is similar across hardware mode and simulation mode. So we do not calibrate the runtime of linear layers. The results in Fig. 4 and Tables 1 and 2 are calibrated by this method.\nChoice of Dataset and Architecture: CIFAR-10 and VGG11 Both of Goten and CaffeSCONE are evaluated on CIFAR-10, a common dataset for benchmarking the accuracy. We pick a VGG architecture with 11 layers and batch normalization layers because it is a typical DNN that can attain high accuracy on CIFAR-10. 
Also, it is small enough to fit within the memory limit of CaffeSCONE.

4.2 PERFORMANCE ON VGG11

[Figure 3: Training Throughput of CaffeSCONE. x-axis: number of cores (1, 2, 4, 8); y-axis: throughput (images/s); series: batch size 128 and 512, each in HW mode and simulation mode.]

[Figure 4: Accuracy Convergence in VGG11. x-axis: running time (s); y-axis: test accuracy; series: CaffeSCONE (batch size = 128) and Goten (batch size = 512).]

Throughput of CaffeSCONE First, we show the training throughput of CaffeSCONE in Fig. 3, by which we emphasize that using more CPU cores cannot improve the performance of such a pure-SGX approach. Moreover, we benchmark the throughput with batch sizes of 128 (a common choice in the plaintext setting) and 512 (the setting we adopted for Goten). We confirmed that the former has better performance for VGG11 in CaffeSCONE, and thus we adopt it in the later experiments. Note that we adopt a batch size of 512 in Goten because Goten has better performance with it.

Training Throughput of Goten Table 1 illustrates the speedup of Goten compared to CaffeSCONE in the training phase. For the experimental settings, Goten ran in simulation mode on Google VMs and employed the memory-aware measures to reduce the overhead of paging. Moreover, we rescale the running time of the non-linear layers based on the running time in the real SGX setting, i.e., the hardware mode on the experimental machine equipped with the Intel i7-7700.

According to the experimental results on non-linear layers, programs running in the real setting are faster than those on Google VMs. Hence, we believe that linear layers in the real setting are also faster, as both kinds of layers have similar operation and (linear) access patterns.

In conclusion, Table 1 shows that Goten outperforms CaffeSCONE by about 8× on both linear and non-linear layers in VGG11, and by 8.6× on the whole network.

Convergence on Quantized Neural Networks Furthermore, Fig. 4 demonstrates how the performance speedup leads to a higher convergence rate. Since the training methods for CaffeSCONE and Goten are different – the former adopts the most common approach, which uses plain single-precision floats, whereas the latter employs the dynamic quantization scheme SWALP (Section 3.3) – it is natural to wonder whether Goten can attain a higher convergence rate. Our experimental results are affirmative. We record the convergence trajectory of both training methods, captured in an unprotected setting on GPU, and then rescale the time axis according to the timing from Table 1. The results show that Goten can converge much faster.

To better emphasize our advantage in the convergence rate, Table 2 lists the speedup (at different levels of accuracy), which ranges from 4.93× to 11.78×. It shows that our quantization scheme does not have a significant impact on training, and it attains a high accuracy in a shorter time. However, Goten still cannot attain 0.9 accuracy after 200 epochs, while CaffeSCONE can.

Micro-benchmarks: Speedup of Our GPU Outsourcing Protocol As our main contribution is the performance speedup on linear layers, we further isolate their performance gain. Fig. 5 shows the speedup and the arithmetic intensity (explained in Appendix D) of each convolution layer present in VGG with CIFAR-10.
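As an aid to reading Fig. 5 (discussed next), one can estimate the arithmetic intensity of a convolution layer as FLOPs per byte of data moved. The formula below is the common textbook estimate and is our assumption for illustration – Appendix D's exact derivation is not reproduced here – and it assumes 'same' padding so the output keeps the input's spatial size.

```python
def conv_arithmetic_intensity(batch, c_in, c_out, h, w, f=3, bytes_per_elem=4):
    # FLOPs of a direct convolution: one multiply and one add per filter tap
    # for every output element.
    flops = 2 * batch * c_out * c_in * f * f * h * w
    # Bytes moved once each: input activations + weights + output activations.
    moved = bytes_per_elem * (batch * c_in * h * w
                              + c_out * c_in * f * f
                              + batch * c_out * h * w)
    return flops / moved
```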
The shapes correspond to the batch size, the number of input channels, the number of output channels, and the height and width of the input images. The filter size of all layers is 3 × 3. The results illustrate that Goten is most beneficial to neural networks with high-arithmetic-intensity linear layers.\nFig. 5a shows the results in simulation, where the paging overhead is negligible as explained. The results confirm our analysis in Appendix D: the higher the arithmetic intensity of a convolution layer, the higher the performance gain. Furthermore, to obtain a performance gain in our experimental environment, the arithmetic intensity should be at least 250. We also notice that the layer with image size 2×2 has a huge performance gain despite its relatively low arithmetic intensity. We suspect that this is because Caffe cannot efficiently handle inputs with a small image size on the CPU.\nFig. 5b shows the estimated speedup in hardware mode, where the paging overhead is significant. The estimate is derived from the same setting as Table 1. The results show a much higher speedup when there are small images and many input channels, and the speedup is not proportional to the arithmetic intensity. We suspect that Caffe’s convolution implementation amplifies the paging overhead in this situation." }, { "heading": "5 CONCLUSIONS", "text": "We proposed a new secure neural network framework using trusted processors. Our framework not only outperforms cryptographic solutions by orders of magnitude, but also resolves the memory-limit issues of the existing state-of-the-art trusted-processor approach (Ohrimenko et al., 2016). We make privacy-preserving training, prediction, and model outsourcing for very deep neural networks more deployable in practice by advancing the frontier of SGX-based machine learning. For the first time, we can run a very deep neural network with privacy and without any memory issues." }, { "heading": "A PRELIMINARIES", "text": "" }, { "heading": "A.1 NEURAL NETWORKS", "text": "A neural network gains its predictive power by imitating biological neural networks (Goodfellow et al., 2016). A (feedforward) neural network can be represented by a sequence of transformations.\nThis paper focuses on supervised learning — every training example is a data point x associated with a label y, and the neural network tries to learn the relationship between x and y. Prediction in supervised learning outputs a label for a query x.\nWe refer to the computation for prediction as forward-propagation. For training, gradient descent is usually employed, where the computation for updating the parameters is called backward-propagation." }, { "heading": "A.1.1 COMMON LAYERS IN NEURAL NETWORKS", "text": "Roughly, transformations in a neural network can be divided into two categories: linear transformations and non-linear transformations.2\nFor the linear transformations, we have two kinds of layers. i) Fully-connected layer (a.k.a. dense layer) — It multiplies the input by a weight matrix (for training or prediction; see the sketch at the end of this subsection). ii) Convolutional layer — It is similar to the convolution operation except that it rotates the kernels by 180 degrees. The inputs, outputs, and kernels are tensors, usually 3-dimensional or 4-dimensional.\nFor the non-linear transformations, we have — i) Activation layer, which applies a non-linear function to each element to mimic the impulse activation of biological cells; ii) Pooling layer, which aggregates values in a group by applying a function such as max() or mean(); iii) Output layer, which outputs the results in the prediction phase. In the training phase, it computes a loss value measuring the error between the ground truth and the neural network’s prediction."
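The following is a minimal sketch of the fully-connected layer just described — a forward pass y = xW and the corresponding backward pass used in back-propagation. The shapes and values are illustrative assumptions, not anything from the paper's experiments.

```python
# Sketch: forward and backward passes of a fully-connected layer.
import numpy as np

def fc_forward(x, W):
    # x: (batch, in_dim), W: (in_dim, out_dim) -> y: (batch, out_dim)
    return x @ W

def fc_backward(x, W, dy):
    # Given dy = dL/dy, return gradients w.r.t. the input and the weights.
    dx = dy @ W.T          # (batch, in_dim)
    dW = x.T @ dy          # (in_dim, out_dim)
    return dx, dW

x = np.random.randn(4, 8)
W = np.random.randn(8, 3)
y = fc_forward(x, W)
dx, dW = fc_backward(x, W, np.ones_like(y))
```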
}, { "heading": "A.1.2 COMPUTATIONAL ASPECTS", "text": "The linear transformations are the most computationally intensive part (Jia, 2014) when we compute in plaintext. The same applies to the SGX setting. Looking ahead, we will leverage the GPU to accelerate the computation of linear layers, and we will further outsource the linear transformations to multiple servers via additive secret sharing (Section A.4) to improve efficiency.\nIn an SGX enclave, the non-linear transformations can be processed in plaintext efficiently. These non-linear transformations basically aggregate the outputs of the previous layer and/or apply element-wise operations. A simple but efficient way to handle them is to load the entries of the previous layer into the enclave cache memory one by one in a deterministic order and output the results once enough inputs have been received. In this way, the data remains confidential and the memory access pattern is hidden.\n2Some unusual layers may appear in some architectures but can be easily implemented using the principles we introduced.\nIn contrast, without SGX, cryptographic solutions either use garbled circuits, resulting in high computation and communication overhead (SecureML (Mohassel & Zhang, 2017), MiniONN (Liu et al., 2017), and Gazelle (Juvekar et al., 2018)), restrict the choice of the activation and pooling layers (CryptoNet (Gilad-Bachrach et al., 2016)), or dramatically reduce the size of the neural networks (DiNN (Bourse et al., 2018)). As a result, these solutions are not compatible with many well-developed neural network architectures such as AlexNet (Krizhevsky et al., 2017), VGG16/19 (Simonyan & Zisserman, 2015), etc.\nA.1.3 VERY DEEP CONVOLUTIONAL NETWORK (VGG)\nThis is a family of very deep neural networks with 9–19 parameterized layers (Simonyan & Zisserman, 2015) and extraordinary performance on object classification. Their convolution layers share a similar setting: all convolutions use 3 × 3 filters followed by ReLU, and some are further followed by 2 × 2 max-pooling layers. They are commonly used neural networks, and hence it is worth studying how to improve their performance in the privacy-preserving setting.\nA.2 INTEL SGX\nSGX is the latest Intel hardware-assisted secure remote computing design. Since its seventh-generation CPUs (Intel, 2017), Intel has introduced a set of instructions and a hardware design with which an enclave can be allocated in the trusted hardware, protecting the privacy and integrity of the data processed within it." }, { "heading": "A.2.1 SECURITY ENCLAVES AND MEMORY LIMIT", "text": "In SGX, enclaves are used as secure containers. When the secure software requests a secure container, an enclave is loaded with the code and the data specified by the secure software. The enclave isolates itself from the rest of the computer. The data owner can then verify the integrity of the enclave through the standard SGX remote attestation. Inside an enclave, all data is stored in main memory in an encrypted and authenticated form when the CPU core is not processing it. When specific data is about to be processed, it is loaded into memory caches dedicated to a CPU core with SGX protection enabled and then decrypted.
Although Intel claims that the current SGX supports up to 128MB of memory, at most 90MB is usable according to Shaon et al. (2017)." }, { "heading": "A.2.2 GENERIC APPLICATION", "text": "The trusted hardware is directly applicable to secure computation. Imagine that a data provider holding some sensitive data wants to perform some secure computation on a remote server. The data provider does not trust the server owner and thus wants the server owner to learn only the pre-defined output. The trusted processor is an efficient solution satisfying these requirements: data can be processed in plaintext inside the trusted processor but remains secret and tamper-proof, even to the server owner. Of course, the data owner needs to trust both the software provider and the hardware manufacturer." }, { "heading": "A.3 GRAPHICS PROCESSING UNIT", "text": "A GPU consists of thousands of cores that can perform similar instructions in parallel. If an algorithm is parallelizable, a GPU can increase its computational performance by orders of magnitude.\nThe most computationally intensive part of neural networks can be transformed into matrix computation, which is well-suited for GPUs. Jia (2014) showed that fully-connected layers and convolutional layers occupy over 95% of the computation time. Abdelfattah et al. (2016) concluded that GPUs can speed up matrix multiplication by ≥ 10× compared to multi-core CPUs." }, { "heading": "A.4 TWO-PARTY COMPUTATION VIA SECRET SHARING", "text": "Two servers P0 and P1, holding private inputs a, b ∈ Zq respectively, where q is a prime, can let a third server learn c = a + b ∈ Zq without revealing a or b as follows. P0 chooses a uniformly random a′ ∈ Zq, sends 〈a〉1 = a′ to P1, and keeps 〈a〉0 = a − a′. P1 does a similar job: it samples b′, sends 〈b〉0 = b′ to P0, and keeps 〈b〉1 = b − b′. No one reveals a or b in this process. Then, P0 computes 〈c〉0 = 〈a〉0 + 〈b〉0 and P1 computes 〈c〉1 = 〈a〉1 + 〈b〉1. At this point, P0 and P1 both hold (additive) secret shares of c = a + b. Any third party with both shares {〈c〉i} can learn c = 〈c〉0 + 〈c〉1.\nBeaver (1991) generalized the above method to let P0 and P1 compute secret shares of c = a · b as follows. Suppose P0 and P1 have already pre-computed additive secret shares of u, v, and z, where u · v = z; namely, Pi has 〈u〉i, 〈v〉i, and 〈z〉i. Pi masks 〈a〉i and 〈b〉i via 〈e〉i = 〈a〉i − 〈u〉i and 〈f〉i = 〈b〉i − 〈v〉i. The parties then exchange 〈e〉i and 〈f〉i to reconstruct e and f, which mask a and b respectively. Finally, with e and f, they compute 〈c〉i = −i · (e · f) + f · 〈a〉i + e · 〈b〉i + 〈z〉i locally, where 〈c〉0 + 〈c〉1 = ab. This technique can be further generalized to matrix addition/multiplication by replacing Zq with Z_q^{m×k} or Z_q^{k×n}. Indeed, this technique can be applied to any linear operation, including convolution.\nUsing this protocol as-is requires two rounds of communication (for recovering (e, f)) and pre-computation (of shares of (u, v, z)). Looking ahead, we will illustrate how to reduce the communication cost and the pre-computation, and hence improve the throughput.\nIn the rest of the paper, we use Rand(r_x) to denote a function that takes a random seed r_x and outputs a random element x′ ∈ Zq. The (additive) secret share of x held by Pi can then be written as 〈x〉i = Gen_i(x, r_x) = i · x + (−1)^i · Rand(r_x)."
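The following is a minimal, self-contained sketch of the sharing and Beaver-triple multiplication described above, over Zq for a toy prime q. The modulus and test values are illustrative assumptions; the masking and the local combination formula follow the equations in the text.

```python
# Sketch: additive secret sharing and Beaver-triple multiplication mod q.
import random

q = 2_147_483_647  # a prime modulus (toy choice: 2^31 - 1)

def share(x):
    """Split x into additive shares <x>_0 + <x>_1 = x (mod q)."""
    x1 = random.randrange(q)
    return (x - x1) % q, x1

def reconstruct(s0, s1):
    return (s0 + s1) % q

# Pre-computed Beaver triple (u, v, z) with z = u*v (mod q), shared to P0, P1.
u, v = random.randrange(q), random.randrange(q)
u_sh, v_sh, z_sh = share(u), share(v), share((u * v) % q)

def beaver_mul(a_sh, b_sh):
    # Local masking: <e>_i = <a>_i - <u>_i, <f>_i = <b>_i - <v>_i.
    e_sh = [(a_sh[i] - u_sh[i]) % q for i in (0, 1)]
    f_sh = [(b_sh[i] - v_sh[i]) % q for i in (0, 1)]
    # One exchange reconstructs the masked values e and f in the clear.
    e, f = reconstruct(*e_sh), reconstruct(*f_sh)
    # <c>_i = -i*(e*f) + f*<a>_i + e*<b>_i + <z>_i  (mod q)
    return [(-i * e * f + f * a_sh[i] + e * b_sh[i] + z_sh[i]) % q
            for i in (0, 1)]

a, b = 123456, 789012
assert reconstruct(*beaver_mul(share(a), share(b))) == (a * b) % q
```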
}, { "heading": "B RELATED WORK", "text": "" }, { "heading": "B.1 CRYPTOGRAPHIC SOLUTIONS", "text": "Gilad-Bachrach et al. (2016) proposed CryptoNet. It exploits non-linear functions supported by leveled homomorphic encryption (LHE) and parallel computation to improve the efficiency of neural network evaluation. However, it only supports limited activation function (x2 or sigmoid(x)) and pooling function (average pooling). The experiment results of CryptoNet showed that it is roughly 1000× slower than running a similar neural network in plaintext. Subsequent works (Mohassel & Zhang, 2017; Liu et al., 2017; Juvekar et al., 2018) improve or extend CryptoNet in various dimensions. SecureML (Mohassel & Zhang, 2017) uses two noncolluding servers to support both training and prediction for neural networks, but it is slower than CryptoNet for prediction. MiniONN (Liu et al., 2017) achieves higher prediction accuracy than SecureML for the same network structure. It is also 5× faster than SecureML for small neural networks via the single instruction multiple data (SIMD) batching technique on LHE.\nTo the best of our knowledge, Gazelle (Juvekar et al., 2018) is the state-of-the-art cryptographic approach in terms of latency. It performs much better than CryptoNet/MiniONN by delicately choosing the HE scheme with optimized parameters to fit the hardware architecture. Gazelle has much lower latency than MiniONN/SecureML as its plaintext space is at most 20 bits. However, it is still unclear whether Gazelle harms the accuracy, which is not stated in their paper (Juvekar et al., 2018).\nDiNN (Bourse et al., 2018) follows an approach similar to CryptoNet’s. It does not require user interaction during the evaluation. To the best of authors’ knowledge, it is the state-of-the-art pureHE-based approach. Yet, as stressed in the DiNN paper (Bourse et al., 2018), they aim to show that a pure-HE approach is possible and can outperform CryptoNet, at the cost of lower accuracy.\nIn general, all frameworks mentioned above use expensive cryptographic primitives, such as LHE, garbled circuits, and oblivious transfer, during (training and) prediction, resulting in huge data and computation overheads. Also, using these primitives usually requires multiple rounds of communication between different parties.\nAs a final remark, there are cryptographic solutions that protect the privacy of (mostly the prediction phase of) other machine learning algorithms. A non-exhaustive list includes decision trees or random\nforests (Tai et al., 2017; Wu et al., 2016; Bost et al., 2015), logistic regression (Slavkovic et al., 2007; Bost et al., 2015), support vector machine (Vaidya et al., 2008; Yu et al., 2006), collaborative filtering (Tang & Wang, 2017; Zhao & Chow, 2015), and k-means clustering (Bunn & Ostrovsky, 2007; Jagannathan & Wright, 2005). They are conceivably less powerful than a deep neural network." }, { "heading": "B.2 TRUSTED EXECUTION ENVIRONMENT", "text": "Memory Limit Ohrimenko et al. (2016) proposed data-oblivious machine learning algorithms using SGX for training and prediction. Their work also defends against some potential side-channel attacks using oblivious operations. However, their algorithms cannot handle any layer of size that exceeds the amount of usable memory (90MB) in an enclave.\nThe memory limit has been a huge drawback of SGX. Different efforts have been devoted to resolving this issue. Shaon et al. (2017) proposed SGX-BigMatrix. 
It supports operations on matrices whose size exceeds 90MB, but it still has very high overhead compared to optimized libraries for unprotected matrices. Linux’s SGX supports memory oversubscription for enclaves, but it introduces overhead for the incurred paging, which is widely reported (Weichbrodt et al., 2018; Chakrabarti, 2017; Harnik & Tsfadia, 2017; Brenner et al., 2016; Arnautov et al., 2016). The official Intel forum even reports examples of 10× to 350× overheads (Feng, 2017). Moreover, based on our experiments, Linux’s paging introduces up to 24× runtime overhead on matrix multiplications. Orenbach et al. (2017) proposed Eleos, a memory-handling mechanism to reduce the performance overhead due to SGX’s memory page faults. Its main idea is to avoid exiting the enclave when a page fault happens, because exiting is an expensive operation. The experimental results showed that it can reduce the paging overhead by 5×, and its successor CoSMiX (Orenbach et al., 2019) shows that the paging overhead can be further reduced to 1.3–2.4×. We assume Goten and CaffeSCONE employ this memory-handling mechanism to handle paging, and we simulate the performance unaffected by paging by using the simulation mode from the Intel SGX SDK.\nTEE-based Approaches and TEE+GPU-based Approaches A few proposals rely on TEEs (Hunt et al., 2018; Tople et al., 2018) or TEEs and GPUs (Volos et al., 2018; Tramèr & Boneh, 2019).\nChiron (Hunt et al., 2018) assumes the data provider shards the training data into n pieces for n enclaves, such that each shard fits in enclave memory. The authors left the policy for managing insufficient enclave memory as future work. Most importantly, Chiron requires new SGX features that are not available yet. Volos et al. (2018) proposed Graviton, an architecture for supporting TEE on GPU with the help of SGX, which supports neural network computation in particular, with near-native performance compared to an untrusted GPU. However, they assume that an attacker cannot physically steal information from the GPU cores, which is questionable because GPU cores, unlike SGX, are not designed for trusted operation and their security has not been well examined.\nBahmani et al. (2017) proposed an SGX-based framework for general-purpose secure multi-party computation. On one hand, our work can be viewed as realizing a specific functionality under their framework at a conceptual level. On the other hand, their general-purpose treatment does not take into account the characteristics of neural network computations. More importantly, we provide important technical contributions, and we use an untrusted GPU to further accelerate computations. Kunkel et al. (2019) proposed TensorSCONE to port another popular DNN framework, TensorFlow, to SCONE. Our baseline approach is similar to this framework, but we make our implementation public for benchmarking.\nPrivado (Tople et al., 2018) allows a model owner to outsource privacy-preserving DNN inference to an SGX-enabled cloud server. It guarantees that even a powerful cloud that sees the SGX enclave memory access pattern does not learn the model parameters or the user query. Compared with our solution, Privado does not handle the training phase, nor does it leverage untrusted hardware like GPUs for acceleration.\nTramèr & Boneh (2019) recently proposed Slalom for verifiable and private inference using a trusted enclave that also outsources some computation to a GPU. Their approach heavily relies on the assumption that the server knows the model’s parameters.
It is thus not applicable to privacy-preserving training." }, { "heading": "B.3 DIFFERENTIAL PRIVACY", "text": "Another line of research focuses on achieving differential privacy (Dwork, 2006; Dwork et al., 2006a;b). Abadi et al. (2016b) propose a differentially private stochastic gradient descent algorithm for deep learning. Shokri & Shmatikov (2015) propose collaborative learning, in which data owners jointly train a deep neural network by exchanging differentially private gradients through a parameter server instead of directly sharing local training data. Although Shokri & Shmatikov (2015) makes it hard to tell whether a specific record exists in the victim’s private training set, it does not prevent an adversary from learning macro-features of the training set. Phong et al. (2018) showed that the parameter server in Shokri & Shmatikov (2015) can extract information about the training set, and proposed to use additive HE to eliminate the leakage during training." }, { "heading": "C SECURITY ANALYSIS", "text": "" }, { "heading": "C.1 PROTECTION SCOPE", "text": "From the perspective of the querier, no one else can learn the prediction query and the corresponding result. For the model, the most valuable information includes the parameters of the neural network (e.g., the weights and biases of the convolutional filters and fully-connected layers), the accuracy on the training dataset, and the intermediate results. These explicit parameters of the model are not revealed to any data provider or server (with protection against the side-channel attacks described shortly).\nWe aim for a practical framework instead of a perfectly leakage-free solution. Following the literature (Juvekar et al., 2018; Mohassel & Zhang, 2017; Liu et al., 2017; Gilad-Bachrach et al., 2016), we do not protect hyper-parameters such as the learning rate, the number of layers, the size of each layer, etc. These could be inferred by the querier by timing the interaction with the server, or by the server from the memory access pattern. One may hide these by adding dummy storage and computation, which is bound to be inefficient.\nSide-channel leakage is also out of our protection scope. Specifically, the cache-line access pattern may reveal information about the data (Ohrimenko et al., 2016). In our case, the max-pooling layers and the argmax function in the output layer would be exploitable because their branching depends on the intermediate results. Yet, the existing defense (Ohrimenko et al., 2016) can easily be employed by changing the assembly code of max() in the enclave, and the computational overhead is less than 2% (Ohrimenko et al., 2016).\nModel extraction attacks (Tramèr et al., 2016; Fredrikson et al., 2015) can be launched in a black-box environment, namely, the attacker knows nothing about the model parameters and its architecture but can query the model, whereby he/she duplicates the functionality of the model. We can easily employ two effective mitigations. First, the training data providers can limit the query rate or set up a query quota by consensus. Second, we only return the labels of the evaluation results, instead of the confidence values (the values of the output vector), since they are the main attribute abused by attackers (Tramèr et al., 2016)." }, { "heading": "C.2 REALIZING THE NON-COLLUDING ASSUMPTION", "text": "To perform training, companies may join forces and have the motivation to dedicate a better network line between them.
A similar argument also applies to the setting in which one of the parties is generally trusted not to collude with the others, say, the government. To provide machine learning as a service in the application context of electronic healthcare (e.g., precision medicine), the Department of Health and Human Services (or a similar body) can take on this role.\nTechnically, to ensure that at least one server remains secure, we resort to the standard practice from the security community on how to fortify a selected machine. There are many possible ways, including but not limited to: i) using a hardware and software configuration different from the other server’s (so a vulnerability in one platform will not lead to an easy compromise of both machines); ii) placing it within the network-level security perimeter behind the DMZ; and iii) limiting access to the machine instead of making it public-facing. Even when a machine is compromised by an insider, we can further enforce access control, which at least holds the entity with special access permissions accountable when the access rights are abused, assuming the security of the audit log of the access-control system." }, { "heading": "C.3 OPERATIONS INSIDE ENCLAVES", "text": "With the use of SGX, the security guarantee is easy to see. In our construction, the data provided by the data providers is either stored in the enclave or sealed on server storage. When data is stored inside the enclave, by the security guarantee of SGX, no other party is able to gain any information. When data is stored outside the enclave, we seal the data with an authenticated encryption (AES-GCM) (Costan & Devadas, 2016), which protects the confidentiality and integrity of a sealed block. We also authenticate the meta-data, in particular the identity and the number of executions of the block, which disallows arbitrary manipulation of the input data by mix-and-match.\nApart from storage, we also perform execution over the data. In our framework, all executions are data-independent — the executions of the neural networks have no branching dependent on the data or model parameters. We verify that our implementation is data-oblivious using PinTool (Luk et al., 2005), a tool for analyzing execution traces, to make sure the trace is the same given any model parameters, training data, and prediction queries. The execution view observed by other parties can thus be simulated without the actual data.\nData-oblivious Operations The host of an enclave can observe the memory access pattern, even at the L2 cache level (Brasser et al., 2017). Hence, we need to ensure that algorithms running in enclaves are data-oblivious, meaning that the trace of executed CPU instructions should be the same even given different input data.\nFunctions involving branching, e.g., max and min, may raise concerns about data-obliviousness because some compiler optimizations may skip the write instruction if the computed value equals the original value. For example, the write in y = max(y, 0) may be skipped if y is indeed larger than 0.\nFortunately, we can always use vectorization techniques to avoid such situations. With vectorization techniques, e.g., SSE and AVX, the vectorized read and write instructions will not be skipped, since they are atomic and hence involve no branch depending on the data value. Even better, such vectorization techniques are usually automatically employed by common compilers, e.g., GCC, with proper flags, e.g., -march=native. All we need to do is manually inspect the compiled assembly code or use trace analysis tools, e.g., PinTool (Luk et al., 2005), for automatic verification."
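As a small illustration of the branchless style argued for above, the snippet below computes max(y, 0) with arithmetic only, so every element is read and written regardless of its value. This is only a numpy stand-in for the idea; the actual guarantee in the paper comes from SSE/AVX instructions in compiled code, not from Python.

```python
# Sketch: a branchless ReLU, max(y, 0) = (y + |y|) / 2. No per-element
# branch, and every output entry is unconditionally written.
import numpy as np

def oblivious_relu(y):
    return (y + np.abs(y)) * 0.5

print(oblivious_relu(np.array([-1.5, 0.0, 2.0, -0.3])))  # [0. 0. 2. 0.]
```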
}, { "heading": "C.4 OUTSOURCING TO GPUS", "text": "The only cryptographic primitive we use in the outsourcing protocol is additive secret sharing, which is commonly used in the non-colluding server setting (Wang et al., 2014; Mohassel & Zhang, 2017) for privacy-preserving machine learning. It is also not uncommon in the bigger context of secure multi-party computation (Hohenberger & Lysyanskaya, 2005; Chow et al., 2009; Demmler et al., 2015). Its confidentiality holds in the strong information-theoretic sense against any adversary without enough shares. This fits the non-colluding server setting well.\nHere, we prove that our modified triplet multiplication is secure, namely, none of the servers S0, S1, and S2 can gain any information about the contents of a, b, or c = a ⊗ b (the servers may learn their dimensions). Due to the non-colluding assumption, we only need to prove that the knowledge of each individual server can be reduced to that of its counterpart in the original protocol.\nS2 knows u and v, which are random tensors/matrices and contain no information about a, b, or c. Also, z = u ⊗ v, derived from u and v, contains no extra information. At a high level, the extra knowledge of S0 and S1 leaks no meaningful information because it is all one-time-padded.\nComparing the original protocol described in Section A.4 with our protocol described in Fig. 2: in our protocol, S0 has the extra knowledge 〈z〉1, c1 + K1→0, and K0→1. We now apply the game-hopping technique to prove that our scheme is reducible to the original protocol. Firstly, since S0 does not know 〈z〉0 in our protocol, we can replace 〈z〉1 by 〈z〉0. Then, since S0 also does not know K1→0, c1 + K1→0 can also be replaced by a random matrix/tensor. Likewise, K0→1 is just another random matrix/tensor, so it can be replaced trivially. Now, S0 has the view of S0 in the original protocol plus two random matrices/tensors.\nLikewise, S1 has the extra knowledge c0 + K0→1 and K1→0. Applying the same principles as for analyzing S0, its view can be reduced to that of S1 in the original protocol." }, { "heading": "D ANALYSIS FOR PERFORMANCE GAIN FOR LINEAR LAYERS", "text": "We first analyze the case of fully-connected layers. Assume x ∈ Z_q^{m×k} is the input, w ∈ Z_q^{k×n} is the weight, and y ∈ Z_q^{m×n} is the output. We find that we should maximize min(m, k, n). Since m, the batch size, is usually small compared to k and n, it is better for it to be large.\nWe should minimize the run-time ratio of our GPU-powered matrix multiplication scheme to the vanilla CPU scheme. The forward computation in a fully-connected layer is x ⊗ w = y. The run-time of our GPU-powered scheme is t_pre-proc · (m·k + k·n) + (t_post-proc + t_comm) · (m·n) + t_gpu-op · (m·k·n).\nThe backward computation computes dx = dy ⊗ w^T and dw = x^T ⊗ dy, where dx, dw, and dy are the gradients of x, w, and y respectively, and they are of the same size as their counterparts. Similar to the analysis above, the total run-time of both forward and backward computations is t_gpu-scheme = (2·t_pre-proc + t_post-proc + t_comm) · (m·k + k·n + m·n) + 3·t_gpu-op · (m·k·n), while the run-time of the vanilla CPU scheme is t_cpu-scheme = 3·t_cpu-op · (m·k·n). We denote t_extra = 2·t_pre-proc + t_post-proc + t_comm. Finally, the run-time ratio of these two schemes is\nt_gpu-scheme / t_cpu-scheme = (t_extra / t_cpu-op) · (1/m + 1/n + 1/k) + t_gpu-op / t_cpu-op.\nThe last term matches the intuition that the GPU governs the performance gain. The pre/post-processing and communication time also play an important role if 1/m + 1/n + 1/k is large. Note that the inverse of 1/m + 1/n + 1/k is also known as the arithmetic intensity (cud, 2019).
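To make the trade-off concrete, here is a toy evaluation of the ratio above. The per-operation costs t_extra, t_cpu-op, and t_gpu-op are made-up assumptions purely for illustration; only the formulas come from the text.

```python
# Sketch: run-time ratio and arithmetic intensity for an (m x k) by
# (k x n) matrix multiplication, following the formula derived above.
def runtime_ratio(m, k, n, t_extra, t_cpu_op, t_gpu_op):
    inv_intensity = 1.0 / m + 1.0 / n + 1.0 / k
    return (t_extra / t_cpu_op) * inv_intensity + t_gpu_op / t_cpu_op

def arithmetic_intensity(m, k, n):
    return 1.0 / (1.0 / m + 1.0 / n + 1.0 / k)

# Example: batch size m = 512, layer widths k = n = 4096; assume the
# outsourcing overhead is 100x a CPU multiply-add and the GPU op is 30x
# faster than the CPU op. A ratio below 1 means the GPU scheme wins.
print(arithmetic_intensity(512, 4096, 4096))                   # ~409.6
print(runtime_ratio(512, 4096, 4096, 100.0, 1.0, 1.0 / 30.0))  # ~0.28
```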
The analysis of convolution layers follows the same principle but is more involved. If we assume the image sizes of the input and output are the same, we obtain a result similar to that of fully-connected layers by replacing m, k, and n with C_out · f_h · f_w, C_in · f_w · f_h, and B · I_h · I_w, respectively." } ]
2019
GOTEN: GPU-OUTSOURCING TRUSTED EXECUTION
SP:e95b1fa1f8d1b66ef0fdfb6c7aa56983f83e2277
[ "This paper presents a mainly theoretical argument comparing the expressivity of model-free and model-based RL methods contrary to analysis in the past which usually relies on sample complexity. They construct a family of MDPs, where the true dynamics belong to a simple function class (in terms of the number of linear pieces needed to define the function), but the corresponding optimal Q-function belongs to a function class not necessarily expressible by a simple function. The paper then builds a similar case for randomly/semi-randomly generated MDPs. Finally, they propose to bootstrap the Q-function with n-step returns to boost the expressivity exponentially. ", "The paper highlights an interesting issue regarding approximability of function approximators (neural networks). The paper provides cases where the action value function is difficult to approximate and is much more difficult than the dynamics of a model. The author conducts some experiments to claim that even with a large NN, DQN still finds a suboptimal policy. Theorems regarding the appoximability of the action value function are presented. Then the paper proposes that rollout-based search should be preferred for planning and conducts some experiments to verify this. Although the paper points out interesting issues of approximating action-value function, both the motivation and the suggestion regarding MBRL are not convincing." ]
We compare model-free reinforcement learning with model-based approaches through the lens of the expressive power of neural networks for policies, Q-functions, and dynamics. We show, theoretically and empirically, that even for one-dimensional continuous state space, there are many MDPs whose optimal Q-functions and policies are much more complex than the dynamics. We hypothesize that many real-world MDPs also have a similar property. For these MDPs, model-based planning is a favorable algorithm, because the resulting policies can approximate the optimal policy significantly better than a neural network parameterization can, while model-free or model-based policy optimization relies on policy parameterization. Motivated by the theory, we apply a simple multi-step model-based bootstrapping planner (BOOTS) to bootstrap a weak Q-function into a stronger policy. Empirical results show that applying BOOTS on top of model-based or model-free policy optimization algorithms at test time improves the performance on MuJoCo benchmark tasks.
[]
[ { "authors": [ "Mohammad Gheshlaghi Azar", "Ian Osband", "Rémi Munos" ], "title": "Minimax regret bounds for reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Jacob Buckman", "Danijar Hafner", "George Tucker", "Eugene Brevdo", "Honglak Lee" ], "title": "Sampleefficient reinforcement learning with stochastic ensemble value expansion", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ignasi Clavera", "Jonas Rothfuss", "John Schulman", "Yasuhiro Fujita", "Tamim Asfour", "Pieter Abbeel" ], "title": "Model-based reinforcement learning via meta-policy optimization", "venue": "In Conference on Robot Learning,", "year": 2018 }, { "authors": [ "Sarah Dean", "Horia Mania", "Nikolai Matni", "Benjamin Recht", "Stephen Tu" ], "title": "On the sample complexity of the linear quadratic regulator", "venue": "CoRR, abs/1710.01688,", "year": 2017 }, { "authors": [ "Sarah Dean", "Horia Mania", "Nikolai Matni", "Benjamin Recht", "Stephen Tu" ], "title": "Regret bounds for robust adaptive control of the linear quadratic regulator", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Yilun Du", "Karthik Narasimhan" ], "title": "Task-agnostic dynamics priors for deep reinforcement learning", "venue": "arXiv preprint arXiv:1905.04819,", "year": 2019 }, { "authors": [ "V Feinberg", "A Wan", "I Stoica", "MI Jordan", "JE Gonzalez", "S Levine" ], "title": "Model-based value expansion for efficient model-free reinforcement learning", "venue": "In Proceedings of the 35th International Conference on Machine Learning", "year": 2018 }, { "authors": [ "Justin Fu", "Aviral Kumar", "Matthew Soh", "Sergey Levine" ], "title": "Diagnosing bottlenecks in deep qlearning algorithms", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Nicolas Heess", "Gregory Wayne", "David Silver", "Timothy Lillicrap", "Tom Erez", "Yuval Tassa" ], "title": "Learning continuous control policies by stochastic value gradients", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Michael Janner", "Justin Fu", "Marvin Zhang", "Sergey Levine" ], "title": "When to trust your model: Modelbased policy", "venue": "optimization. ArXiv,", "year": 2019 }, { "authors": [ "Chi Jin", "Zeyuan Allen-Zhu", "Sebastien Bubeck", "Michael I Jordan" ], "title": "Is q-learning provably efficient", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Lukasz Kaiser", "Mohammad Babaeizadeh", "Piotr Milos", "Blazej Osinski", "Roy H. 
Campbell", "Konrad Czechowski", "Dumitru Erhan", "Chelsea Finn", "Piotr Kozakowski", "Sergey Levine", "Ryan Sepassi", "George Tucker", "Henryk Michalewski" ], "title": "Model-based reinforcement learning for atari", "venue": null, "year": 1903 }, { "authors": [ "Sham Kakade", "John Langford" ], "title": "Approximately optimal approximate reinforcement learning", "venue": "In Proceedings of the Nineteenth International Conference on Machine Learning,", "year": 2002 }, { "authors": [ "Thanard Kurutach", "Ignasi Clavera", "Yan Duan", "Aviv Tamar", "Pieter Abbeel" ], "title": "Model-ensemble trust-region policy optimization", "venue": "arXiv preprint arXiv:1802.10592,", "year": 2018 }, { "authors": [ "Sergey Levine", "Vladlen Koltun" ], "title": "Guided policy search", "venue": "In International Conference on Machine Learning, pp", "year": 2013 }, { "authors": [ "Kendall Lowrey", "Aravind Rajeswaran", "Sham M. Kakade", "Emanuel Todorov", "Igor Mordatch" ], "title": "Plan online, learn offline: Efficient learning and exploration via model-based control", "venue": null, "year": 2018 }, { "authors": [ "Yuping Luo", "Huazhe Xu", "Yuanzhi Li", "Yuandong Tian", "Trevor Darrell", "Tengyu Ma" ], "title": "Algorithmic framework for model-based deep reinforcement learning with theoretical guarantees", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ali Malik", "Volodymyr Kuleshov", "Jiaming Song", "Danny Nemer", "Harlan Seymour", "Stefano Ermon" ], "title": "Calibrated model-based deep reinforcement learning", "venue": null, "year": 1906 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin A. Riedmiller", "Andreas Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518:529–533,", "year": 2015 }, { "authors": [ "Anusha Nagabandi", "Gregory Kahn", "Ronald S Fearing", "Sergey Levine" ], "title": "Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Junhyuk Oh", "Satinder Singh", "Honglak Lee" ], "title": "Value prediction network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Razvan Pascanu", "Guido F Montufar", "Yoshua Bengio" ], "title": "On the number of inference regions of deep feed forward networks with piece-wise linear activations", "venue": null, "year": 2013 }, { "authors": [ "Alexandre Piché", "Valentin Thomas", "Cyril Ibrahim", "Yoshua Bengio", "Chris Pal" ], "title": "Probabilistic planning with sequential monte carlo methods", "venue": null, "year": 2018 }, { "authors": [ "Sébastien Racanière", "Théophane Weber", "David Reichert", "Lars Buesing", "Arthur Guez", "Danilo Jimenez Rezende", "Adria Puigdomènech Badia", "Oriol Vinyals", "Nicolas Heess", "Yujia Li" ], "title": "Imagination-augmented agents for deep reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Aravind Rajeswaran", "Sarvjeet Ghotra", "Balaraman Ravindran", "Sergey Levine" ], "title": "Epopt: Learning robust neural network policies using model ensembles", "venue": "arXiv preprint 
arXiv:1610.01283,", "year": 2016 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "David Silver", "Aja Huang", "Chris J. Maddison", "Arthur Guez", "Laurent Sifre", "George van den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Vedavyas Panneershelvam", "Marc Lanctot", "Sander Dieleman", "Dominik Grewe", "John Nham", "Nal Kalchbrenner", "Ilya Sutskever", "Timothy P. Lillicrap", "Madeleine Leach", "Koray Kavukcuoglu", "Thore Graepel", "Demis Hassabis" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. Nature,", "year": 2016 }, { "authors": [ "David Silver", "Hado van Hasselt", "Matteo Hessel", "Tom Schaul", "Arthur Guez", "Tim Harley", "Gabriel Dulac-Arnold", "David Reichert", "Neil Rabinowitz", "Andre Barreto" ], "title": "The predictron: End-toend learning and planning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel", "Timothy Lillicrap", "Karen Simonyan", "Demis Hassabis" ], "title": "A general reinforcement learning algorithm that masters chess, shogi, and go through self-play", "venue": null, "year": 2018 }, { "authors": [ "Alexander L Strehl", "Lihong Li", "Eric Wiewiora", "John Langford", "Michael L Littman" ], "title": "Pac modelfree reinforcement learning", "venue": "In Proceedings of the 23rd international conference on Machine learning,", "year": 2006 }, { "authors": [ "Wen Sun", "Nan Jiang", "Akshay Krishnamurthy", "Alekh Agarwal", "John Langford" ], "title": "Model-based rl in contextual decision processes: Pac bounds and exponential improvements over model-free approaches", "venue": "In Conference on Learning Theory,", "year": 2019 }, { "authors": [ "Richard S. 
Sutton" ], "title": "Dyna, an integrated architecture for learning, planning, and reacting", "venue": "SIGART Bulletin,", "year": 1990 }, { "authors": [ "Erik Talvitie" ], "title": "Model regularization for stable sample rollouts", "venue": "In UAI, pp", "year": 2014 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Stephen Tu", "Benjamin Recht" ], "title": "The gap between model-based and model-free methods on the linear quadratic regulator: An asymptotic viewpoint", "venue": "arXiv preprint arXiv:1812.03565,", "year": 2018 }, { "authors": [ "Tingwu Wang", "Jimmy Ba" ], "title": "Exploring model-based planning with policy networks", "venue": "arXiv preprint arXiv:1906.08649,", "year": 2019 }, { "authors": [ "Andrea Zanette", "Emma Brunskill" ], "title": "Tighter problem-dependent regret bounds in reinforcement learning without domain knowledge using value function bounds", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Hongyi Zhang", "Yann N Dauphin", "Tengyu Ma" ], "title": "Fixup initialization: Residual learning without normalization", "venue": "arXiv preprint arXiv:1901.09321,", "year": 2019 }, { "authors": [ "Ba", "Kurutach" ], "title": "2018) is that we don’t use model ensemble. Instead, we occasionally optimize the dynamics by one step of Adam to introduce stochasticity in the dynamics, following the technique in SLBO (Luo et al., 2019). As argued in (Luo et al., 2019), the stochasticity in the dynamics can play a similar role as the model ensemble", "venue": "MBPO on Humanoid,", "year": 2019 }, { "authors": [ "Luo" ], "title": "non-squared `2 loss is used instead of the standard MSE loss", "venue": "Cheetah and Walker. To diagnose the issue,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Model-based deep reinforcement learning (RL) algorithms offer a lot of potentials in achieving significantly better sample efficiency than the model-free algorithms for continuous control tasks. We can largely categorize the model-based deep RL algorithms into two types: 1. model-based policy optimization algorithms which learn policies or Q-functions, parameterized by neural networks, on the estimated dynamics, using off-the-shelf model-free algorithms or their variants (Luo et al., 2019; Janner et al., 2019; Kaiser et al., 2019; Kurutach et al., 2018; Feinberg et al., 2018; Buckman et al., 2018), and 2. model-based planning algorithms, which plan with the estimated dynamics Nagabandi et al. (2018); Chua et al. (2018); Wang & Ba (2019).\nA deeper theoretical understanding of the pros and cons of model-based and the model-free algorithms in the continuous state space case will provide guiding principles for designing and applying new sample-efficient methods. The prior work on the comparisons of model-based and model-free algorithms mostly focuses on their sample efficiency gap, in the case of tabular MDPs (Zanette & Brunskill, 2019; Jin et al., 2018), linear quadratic regulator (Tu & Recht, 2018), and contextual decision process with sparse reward (Sun et al., 2019).\nIn this paper, we theoretically compare model-based RL and model-free RL in the continuous state space through the lens of approximability by neural networks, and then use the insight to design practical algorithms. What is the representation power of neural networks for expressing the Qfunction, the policy, and the dynamics? How do the model-based and model-free algorithms utilize the expressivity of neural networks?\nOur main finding is that even for the case of one-dimensional continuous state space, there can be a massive gap between the approximability of Q-function and the policy and that of the dynamics:\nThe optimal Q-function and policy can be significantly more complex than the dynamics.\nWe construct environments where the dynamics are simply piecewise linear functions with constant pieces, but the optimal Q-functions and the optimal policy require an exponential (in the horizon)\n* indicates equal contribution\nFigure 1: Left: The dynamics of two randomly generated MDPs (from the RAND, and SEMI-RAND methods outlined in Section 4.3 and detailed in Appendix D.1). Right: The corresponding Q-functions which are more complex than the dynamics (more details in Section 4.3).\nnumber of linear pieces, or exponentially wide neural networks, to approximate.1 The approximability gap can also be observed empirically on (semi-) randomly generated piecewise linear dynamics with a decent chance. (See Figure 1 for two examples.)\nWhen the approximability gap occurs, any deep RL algorithms with policies parameterized by neural networks will suffer from a sub-optimal performance. These algorithms include both model-free algorithms such as DQN (Mnih et al., 2015) and SAC (Haarnoja et al., 2018), and model-based policy optimization algorithms such as SLBO (Luo et al., 2019) and MBPO (Janner et al., 2019). To validate the intuition, we empirically apply these algorithms to the constructed or the randomly generated MDPs. 
Indeed, they fail to converge to the optimal rewards even with sufficient samples, which suggests that they suffer from the lack of expressivity.\nHowever, in such cases, model-based planning algorithms should not suffer from the lack of expressivity, because they only use the learned, parameterized dynamics, which are easy to express. The policy obtained from planning is the maximizer of the total future reward on the learned dynamics, and can have an exponential (in the horizon) number of pieces even if the dynamics has only a constant number of pieces. In fact, even a partial planner can help improve the expressivity of the policy. If we plan for k steps and then resort to some Q-function for estimating the total reward of the remaining steps, we can obtain a policy with 2^k times more pieces than the Q-function has.\nWe hypothesize that real-world continuous control tasks also have optimal Q-functions and policies that are more complex than the dynamics. The theoretical analysis of the synthetic dynamics suggests that a model-based few-step planner on top of a parameterized Q-function will outperform the original Q-function because of the additional expressivity introduced by the planning. We empirically verify this intuition on MuJoCo benchmark tasks. We show that applying a model-based planner on top of Q-functions learned from model-based or model-free policy optimization algorithms at test time leads to significant gains over the original Q-function or policy.\nIn summary, our contributions are:\n1. We construct continuous-state-space MDPs whose Q-functions and policies are proved to be more complex than the dynamics (Sections 4.1 and 4.2).\n2. We empirically show that, with a decent chance, (semi-)randomly generated piecewise linear MDPs also have complex Q-functions (Section 4.3).\n3. We show theoretically and empirically that model-free RL and model-based policy optimization algorithms suffer from the lack of expressivity on the constructed MDPs (Section 4.3), whereas model-based planning solves the problem efficiently (Section 5.2).\n4. Inspired by the theory, we propose a simple model-based bootstrapping planner (BOOTS), which can be applied on top of any model-free or model-based Q-learning algorithm at test time. Empirical results show that BOOTS improves the performance on MuJoCo benchmark tasks, and outperforms the previous state-of-the-art on the MuJoCo humanoid environment.\n1In turn, the dynamics can also be much more complex than the Q-function. Consider the following situation: a subset of the coordinates of the state space can be arbitrarily difficult to express by neural networks, but the reward function can only depend on the rest of the coordinates and remain simple." }, { "heading": "2 ADDITIONAL RELATED WORK", "text": "Comparisons with Prior Theoretical Work. Model-based RL has been extensively studied in the tabular case (see (Zanette & Brunskill, 2019; Azar et al., 2017) and the references therein), but much less so in the context of deep neural network approximators and continuous state spaces. Luo et al. (2019) give sample complexity and convergence guarantees using the principle of optimism in the face of uncertainty for non-linear dynamics.\nBelow we review several prior results regarding the model-based versus model-free dichotomy in various settings. We note that our work focuses on the angle of expressivity, whereas the work below focuses on sample efficiency.\nTabular MDPs.
The extensive study of the tabular MDP setting leaves little gap in the sample complexity of model-based and model-free algorithms, whereas the space complexity seems to be the main difference (Strehl et al., 2006). The best sample complexity bounds for model-based tabular RL (Azar et al., 2017; Zanette & Brunskill, 2019) and model-free tabular RL (Jin et al., 2018) only differ by a poly(H) multiplicative factor (where H is the horizon).\nLinear Quadratic Regulator. Dean et al. (2018) and Dean et al. (2017) provided sample complexity bounds for model-based LQR. Recently, Tu & Recht (2018) analyzed the sample efficiency of model-based and model-free methods in the setting of the Linear Quadratic Regulator, and proved an O(d) gap in sample complexity, where d is the dimension of the state space. Unlike in the tabular MDP case, the space complexity of model-based and model-free algorithms differs little. The sample-efficiency gap mostly comes from the fact that dynamics learning has d-dimensional supervision, whereas Q-learning has only one-dimensional supervision.\nContextual Decision Process (with function approximator). Sun et al. (2019) prove an exponential information-theoretic gap between model-based and model-free algorithms in the factored MDP setting. Their definition of model-free algorithms requires an exact parameterization: the value-function hypothesis class should be exactly the family of optimal value functions induced by the MDP family. This limits the application to deep reinforcement learning, where over-parameterized neural networks are frequently used. Moreover, a crucial reason for the failure of the model-free algorithms is that the reward is designed to be sparse.\nRelated Empirical Work. A large family of model-based RL algorithms uses existing model-free algorithms or their variants on the learned dynamics. MBPO (Janner et al., 2019), STEVE (Buckman et al., 2018), and MVE (Feinberg et al., 2018) are model-based Q-learning-based policy optimization algorithms, which can be viewed as modern extensions and improvements of the early model-based Q-learning framework, Dyna (Sutton, 1990). SLBO (Luo et al., 2019) is a model-based policy optimization algorithm using TRPO as the algorithm in the learned environment.\nAnother way to exploit the dynamics is to use it to perform model-based planning. Racanière et al. (2017) and Du & Narasimhan (2019) use the model to generate extra data to do planning implicitly. Chua et al. (2018) study how to combine an ensemble of probabilistic models and planning, which is followed by Wang & Ba (2019), which introduces a policy network to distill knowledge from a planner and provides a prior for the planner. Piché et al. (2018) uses methods from Sequential Monte Carlo in the context of control as inference. Oh et al. (2017) trains a Q-function and then performs lookahead planning. Nagabandi et al. (2018) uses random shooting as the planning algorithm. Lowrey et al. (2018) uses the dynamics to improve the performance of model-free algorithms.\nHeess et al. (2015) backprops through a stochastic computation graph with a stochastic gradient to optimize the policy under the learned dynamics. Levine & Koltun (2013) distills a policy from trajectory optimization. Rajeswaran et al. (2016) trains a policy adversarially robust to the worst dynamics in the ensemble. Clavera et al. (2018) reformulates the problem as a meta-learning problem and applies meta-learning algorithms.
Predictron (Silver et al., 2017) learns a dynamics model and a value function and then uses them to predict future reward sequences.\nAnother line of work focuses on how to improve the learned dynamics model. Many of them use an ensemble of models (Kurutach et al., 2018; Rajeswaran et al., 2016; Clavera et al., 2018), which is further extended to an ensemble of probabilistic models (Chua et al., 2018; Wang & Ba, 2019). Luo et al. (2019) designs a discrepancy bound for learning the dynamics model. Talvitie (2014) augments the data for model training in such a way that the model can output a real observation from its own prediction. Malik et al. (2019) calibrates the model’s uncertainty so that the model’s output distribution matches the frequency of predicted states. Oh et al. (2017) learns a representation of states by predicting rewards and future returns from the representation." }, { "heading": "3 PRELIMINARIES", "text": "Markov Decision Process. A Markov Decision Process (MDP) is a tuple 〈S, A, f, r, γ〉, where S is the state space, A the action space, f : S × A → ∆(S) the transition dynamics that maps a state-action pair to a probability distribution over the next state, γ the discount factor, and r ∈ R^{S×A} the reward function. Throughout this paper, we consider deterministic dynamics, which, with slight abuse of notation, we denote by f : S × A → S. A deterministic policy π : S → A maps a state to an action. The value function for the policy is defined as V^π(s) := Σ_{h=1}^{∞} γ^{h−1} r(s_h, a_h), where a_h = π(s_h), s_1 = s, and s_{h+1} = f(s_h, a_h).\nAn RL agent aims to find a policy π that maximizes the expected total reward, defined as η(π) := E_{s_1∼μ}[V^π(s_1)], where μ is the distribution of the initial state.\nBellman Equation. Let π⋆ be the optimal policy and V⋆ the optimal value function (that is, the value function for policy π⋆). The value function V^π for policy π and the optimal value function V⋆ satisfy the Bellman equation and the Bellman optimality equation, respectively. Let Q^π and Q⋆ denote the state-action value function for policy π and the optimal state-action value function. Then, for a deterministic dynamics f, we have\nV^π(s) = Q^π(s, π(s)), Q^π(s, a) = r(s, a) + γ V^π(f(s, a)), and V⋆(s) = max_{a∈A} Q⋆(s, a), Q⋆(s, a) = r(s, a) + γ V⋆(f(s, a)). (1)\nDenote the Bellman operator for dynamics f by B_f: (B_f[Q])(s, a) = r(s, a) + γ max_{a′} Q(f(s, a), a′).\nNeural Networks. We focus on fully-connected neural nets with the ReLU function as activation. A ReLU neural net with one-dimensional input and one-dimensional output represents a piecewise linear function. A two-layer ReLU neural net with d hidden neurons represents a piecewise linear function with at most (d + 1) pieces. Similarly, an H-layer neural net with d hidden neurons in each layer represents a piecewise linear function with at most (d + 1)^H pieces (Pascanu et al., 2013).\nProblem Setting and Notations. In this paper, we focus on continuous-state-space, discrete-action-space MDPs with S ⊂ R. We assume the dynamics is deterministic (that is, s_{t+1} = f(s_t, a_t)) and the reward is known to the agent. Let ⌊x⌋ denote the floor function of x, that is, the greatest integer less than or equal to x. We use I[·] to denote the indicator function.
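The following is a minimal sketch of the Bellman optimality operator B_f for a known deterministic dynamics, matching equation (1). The toy dynamics, reward, and discount factor below are placeholder assumptions for illustration only.

```python
# Sketch: the Bellman optimality operator for deterministic dynamics f:
# (B_f Q)(s, a) = r(s, a) + gamma * max_a' Q(f(s, a), a').
def bellman_backup(Q, f, r, gamma, actions):
    def BQ(s, a):
        s_next = f(s, a)
        return r(s, a) + gamma * max(Q(s_next, ap) for ap in actions)
    return BQ

# Repeatedly applying the backup to Q = 0 converges to Q* when gamma < 1.
f = lambda s, a: (2.0 * s + 0.1 * a) % 1.0  # a toy piecewise-linear dynamics
r = lambda s, a: s
Q = lambda s, a: 0.0
for _ in range(10):
    Q = bellman_backup(Q, f, r, gamma=0.9, actions=(0, 1))
print(Q(0.3, 1))
```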
4 APPROXIMABILITY OF Q-FUNCTIONS AND DYNAMICS\nWe show that there exist MDPs in one-dimensional continuous state space that have simple dynamics but complex Q-functions and policies. Moreover, any polynomial-size neural network function approximator of the Q-function or policy will result in a sub-optimal expected total reward, and learning Q-functions parameterized by neural networks fundamentally requires an exponential number of samples (Section 4.2). Section 4.3 illustrates that the phenomenon of the Q-function being more complex than the dynamics occurs frequently and naturally even for random MDPs, beyond the theoretical construction." }, { "heading": "4.1 A PROVABLE CONSTRUCTION OF MDPS WITH COMPLEX Q", "text": "Recall that we consider the infinite-horizon case and 0 < γ < 1 is the discount factor. Let H = (1 − γ)^{−1} be the “effective horizon” — the rewards after H steps become negligible due to the discount factor. For simplicity, we assume that H > 3 and that it is an integer. (Otherwise we just take H = ⌊(1 − γ)^{−1}⌋.) Throughout this section, we assume that the state space is S = [0, 1) and the action space is A = {0, 1}.\nDefinition 4.1. Given the effective horizon H = (1 − γ)^{−1}, we define an MDP M_H as follows. Let κ = 2^{−H}. The dynamics f is given by the following piecewise linear functions with at most three pieces:\nf(s, 0) = 2s if s < 1/2, and 2s − 1 if s ≥ 1/2;\nf(s, 1) = 2s + κ if s < (1 − κ)/2; 2s + κ − 1 if (1 − κ)/2 ≤ s ≤ (2 − κ)/2; and 2s + κ − 2 otherwise.\nThe reward function is defined as\nr(s, 0) = I[1/2 ≤ s < 1], r(s, 1) = I[1/2 ≤ s < 1] − 2(γ^{H−1} − γ^H).\nThe initial state distribution μ is the uniform distribution over the state space [0, 1).\nThe dynamics and the reward function for H = 4 are visualized in Figures 2a and 2b. Note that, by the definition, the transition function for a fixed action a is a piecewise linear function with at most 3 pieces. Our construction can be modified so that the dynamics is Lipschitz and the same conclusion holds (see Appendix C).\nAttentive readers may also realize that the dynamics can be written succinctly as f(s, 0) = 2s mod 1 and f(s, 1) = 2s + κ mod 1,2 which are key properties that we use in the proof of Theorem 4.2 below.\nOptimal Q-function Q⋆ and the optimal policy π⋆. Even though the dynamics of the MDP constructed in Definition 4.1 has only a constant number of pieces, the Q-function and policy are very complex: (1) the policy is a piecewise linear function with an exponential number of pieces, and (2) the optimal Q-function Q⋆ and the optimal value function V⋆ are actually fractals that are not continuous anywhere. These are formalized in the theorem below.\nTheorem 4.2. For s ∈ [0, 1), let s^{(k)} denote the k-th bit of s in its binary representation.3 The optimal policy π⋆ for the MDP defined in Definition 4.1 has 2^{H+1} pieces. In particular,\nπ⋆(s) = I[s^{(H+1)} = 0]. (2)\n2The mod function is defined as x mod 1 := x − ⌊x⌋. More generally, for a positive real k, we define x mod k := x − k⌊x/k⌋.\n3More precisely, we define s^{(h)} := ⌊2^h s⌋ mod 2.\nAnd the optimal value function is a fractal with the expression:\nV⋆(s) = Σ_{h=1}^{H} γ^{h−1} s^{(h)} + Σ_{h=H+1}^{∞} γ^{h−1} (1 + 2(s^{(h+1)} − s^{(h)})) + γ^{H−1} (2 s^{(H+1)} − 2). (3)\nThe closed-form expression of Q⋆ can be computed by Q⋆(s, a) = r(s, a) + γ V⋆(f(s, a)), which is also a fractal.\nWe approximate the optimal Q-function by truncating the infinite sum to 2H terms, and visualize it in Figure 2c. We discuss the main intuitions behind the construction in the following proof sketch of the theorem. A rigorous proof of Theorem 4.2 is deferred to Appendix B.1.\nProof Sketch. The key observation is that the dynamics f essentially shifts the binary representation of the state, with some addition. We can verify that the dynamics satisfies f(s, 0) = 2s mod 1 and f(s, 1) = 2s + κ mod 1, where κ = 2^{−H}. In other words, suppose s = 0.s^{(1)}s^{(2)}··· is the binary representation of s, and let left-shift(s) = 0.s^{(2)}s^{(3)}···. Then\nf(s, 0) = left-shift(s), (4)\nf(s, 1) = (left-shift(s) + 2^{−H}) mod 1. (5)\nMoreover, the reward function is approximately equal to the first bit of the binary representation:\nr(s, 0) = s^{(1)}, r(s, 1) ≈ s^{(1)}. (6)\n(Here the small negative drift of the reward for action a = 1, namely −2(γ^{H−1} − γ^H), is mostly designed for the convenience of the proof, and casual readers can ignore it for simplicity.) Ignoring carries, the policy pretty much can only affect the H-th bit of the next state s′ = f(s, a): the H-th bit of s′ either equals the (H + 1)-th bit of s when the action is 0, or equals its flip when the action is 1. Because the bits will eventually be shifted left and the reward is higher if the first bit of a future state is 1, towards getting a higher future reward, the policy should aim to create more 1’s. Therefore, the optimal policy should choose action 0 if the (H + 1)-th bit of s is already 1, and otherwise choose to flip the (H + 1)-th bit by taking action 1.\nA more delicate calculation that addresses the carries properly leads us to the form of the optimal policy (Equation (2)). Computing the total reward obtained by executing the optimal policy leads us to the form of the optimal value function (Equation (3)). (This step does require some elementary but sophisticated algebraic manipulation.)\nWith the form of V⋆, a shortcut to a formal, rigorous proof is to verify that it satisfies the Bellman equation, and to verify that π⋆ is consistent with it. We follow this route in the formal proof of Theorem 4.2 in Appendix B.1.
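To make the binary-shift view concrete, the following is a direct implementation of the dynamics of Definition 4.1 (in its succinct mod-1 form) and the optimal policy of Theorem 4.2. The choice of H and the starting state are arbitrary illustrations.

```python
# Sketch: the MDP of Definition 4.1 and pi*(s) = I[s^{(H+1)} = 0].
H = 4
kappa = 2.0 ** (-H)

def f(s, a):
    # f(s, 0) = 2s mod 1; f(s, 1) = 2s + kappa mod 1.
    return (2.0 * s + a * kappa) % 1.0

def bit(s, k):
    # The k-th bit of s's binary expansion: floor(2^k * s) mod 2.
    return int((2 ** k) * s) % 2

def pi_star(s):
    # Take action 1 to flip the (H+1)-th bit to 1 whenever it is 0.
    return 1 if bit(s, H + 1) == 0 else 0

s = 0.3141
for _ in range(8):
    s = f(s, pi_star(s))
print(s)
```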
We can verify that the dynamics satisfies f(s, 0) = 2s mod 1 and f(s, 1) = 2s + κ mod 1, where κ = 2^{−H}. In other words, suppose s = 0.s^{(1)}s^{(2)}··· is the binary representation of s, and let left-shift(s) = 0.s^{(2)}s^{(3)}···. Then

f(s, 0) = left-shift(s),   (4)
f(s, 1) = (left-shift(s) + 2^{−H}) mod 1.   (5)

Moreover, the reward function is approximately equal to the first bit of the binary representation:

r(s, 0) = s^{(1)}, r(s, 1) ≈ s^{(1)}.   (6)

(Here the small negative drift of the reward for action a = 1, namely −2(γ^{H−1} − γ^H), is mostly designed for the convenience of the proof, and casual readers can ignore it for simplicity.) Ignoring carries, the policy can essentially only affect the H-th bit of the next state s′ = f(s, a): the H-th bit of s′ is either equal to the (H + 1)-th bit of s when the action is 0, or equal to its flip when the action is 1. Because the bits will eventually be shifted left and the reward is higher if the first bit of a future state is 1, towards getting higher future reward, the policy should aim to create more 1's. Therefore, the optimal policy should choose action 0 if the (H + 1)-th bit of s is already 1, and otherwise choose to flip the (H + 1)-th bit by taking action 1.

A more delicate calculation that addresses the carries properly leads to the form of the optimal policy (Equation (2)). Computing the total reward obtained by executing the optimal policy leads to the form of the optimal value function (Equation (3)). (This step does require some elementary but sophisticated algebraic manipulation.)

With the form of V* in hand, a shortcut to a formal, rigorous proof is to verify that it satisfies the Bellman equation, and to verify that π* is consistent with it. We follow this route in the formal proof of Theorem 4.2 in Appendix B.1.

4.2 THE APPROXIMABILITY OF Q-FUNCTION

A priori, the complexity of Q* or π* does not rule out the possibility that there exists an approximation of them that does an equally good job in terms of maximizing the rewards. However, in this section we show that, indeed, there is no neural network approximation of Q* or π* with polynomial width. We prove this by showing that any piecewise linear function with a sub-exponential number of pieces cannot approximate either Q* or π* with a near-optimal total reward.

Theorem 4.3. Let M_H be the MDP constructed in Definition 4.1. Suppose a piecewise linear policy π has near-optimal reward in the sense that η(π) ≥ 0.92 · η(π*). Then it has to have at least Ω(exp(cH)/H) pieces for some universal constant c > 0. As a corollary, no constant-depth neural network with polynomial width (in H) can approximate the optimal policy with near-optimal rewards.

Consider a policy π induced by a value function Q, that is, π(s) = arg max_{a∈A} Q(s, a). When there are two actions, the number of pieces of the policy is bounded by twice the number of pieces of Q. This observation and the theorem above imply the following inapproximability result for Q*.

Corollary 4.4. In the setting of Theorem 4.3, let π be the policy induced by some Q. If π is near-optimal in the sense that η(π) ≥ 0.92 · η(π*), then Q has at least Ω(exp(cH)/H) pieces for some universal constant c > 0.

The intuition behind the proof of Theorem 4.3 is as follows. Recall that the optimal policy has the form π*(s) = I[s^{(H+1)} = 0]. One can expect that any polynomial-pieces policy π behaves suboptimally in most of the states, which leads to the suboptimality of π.
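To make this intuition concrete, the following sketch (again our own illustration, using Monte-Carlo rollouts rather than a formal argument; the exact printed values are only indicative) compares the optimal policy π*(s) = I[s^{(H+1)} = 0] against a coarse two-piece policy, exhibiting the reward gap that the theorem quantifies:

import math, random

H = 10
GAMMA = 1.0 - 1.0 / H
KAPPA = 2.0 ** (-H)

def f(s, a):
    x = 2.0 * s + (KAPPA if a == 1 else 0.0)
    return x - math.floor(x)

def r(s, a):
    base = 1.0 if 0.5 <= s < 1.0 else 0.0
    return base - (2 * (GAMMA ** (H - 1) - GAMMA ** H) if a == 1 else 0.0)

def bit(s, h):
    return math.floor((2 ** h) * s) % 2

def avg_return(policy, steps=100, episodes=2000):
    total = 0.0
    for _ in range(episodes):
        s, disc = random.random(), 1.0
        for _ in range(steps):
            a = policy(s)
            total += disc * r(s, a)
            s, disc = f(s, a), disc * GAMMA
    return total / episodes

optimal = lambda s: 1 if bit(s, H + 1) == 0 else 0
coarse = lambda s: 1 if s < 0.5 else 0  # a two-piece policy
print(avg_return(optimal), avg_return(coarse))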
The detailed proof of Theorem 4.3 is deferred to Appendix B.2.

Beyond the expressivity lower bound, we also provide an exponential sample complexity lower bound for Q-learning algorithms parameterized with neural networks (see Appendix B.4).

4.3 THE APPROXIMABILITY OF Q-FUNCTIONS OF RANDOMLY GENERATED MDPS

In this section, we show that the phenomenon of the Q-function being more complex than the dynamics not only occurs in the crafted cases of the previous subsection, but also arises more robustly, with a decent chance, for (semi-)randomly generated MDPs. (Mathematically, this says that the family of MDPs with such a property is not a degenerate measure-zero set.)

It is challenging, and perhaps requires deep mathematics, to characterize the fractal structure of Q-functions for random dynamics, which is beyond the scope of this paper. Instead, we take an empirical approach here. We generate random piecewise linear and Lipschitz dynamics, compute their Q-functions for a finite horizon, and then visualize the Q-functions or count the number of pieces in them. We also use the DQN algorithm (Mnih et al., 2015) with a finite-size neural network to learn the Q-function.

We set the horizon to H = 10 for simplicity and computational feasibility. The state and action spaces are [0, 1) and {0, 1}, respectively. We design two methods to generate random or semi-random piecewise linear dynamics with at most four pieces. First, we have a uniformly random method, called RAND, where we independently generate two piecewise linear functions for f(s, 0) and f(s, 1) by generating random positions for the kinks, generating random outputs at the kinks, and connecting the kinks by linear segments (see Appendix D.1 for a detailed description).

In the second method, called SEMI-RAND, we introduce a bit more structure into the generation process, to increase the chance of seeing the phenomenon. The functions f(s, 0) and f(s, 1) have 3 pieces with shared kinks. We also design the generating process of the outputs at the kinks so that the functions have more fluctuations. The reward for both methods is r(s, a) = s, ∀a ∈ A (see Appendix D.1 for a detailed description). Figure 1 illustrates the dynamics of the MDPs generated by SEMI-RAND. More details of the empirical settings can be found in Appendix D.1.

The optimal policy and Q can have a large number of pieces. Because the state space is one-dimensional and the horizon is 10, we can compute the exact Q-functions by recursively applying Bellman operators, and count the number of pieces. We found that an 8.6% fraction of the 1000 MDPs independently generated by the RAND method have optimal policies with more than 100 pieces, much larger than the number of pieces in the dynamics (which is 4). Using the SEMI-RAND method, a 68.7% fraction of the MDPs have optimal policies with more than 10^3 pieces. In Appendix D.2, we plot the histogram of the number of pieces of the optimal policies. Figure 1 visualizes the Q-functions and dynamics of two MDPs generated by the RAND and SEMI-RAND methods. These results suggest that Q-functions being more complex than the dynamics is not a degenerate phenomenon and occurs with non-zero measure. For more empirical results, see Appendix D.2. A minimal sketch of the generate-and-count procedure is given below.
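The following sketch (our own simplified version, not the paper's released code) illustrates this procedure on a discretized grid: it generates a RAND-style dynamics, applies finite-horizon Bellman backups, and counts how often the greedy action switches, which approximates the number of pieces of the optimal policy. The exact computation in the paper tracks the kinks symbolically instead:

import numpy as np

H, N = 10, 2 ** 16                       # horizon and grid resolution
s = np.arange(N) / N                     # grid over the state space [0, 1)

def random_piecewise(rng, n_kinks=2):
    # RAND-style generator: random kink positions/values, linear in between.
    xs = np.sort(np.concatenate(([0.0, 1.0], rng.random(n_kinks))))
    ys = rng.random(xs.size)
    return np.interp(s, xs, ys)

rng = np.random.default_rng(0)
f = [random_piecewise(rng), random_piecewise(rng)]  # f(s, 0) and f(s, 1)
reward = s                                          # r(s, a) = s

V = np.zeros(N)
for _ in range(H):                       # finite-horizon Bellman backups
    Q = np.stack([reward + V[(f[a] * N).astype(int).clip(0, N - 1)]
                  for a in (0, 1)])
    V, policy = Q.max(axis=0), Q.argmax(axis=0)

print("approximate number of policy pieces:",
      1 + int(np.count_nonzero(np.diff(policy))))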
Model-based policy optimization methods also suffer from a lack of expressivity. As an implication of our theory in the previous section, when the Q-function or the policy is too complex to be approximated by a reasonably sized neural network, both model-free algorithms and model-based policy optimization algorithms suffer from the lack of expressivity and, as a consequence, achieve sub-optimal rewards. We verify this claim on the randomly generated MDPs discussed in Section 4.3, by running DQN (Mnih et al., 2015), SLBO (Luo et al., 2019), and MBPO (Janner et al., 2019) with various architecture sizes.

For ease of exposition, we use the MDP visualized in the bottom half of Figure 1. The optimal policy for this specific MDP has 765 pieces, the optimal Q-function has about 4 × 10^4 pieces, and we can compute the optimal total reward.

First, we apply DQN to this environment, using a two-layer neural network with various widths to parameterize the Q-function. The training curve is shown in Figure 3 (Left). Model-free algorithms cannot find a near-optimal policy even with 2^{14} hidden neurons and 1M trajectories, which suggests that there is a fundamental approximation issue. This result is consistent with Fu et al. (2019), in the sense that enlarging the Q-network improves the performance of the DQN algorithm at convergence.

Second, we apply SLBO and MBPO in the same environment. Because the policy network and the Q-function in SLBO and MBPO cannot approximate the optimal policy and value function, they fail to achieve near-optimal rewards, as shown in Figure 3 (Left).

Algorithm 1 Model-based Bootstrapping Planner (BOOTS) + RL Algorithm X
1: training: run Algorithm X, store all samples in the set R, store the learned Q-function Q, and the learned dynamics f̂ if it is available in Algorithm X.
2: testing:
3: if f̂ is not available, learn f̂ from the data in R
4: execute the policy BOOTS(s) at every state s

1: function BOOTS(s)
2: Given: query oracles for the functions Q and f̂
3: Compute
π^boots_{k,Q,f̂}(s) = arg max_a max_{a_1,...,a_k} r(s, a) + ··· + γ^{k−1} r(s_{k−1}, a_{k−1}) + γ^k Q(s_k, a_k)
using a zero-th order optimization algorithm (which only requires oracle queries of the function value), such as the cross-entropy method or random shooting." }, { "heading": "5 MODEL-BASED BOOTSTRAPPING PLANNER", "text": "Our theory and experiments in Sections 4.2 and 4.3 demonstrate that when the Q-function or the policy is complex, model-free and model-based policy optimization algorithms suffer from a lack of expressivity. Intuition suggests that model-based planning algorithms should not suffer from this lack of expressivity, because the final policy is not represented by a neural network. For the construction in Section 4.1, we can in fact prove that even a few-step planner bootstraps the expressivity of the Q-function (formalized in Theorem 5.1 below).

Inspired by this theoretical result, we apply a simple k-step model-based bootstrapping planner on top of existing Q-functions (trained by either a model-based or a model-free approach) at test time, on both the one-dimensional MDPs considered in Section 4 and the continuous control benchmark tasks in MuJoCo. The bootstrapping planner is reminiscent of the MCTS used in AlphaGo (Silver et al., 2016; 2018).
However, here we use the learned dynamics and deal with a continuous state space.

5.1 BOOTSTRAPPING THE Q-FUNCTION

Given a function Q that is potentially not expressive enough to approximate the optimal Q-function, we can apply the Bellman operator with a learned dynamics f̂ for k times to get a bootstrapped version of Q:

B^k_f̂[Q] = B_f̂[··· [B_f̂[Q]]]  (k times),   (7)

or equivalently,

B^k_f̂[Q](s, a) = max_{a_1,···,a_k} r(s_0, a_0) + ··· + γ^{k−1} r(s_{k−1}, a_{k−1}) + γ^k Q(s_k, a_k),   (8)

where s_0 = s, a_0 = a, and s_{h+1} = f̂(s_h, a_h).

Given the bootstrapped version, we can derive a greedy policy with respect to it:

π^boots_{k,Q,f̂}(s) = arg max_a B^k_f̂[Q](s, a).   (9)

Algorithm 1, called BOOTS, summarizes how to apply the planner straightforwardly on top of any RL algorithm with a Q-function.

For the MDPs constructed in Section 4.1, we can prove that representing the optimal Q-function by B^k_f̂[Q] requires fewer pieces in Q than representing the optimal Q-function by Q directly.

Theorem 5.1. Consider the MDP M_H defined in Definition 4.1. There exist a constant-piece piecewise linear dynamics f̂ and a 2^{H−k+1}-piece piecewise linear function Q such that the bootstrapped policy π^boots_{k,Q,f̂}(s) achieves the optimal total reward.

By contrast, recall that in Theorem 4.3 we show that approximating the optimal Q-function directly with a piecewise linear function requires ≈ 2^H pieces. Thus we gain a multiplicative factor of 2^k in expressivity by using the bootstrapped policy. Here the exponential gain is only significant when k is close to H, because the gap in approximability is huge. However, in more realistic settings — the randomly generated MDPs and the MuJoCo environments — the bootstrapping planner improves performance significantly, as shown in the next subsection." }, { "heading": "5.2 EXPERIMENTS", "text": "BOOTS on random piecewise linear MDPs. We implement BOOTS (Algorithm 1) with various numbers of planning steps and with the learned dynamics.4 The planner is an exponential-time planner which enumerates all possible future sequences of actions. We also implement bootstrapping with a partial planner with varying planning horizon. As shown in Figure 3, BOOTS + DQN not only has the best sample-efficiency, but also achieves the optimal reward. In the meantime, even a partial planner helps to improve both sample-efficiency and performance. More details of this experiment are deferred to Appendix D.3.

4Our code is available at https://github.com/roosephu/boots.

[Figure: average return versus number of environment steps on Humanoid, comparing BOOTS-MBSAC, STEVE, SAC, MBPO, and BOOTS-MBPO.]

BOOTS on MuJoCo environments. We work with the OpenAI Gym environments (Brockman et al., 2016) based on the MuJoCo simulator (Todorov et al., 2012), with maximum horizon 1000 and discount factor 1. We apply BOOTS on top of three algorithms: (a) SAC (Haarnoja et al., 2018), a state-of-the-art model-free RL algorithm; (b) MBPO (Janner et al., 2019), a model-based Q-learning algorithm and an extension of Dyna (Sutton, 1990); and (c) a computationally efficient variant of MBPO that we develop using ideas from SLBO (Luo et al., 2019), which we call MBSAC. The main difference from MBPO and other works such as Wang & Ba (2019) and Kurutach et al. (2018) is that we do not use a model ensemble. Instead, we occasionally optimize the dynamics by one step of Adam to introduce stochasticity into the dynamics, following the technique in SLBO (Luo et al., 2019). A minimal sketch of the BOOTS planner itself is given below.
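Concretely, a random-shooting instantiation of the BOOTS planner of Algorithm 1 can be sketched as follows (our own simplified illustration under stated assumptions: q_fn, dynamics, and reward are user-supplied batched callables for a continuous action space; the official implementation in the repository above differs in details):

import numpy as np

def boots_action(s, q_fn, dynamics, reward, k=4, gamma=0.99,
                 n_candidates=512, action_dim=6):
    # Random shooting: sample candidate action sequences of length k + 1,
    # roll them out with the learned dynamics, and score them with the
    # k-step bootstrapped objective of Eq. (8).
    actions = np.random.uniform(-1.0, 1.0,
                                size=(n_candidates, k + 1, action_dim))
    states = np.repeat(s[None, :], n_candidates, axis=0)
    returns = np.zeros(n_candidates)
    for h in range(k):
        returns += (gamma ** h) * reward(states, actions[:, h])
        states = dynamics(states, actions[:, h])
    returns += (gamma ** k) * q_fn(states, actions[:, k])
    return actions[int(np.argmax(returns)), 0]  # execute the first action

In the MuJoCo experiments below, k = 4 and the discount factor is 1; the default values in this sketch are placeholders.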
The resulting MBSAC algorithm is a few times faster than MBPO in wall-clock time. It performs similarly to MBPO on Humanoid, but generally worse than MBPO on the other environments. See Appendix A.1 for details.

We use k = 4 steps of planning unless explicitly mentioned otherwise in the ablation study (Section A.2). In Figure 4, we compare BOOTS+SAC with SAC, and BOOTS+MBSAC with MBSAC, on the Gym Ant and Humanoid environments, and demonstrate that BOOTS can be used on top of existing strong baselines. We found that BOOTS helps little on other, simpler environments; we suspect that those environments have much less complex Q-functions, so our theory and intuitions do not necessarily apply. (See Section A.2 for more ablation studies.)

In Figure 5, we compare BOOTS+MBSAC and BOOTS+MBPO with MBPO, SAC, and STEVE (Buckman et al., 2018)5 on the Humanoid environment. We see strong performance surpassing the previous state-of-the-art MBPO.

5For STEVE, we use the official code at https://github.com/tensorflow/models/tree/master/research/steve" }, { "heading": "6 CONCLUSION", "text": "Our study suggests that there is a significant gap in the representation power of neural networks between expressing the Q-function or the policy and expressing the dynamics, in both constructed examples and empirical benchmark environments. We show that our model-based bootstrapping planner BOOTS helps to overcome this approximation issue and improves performance in synthetic settings and in the difficult MuJoCo environments. We raise some interesting open questions.

• Can we theoretically generalize our results to high-dimensional state spaces, or continuous action spaces? Can we theoretically analyze the number of pieces of the optimal Q-function of a stochastic dynamics?

• In this paper, we measure complexity by the size of the neural network. It is conceivable that for real-life problems, the complexity of a neural network is better measured by its weight norm. Could we build a more realistic theory with another measure of complexity?

• The BOOTS planner comes at the cost of longer test time. How do we plan efficiently in high-dimensional dynamics with a long planning horizon?

• The dynamics can also be more complex (perhaps in another sense) than the Q-function in certain cases. How do we efficiently identify the complexity of the optimal Q-function, policy, and dynamics, and how do we deploy the best algorithm for problems with different characteristics?" }, { "heading": "A EXPERIMENT DETAILS IN SECTION 5.2", "text": "" }, { "heading": "A.1 MODEL-BASED SAC (MBSAC)", "text": "Here we describe our MBSAC algorithm (Algorithm 2), a model-based policy optimization method used in BOOTS-MBSAC. As mentioned in Section 5.2, the main difference from MBPO and other works such as Wang & Ba (2019) and Kurutach et al. (2018) is that we do not use a model ensemble. Instead, we occasionally optimize the dynamics by one step of Adam to introduce stochasticity into the dynamics, following the technique in SLBO (Luo et al., 2019). As argued in Luo et al. (2019), the stochasticity in the dynamics can play a role similar to that of a model ensemble. Our algorithm is a few times faster than MBPO in wall-clock time. It performs similarly to MBPO on Humanoid, but somewhat worse than MBPO on the other environments. In MBSAC, we use SAC to optimize the policy π_β and the Q-function Q_ϕ. We choose SAC for its sample-efficiency, simplicity, and off-policy nature.
We mix the real data from the environment with virtual data, which are always fresh and generated by our learned dynamics model f̂_θ.6

6In the MBPO paper (Janner et al., 2019), the authors do not explicitly state their usage of real data in SAC; the released code, however, appears to make such use of real data.

Algorithm 2 MBSAC
1: Parameterize the policy π_β, the dynamics f̂_θ, and the Q-function Q_ϕ by neural networks. Initialize the replay buffer B with n_init steps of interaction with the environment by a random policy, and pretrain the dynamics on the data in the replay buffer.
2: t ← 0, and sample s_0 from the initial state distribution.
3: for n_iter iterations do
4: Perform action a_t ∼ π_β(·|s_t) in the environment, and obtain s′ as the next state from the environment.
5: s_{t+1} ← s′, and add the transition (s_t, a_t, s_{t+1}, r_t) to B.
6: t ← t + 1. If t = T or the trajectory is done, reset t = 0 and sample s_0 from the initial state distribution.
7: for n_policy iterations do
8: for n_model iterations do
9: Optimize f̂_θ with a mini-batch of data from B by one step of Adam.
10: Sample n_real data points B_real and n_start data points B_start from B.
11: Perform q steps of virtual rollouts using f̂_θ and the policy π_β starting from states in B_start; obtain B_virtual.
12: Update π_β and Q_ϕ using the mini-batch of data in B_real ∪ B_virtual by SAC.

For Ant, we modify the environment by adding the x and y coordinates to the observation space to make it possible to compute the reward from observations and actions. For Humanoid, we add the position of the center of mass. We make no other modifications. All environments have maximum horizon 1000.

For the policy network, we use an MLP with the ReLU activation function and two hidden layers, each of which contains 256 hidden units. For the dynamics model, we use a network with 2 Fixup blocks (Zhang et al., 2019), with the convolution layers replaced by fully connected layers. We found that with a similar number of parameters, Fixup blocks lead to a more accurate model in terms of validation loss. Each Fixup block has 500 hidden units. We follow the model training algorithm in Luo et al. (2019), in which a non-squared ℓ2 loss is used instead of the standard MSE loss." }, { "heading": "A.2 ABLATION STUDY", "text": "Planning with oracle dynamics and more environments. We found that BOOTS yields smaller improvements on top of MBSAC and SAC for the Cheetah and Walker environments. To diagnose the issue, we also plan with an oracle dynamics (the true dynamics). This tells us whether the lack of improvement comes from an inaccurate learned dynamics. The results are presented in two ways, in Figure 6 and Figure 7. In Figure 6, we plot the mean rewards and the standard deviation of the various methods across the randomness of multiple seeds. However, the randomness from the seeds somewhat obscures the gains of BOOTS on each individual run. Therefore, for completeness, we also plot the relative gain of BOOTS on top of MBSAC and SAC, and the standard deviation of the gains, in Figure 7.

From Figure 7 we see that planning with the oracle dynamics improves performance in most cases (with varying amounts of improvement). However, the learned dynamics does not always yield an improvement similar to that of the oracle dynamics. This suggests that the learned dynamics is not perfect but can often lead to good planning, and that the expressivity of the Q-function varies depending on the particular environment.
How and when to learn and use a learned dynamics for planning is a very interesting open question for future work.

The effect of planning horizon. We experimented with different planning horizons in Figure 8. Planning with a longer horizon earns slightly higher total rewards for both MBSAC and SAC. Planning horizon k = 16, however, does not work well. We suspect that this is caused by the compounding of errors in the learned dynamics." }, { "heading": "B OMITTED PROOFS IN SECTION 4", "text": "In this section we provide the proofs omitted in Section 4." }, { "heading": "B.1 PROOF OF THEOREM 4.2", "text": "Proof of Theorem 4.2. Since the solution to the Bellman optimality equations is unique, we only need to verify that the V* and π* defined in Equations (2) and (3) satisfy

V*(s) = r(s, π*(s)) + γV*(f(s, π*(s))),   (10)
V*(s) ≥ r(s, a) + γV*(f(s, a)), ∀a ≠ π*(s).   (11)

Recall that s^{(i)} is the i-th bit in the binary representation of s, that is, s^{(i)} = ⌊2^i s⌋ mod 2. Let ŝ = f(s, π*(s)). Since π*(s) = I[s^{(H+1)} = 0], which ensures that the H-th bit of the next state is 1, we have

ŝ^{(i)} = s^{(i+1)} for i ≠ H, and ŝ^{(H)} = 1.   (12)

For simplicity, define ε = 2(γ^{H−1} − γ^H). The definition of r(s, a) implies that

r(s, π*(s)) = I[1/2 ≤ s < 1] − I[π*(s) = 1] ε = s^{(1)} − (1 − s^{(H+1)}) ε.

By elementary manipulation, Eq. (3) is equivalent to

V*(s) = Σ_{i=1}^{H} γ^{i−1} s^{(i)} + Σ_{i=H+1}^{∞} (γ^{i−1} − 2(γ^{i−2} − γ^{i−1})(1 − s^{(i)})).   (13)

Now we verify Eq. (10) by plugging in the proposed solution (namely, Eq. (13)). As a result,

r(s, π*(s)) + γV*(ŝ)
= s^{(1)} − (1 − s^{(H+1)}) ε + γ Σ_{i=1}^{H} γ^{i−1} I[ŝ^{(i)} = 1] + γ Σ_{i=H+1}^{∞} (γ^{i−1} − (1 − ŝ^{(i)}) 2(γ^{i−2} − γ^{i−1}))
= s^{(1)} − (1 − s^{(H+1)}) ε + Σ_{i=2}^{H} γ^{i−1} s^{(i)} + γ^H + Σ_{i=H+2}^{∞} (γ^{i−1} − (1 − s^{(i)}) 2(γ^{i−2} − γ^{i−1}))
= Σ_{i=1}^{H} γ^{i−1} s^{(i)} + Σ_{i=H+1}^{∞} (γ^{i−1} − (1 − s^{(i)}) 2(γ^{i−2} − γ^{i−1})) = V*(s),

which verifies Eq. (10).

In the following we verify Eq. (11). Consider any a ≠ π*(s), and let s̄ = f(s, a) for shorthand. Note that s̄^{(i)} = s^{(i+1)} for i > H. As a result,

V*(s) − γV*(s̄)
= Σ_{i=1}^{H} γ^{i−1} s^{(i)} + Σ_{i=H+1}^{∞} (γ^{i−1} − (1 − s^{(i)}) 2(γ^{i−2} − γ^{i−1}))
− Σ_{i=1}^{H} γ^{i} s̄^{(i)} − Σ_{i=H+1}^{∞} (γ^{i} − (1 − s̄^{(i)}) 2(γ^{i−1} − γ^{i}))
= s^{(1)} + Σ_{i=1}^{H−1} γ^{i} (s^{(i+1)} − s̄^{(i)}) − γ^H s̄^{(H)} + γ^H − 2(1 − s^{(H+1)})(γ^{H−1} − γ^H).

For the case where s^{(H+1)} = 0, we have π*(s) = 1. For a = 0, s̄^{(i)} = s^{(i+1)} for all i ≥ 1. Consequently, V*(s) − γV*(s̄) = s^{(1)} + γ^H − ε > s^{(1)} = r(s, 0), where the last inequality holds when γ^H − ε > 0, or equivalently, γ > 2/3.

For the case where s^{(H+1)} = 1, we have π*(s) = 0. For a = 1, since s^{(H+1)} = 1, we have s̄^{(H)} = 0. Let p = max{i ≤ H : s^{(i)} = 0}, where we define the max of an empty set to be 0. The dynamics f(s, 1) implies that

s̄^{(i)} = s^{(i+1)} for i + 1 < p or i > H; s̄^{(i)} = 1 for i + 1 = p; and s̄^{(i)} = 0 for p < i + 1 ≤ H + 1.

Therefore,

V*(s) − γV*(s̄) = s^{(1)} + γ^H + Σ_{i=1}^{H−1} γ^{i} (s^{(i+1)} − s̄^{(i)}) > s^{(1)} − ε = r(s, 1).

In both cases, we have V*(s) − γV*(s̄) > r(s, a) for a ≠ π*(s), which proves Eq. (11)." }, { "heading": "B.2 PROOF OF THEOREM 4.3", "text": "For a fixed parameter H, let z(π) denote the number of pieces of π. For a policy π, define µ^π_h to be the state distribution when executing policy π at step h.

In order to prove Theorem 4.3, we show that if 1/2 − 2Hz(π)/2^H > 0.3, then η(π) < 0.92 η(π*). The proof is based on the advantage decomposition lemma.

Lemma B.1 (Advantage Decomposition Lemma (Schulman et al., 2015; Kakade & Langford, 2002)). Define A^π(s, a) = r(s, a) + γV^π(f(s, a)) − V^π(s) = Q^π(s, a) − V^π(s). Given policies π and π̃, we have

η(π) = η(π̃) + Σ_{h=1}^{∞} γ^{h−1} E_{s∼µ^π_h}[A^{π̃}(s, π(s))].   (14)

Corollary B.2.
For any policy π, we have

η(π*) − η(π) = Σ_{h=1}^{∞} γ^{h−1} E_{s∼µ^π_h}[V*(s) − Q*(s, π(s))].   (15)

Intuitively speaking, since π*(s) = I[s^{(H+1)} = 0], a policy π with polynomially many pieces behaves suboptimally in most of the states, which leads to the suboptimality of π. Lemma B.3 shows that the single-step suboptimality gap V*(s) − Q*(s, π(s)) is large for a constant portion of the states. On the other hand, Lemma B.4 proves that the state distribution µ^π_h is near uniform, which means that the suboptimal states cannot be avoided. Combining these with Corollary B.2, the suboptimality gap of policy π is large.

The next lemma shows that if π does not change its action for states within a certain interval, then the average advantage term V*(s) − Q*(s, π(s)) on this interval is large. The proof of this lemma is deferred to Section B.3.

Lemma B.3. Let ℓ_k = [k/2^H, (k + 1)/2^H), and K = {0 ≤ k < 2^H : k mod 2 = 1}. Then for k ∈ K, if policy π does not change its action on the interval ℓ_k (that is, |{π(s) : s ∈ ℓ_k}| = 1), we have

(1/|ℓ_k|) ∫_{s∈ℓ_k} (V*(s) − Q*(s, π(s))) ds ≥ 0.183   (16)

for H > 500.

The next lemma shows that when the number of pieces of π is not too large, the distribution µ^π_h is close to the uniform distribution for steps 1 ≤ h ≤ H. The proof of this lemma is also deferred to Section B.3.

Lemma B.4. Let z(π) be the number of pieces of policy π. For k ∈ [2^H], define the interval ℓ_k = [k/2^H, (k + 1)/2^H), and let ν_h(k) = inf_{s∈ℓ_k} µ^π_h(s). If the initial state distribution µ is the uniform distribution, then for any h ≥ 1,

Σ_{0≤k<2^H} 2^{−H} · ν_h(k) ≥ 1 − 2h z(π)/2^H.   (17)

Now we present the proof of Theorem 4.3.

Proof of Theorem 4.3. For any k ∈ [2^H], consider the interval ℓ_k = [k/2^H, (k + 1)/2^H), and let K = {k ∈ [2^H] : k mod 2 = 1}. If π does not change on the interval ℓ_k (that is, |{π(s) : s ∈ ℓ_k}| = 1), then by Lemma B.3 we have

∫_{s∈ℓ_k} (V*(s) − Q*(s, π(s))) ds ≥ 0.183 · 2^{−H}.   (18)

Let ν_h(k) = inf_{s∈ℓ_k} µ^π_h(s). Then by the advantage decomposition lemma (namely, Corollary B.2), we have

η(π*) − η(π) = Σ_{h=1}^{∞} γ^{h−1} ∫_{s∈[0,1)} (V*(s) − Q*(s, π(s))) dµ^π_h(s)
≥ Σ_{h=1}^{10H} γ^{h−1} Σ_{k∈K} ∫_{s∈ℓ_k} (V*(s) − Q*(s, π(s))) dµ^π_h(s)
≥ Σ_{h=1}^{10H} γ^{h−1} Σ_{k∈K} ν_h(k) ∫_{s∈ℓ_k} (V*(s) − Q*(s, π(s))) ds
≥ Σ_{h=1}^{10H} γ^{h−1} Σ_{k∈K} 0.183 · 2^{−H} · ν_h(k).

By Lemma B.4 and a union bound, we get

Σ_{k∈K} 2^{−H} · ν_h(k) ≥ 1/2 − 2h z(π)/2^H.   (19)

For the sake of contradiction, assume z(π) = o(exp(cH)/H). Then for large enough H we have

1/2 − 20H z(π)/2^H ≥ 0.49,

which means that Σ_{k∈K} 2^{−H} ν_h(k) ≥ 0.49 for all h ≤ 10H. Consequently, for H > 500, we have

η(π*) − η(π) ≥ Σ_{h=1}^{10H} (0.183 × 0.49) γ^{h−1} ≥ 0.089 · (1 − γ^{10H})/(1 − γ) ≥ 0.088/(1 − γ).

Now, since η(π*) ≤ 1/(1 − γ), we have η(π) < 0.92 η(π*). Therefore, any near-optimal policy π must have z(π) = Ω(exp(cH)/H)." }, { "heading": "B.3 PROOFS OF LEMMA B.3 AND LEMMA B.4", "text": "In this section, we present the proofs of the two lemmas used in Section B.2.

Proof of Lemma B.3. Note that for any k ∈ K, s^{(H)} = 1 for all s ∈ ℓ_k. Now fix a parameter k ∈ K, and suppose π(s) = a_i for s ∈ ℓ_k. Then for any s such that s^{(H+1)} + i ≠ 1, we have

V*(s) − Q*(s, π(s)) ≥ γ^H − ε.

For H > 500, we have γ^H − ε > 0.366. Therefore,

∫_{s∈ℓ_k} (V*(s) − Q*(s, π(s))) ds ≥ ∫_{s∈ℓ_k} 0.366 · I[s^{(H+1)} ≠ 1 − i] ds ≥ 0.366 · 2^{−H−1} = 0.183 · 2^{−H}.

Proof of Lemma B.4. Fix a parameter H and a policy π. For every h, we prove by induction that there exists a function ξ_h(s) such that

(a) 0 ≤ ξ_h(s) ≤ min{µ^π_h(s), 1},

(b) inf_{s∈ℓ_k} ξ_h(s) = sup_{s∈ℓ_k} ξ_h(s), ∀k ∈ [2^H],

(c) ∫_{s∈[0,1)} dξ_h(s) ≥ 1 − h · z(π)/2^{H−1}.

For the base case h = 1, we define ξ_h(s) = µ^π_h(s) = 1 for all s ∈ [0, 1).
Now we construct ξ_{h+1} from ξ_h.

For a fixed k ∈ [2^H], define l_k = k · 2^{−H} and r_k = (k + 1) · 2^{−H} as the left and right endpoints of the interval ℓ_k. Let {x^{(i)}_k}_{i=1}^{2} be the set of 2 solutions of the equation

2x + 2^{−H} ≡ l_k (mod 1)

with 0 ≤ x < 1, and define y^{(i)}_k = x^{(i)}_k + 2^{−H} mod 1. By definition, only states from the set ∪_{i=1}^{2} [x^{(i)}_k, y^{(i)}_k) can reach states in the interval ℓ_k by a single transition. We define the set I_k = {i : 1 ≤ i ≤ 2, |{π(s) : s ∈ [x^{(i)}_k, y^{(i)}_k)}| = 1}, that is, the intervals on which the policy π acts unanimously. Consequently, for i ∈ I_k, the set {s ∈ [x^{(i)}_k, y^{(i)}_k) : f(s, π(s)) ∈ ℓ_k} is an interval of length 2^{−H−1} of the form

u^{(i)}_k ≜ [x^{(i)}_k + w^{(i)}_k · 2^{−H−1}, x^{(i)}_k + (w^{(i)}_k + 1) · 2^{−H−1})

for some integer w^{(i)}_k ∈ {0, 1}. By statement (b) of the induction hypothesis,

inf_{s∈u^{(i)}_k} ξ_h(s) = sup_{s∈u^{(i)}_k} ξ_h(s).   (20)

Now, the density ξ_{h+1}(s) for s ∈ ℓ_k is defined as

ξ_{h+1}(s) ≜ Σ_{i∈I_k} (1/2) · ξ_h(x^{(i)}_k + w^{(i)}_k · 2^{−H−1}).

The intuition behind the construction is that we discard the density that causes non-uniform behavior (that is, the density on the intervals [x^{(i)}_k, y^{(i)}_k) with i ∉ I_k). When the number of pieces of π is small, we keep most of the density. Statement (b) is now naturally satisfied by the definition of ξ_{h+1}; we verify statements (a) and (c) below.

For any set B ⊆ ℓ_k, let (T^π)^{−1}(B) = {s ∈ S : f(s, π(s)) ∈ B} be the inverse image of B under the Markov transition T^π. Then we have

(T^π ξ_h)(B) ≜ ξ_h((T^π)^{−1}(B)) = Σ_{i∈{1,2}} ξ_h((T^π)^{−1}(B) ∩ [x^{(i)}_k, y^{(i)}_k))
≥ Σ_{i∈I_k} ξ_h((T^π)^{−1}(B) ∩ [x^{(i)}_k, y^{(i)}_k))
= Σ_{i∈I_k} |(T^π)^{−1}(B) ∩ [x^{(i)}_k, y^{(i)}_k)| · ξ_h(x^{(i)}_k + w^{(i)}_k · 2^{−H−1})   (by Eq. (20))
= Σ_{i∈I_k} (|B|/2) · ξ_h(x^{(i)}_k + w^{(i)}_k · 2^{−H−1}),

where |·| is shorthand for the standard Lebesgue measure. By definition, we have

ξ_{h+1}(B) = Σ_{i∈I_k} (|B|/2) · ξ_h(x^{(i)}_k + w^{(i)}_k · 2^{−H−1}) ≤ (T^π ξ_h)(B) ≤ (T^π µ^π_h)(B) = µ^π_{h+1}(B),

which verifies statement (a).

For statement (c), recall that S = [0, 1) is the state space. Note that T^π preserves the overall density, that is, (T^π ξ_h)(S) = ξ_h(S). We only need to prove that

(T^π ξ_h)(S) − ξ_{h+1}(S) ≤ z(π)/2^{H−1},   (21)

and statement (c) then follows by induction.

By the definition of ξ_{h+1}(s) and the induction hypothesis ξ_h(s) ≤ 1, we have

(T^π ξ_h)(ℓ_k) − ξ_{h+1}(ℓ_k) ≤ (2 − |I_k|) · 2^{−H}.

On the other hand, for any s ∈ S, the set {k ∈ [2^H] : s ∈ ∪_{i=1}^{2}[x^{(i)}_k, y^{(i)}_k)} has cardinality 2, which means that one discontinuity point of π can correspond to at most 2 intervals that are not in I_k for some k. Thus, we have

Σ_{0≤k<2^H} |I_k| ≥ 2^{H+1} − Σ_{s: π_−(s)≠π_+(s)} |{k ∈ [2^H] : s ∈ ∪_{i=1}^{2}[x^{(i)}_k, y^{(i)}_k)}| ≥ 2^{H+1} − 2 · z(π).

Consequently,

(T^π ξ_h)(S) − ξ_{h+1}(S) = Σ_{0≤k<2^H} ((T^π ξ_h)(ℓ_k) − ξ_{h+1}(ℓ_k)) ≤ z(π) · 2^{−H+1},

which proves statement (c)." }, { "heading": "B.4 SAMPLE COMPLEXITY LOWER BOUND OF Q-LEARNING", "text": "Recall that Corollary 4.4 says that in order for a Q-learning algorithm to find a near-optimal policy, an exponentially large Q-network is required. In this subsection, we show that even if an exponentially large Q-network is used for Q-learning, an exponential number of samples is still needed, ruling out the possibility of efficiently solving the constructed MDPs with Q-learning algorithms.

Towards proving the sample complexity lower bound, we consider a stronger family of Q-learning algorithms, Q-learning with Oracle (Algorithm 3). We assume that the algorithm has access to a Q-ORACLE, which returns the optimal Q-function upon querying any pair (s, a) during the training process.
Q-learning with Oracle is conceptually a stronger computational model than the vanilla Q-learning algorithm, because it can directly fit the Q-functions with supervised learning, without relying on rollouts or the previous Q-function to estimate the target Q-value. Theorem B.5 proves a sample complexity lower bound for Q-learning algorithms on the constructed example.

Algorithm 3 Q-LEARNING WITH ORACLE
Require: A hypothesis space Q of Q-function parameterizations.
1: Sample s_0 ∼ µ from the initial state distribution µ
2: for i = 1, 2, ..., n do
3: Decide whether to restart the trajectory by setting s_i ∼ µ, based on historical information
4: Query the Q-ORACLE to get the function Q*(s_i, ·).
5: Apply any action a_i (according to any rule) and sample s_{i+1} ∼ f(s_i, a_i).
6: Learn the Q-function that fits all the data best:

Q ← arg min_{Q∈Q} (1/n) Σ_{i=1}^{n} (Q(s_i, a_i) − Q*(s_i, a_i))^2 + λR(Q)

7: Return the greedy policy according to Q.

Theorem B.5 (Informal Version of Theorem B.7). Suppose Q is the class of infinitely wide two-layer neural networks, and R(Q) is the ℓ1 norm of the parameters, serving as a tiebreaker. Then any instantiation of the Q-LEARNING WITH ORACLE algorithm requires exponentially many samples to find a policy π such that η(π) > 0.99 η(π*).

A formal proof of Theorem B.5 is given in Appendix B.5. The idea of the proof is to exploit the sparsity of the solution found by the minimal-norm tiebreaker: it can be shown that there are at most O(n) non-zero neurons in the minimal-norm solution, where n is the number of data points. The proof is completed by combining this with Theorem 4.3." }, { "heading": "B.5 PROOF OF THEOREM B.5", "text": "A two-layer ReLU neural net Q(s, ·) with input s has the following form:

Q(s, a) = Σ_{i=1}^{d} w_{i,a} [k_i s + b_i]_+ + c_a,   (22)

where d is the number of hidden neurons, and w_{i,a}, c_a, k_i, b_i are the parameters of the neural net, with c_a and b_i the bias terms. [x]_+ is shorthand for the ReLU activation I[x > 0] x. Now we define the norm of a neural net.

Definition B.6 (Norm of a Neural Net). Writing w_i = (w_{i,1}, w_{i,2}), the norm of a two-layer ReLU neural net is defined as

Σ_{i=1}^{d} (|w_i|_1 + |k_i|).   (23)

Recall that the Q-learning with Oracle algorithm finds its solution via the following supervised learning problem:

min_{Q∈Q} (1/n) Σ_{t=1}^{n} (Q(s_t, a_t) − Q*(s_t, a_t))^2.   (24)

We now present the formal version of Theorem B.5.

Theorem B.7. Let Q be the minimal ℓ1-norm solution to Eq. (24), and π the greedy policy according to Q. When n = o(exp(cH)/H), we have η(π) < 0.99 η(π*).

The proof of Theorem B.5 proceeds by characterizing the minimal-norm solution, namely its sparsity, as stated in the next lemma.

Lemma B.8. The minimal-norm solution to Eq. (24) has at most 32n + 1 non-zero neurons. That is, |{i : k_i ≠ 0}| ≤ 32n + 1.

We first present the proof of Theorem B.7, followed by the proof of Lemma B.8.

Proof of Theorem B.7. Recall that the policy is given by π(s) = arg max_{a∈A} Q(s, a). For a Q-function with 32n + 2 pieces, the greedy policy according to Q(s, a) has at most 64n + 4 pieces. Combining this with Theorem 4.3, in order to find a policy π such that η(π) > 0.99 η(π*), n needs to be exponentially large (in the effective horizon H).

The proof of Lemma B.8 is based on merging neurons. Let x_i = −b_i/k_i, w_i = (w_{i,1}, w_{i,2}), and c = (c_1, c_2). In vector form, the neural net defined in Eq. (22) can be written as

Q(s, ·) = Σ_{i=1}^{d} w_i [k_i(s − x_i)]_+ + c.

First we show that neurons with the same x_i can be merged together.

Lemma B.9. Consider the following two neurons,

k_1 [s − x_1]_+ w_1 and k_2 [s − x_2]_+ w_2,

with k_1 > 0, k_2 > 0.
If x_1 = x_2, then we can replace them with a single neuron of the form k′ [s − x_1]_+ w′ without changing the output of the network. Furthermore, if w_1 ≠ 0 and w_2 ≠ 0, the norm strictly decreases after the replacement.

Proof. We set k′ = √(|k_1 w_1 + k_2 w_2|_1) and w′ = (k_1 w_1 + k_2 w_2)/k′, where |w|_1 denotes the 1-norm of the vector w. Then, for all s ∈ R,

k′ [s − x_1]_+ w′ = (k_1 w_1 + k_2 w_2)[s − x_1]_+ = k_1 [s − x_1]_+ w_1 + k_2 [s − x_1]_+ w_2.

The norm of the new neuron is |k′| + |w′|_1. By calculation we have

|k′| + |w′|_1 = 2√(|k_1 w_1 + k_2 w_2|_1) ≤ 2√(|k_1 w_1|_1 + |k_2 w_2|_1) ≤(a) 2(√(|k_1 w_1|_1) + √(|k_2 w_2|_1)) ≤ |k_1| + |w_1|_1 + |k_2| + |w_2|_1.

Note that inequality (a) is strict when |k_1 w_1|_1 ≠ 0 and |k_2 w_2|_1 ≠ 0.

Next we consider merging two neurons with different intercepts between two data points. Without loss of generality, assume the data points are listed in ascending order, that is, s_i ≤ s_{i+1}.

Lemma B.10. Consider two neurons

k_1 [s − x_0]_+ w_1 and k_2 [s − x_0 − δ]_+ w_2,

with k_1 > 0, k_2 > 0. If s_i ≤ x_0 < x_0 + δ ≤ s_{i+1} for some 1 ≤ i ≤ n, then the two neurons can be replaced by a set of three neurons,

k′ [s − x_0]_+ w′, k̃ [s − s_i]_+ w̃, and k̃ [s − s_{i+1}]_+ (−w̃),

such that for s ≤ s_i or s ≥ s_{i+1}, the output of the network is unchanged. Furthermore, if δ ≤ (s_{i+1} − s_i)/16 and |w_1|_1 ≠ 0, |w_2|_1 ≠ 0, the norm decreases strictly.

Proof. For simplicity, define ∆ = s_{i+1} − s_i. We set

k′ = √(|k_1 w_1 + k_2 w_2|_1),
w′ = (k_1 w_1 + k_2 w_2)/k′,
k̃ = √(|k_2 w_2|_1 δ/∆),
w̃ = −k_2 w_2 δ/(∆ k̃).

Note that for s ≤ s_i, all of the neurons are inactive. For s ≥ s_{i+1}, all of the neurons are active, and

k′ w′ (s − x_0) + k̃ w̃ (s − s_i) − k̃ w̃ (s − s_{i+1}) = (k_1 w_1 + k_2 w_2)(s − x_0) − k_2 w_2 δ = k_1 (s − x_0) w_1 + k_2 (s − x_0 − δ) w_2,

which means that the output of the network is unchanged. Now consider the norms of the two networks. Without loss of generality, assume |k_1 w_1|_1 > |k_2 w_2|_1. The original network has norm |k_1| + |w_1|_1 + |k_2| + |w_2|_1, and the new network has norm

|k′| + |w′|_1 + 2|k̃| + 2|w̃|_1 = 2√(|k_1 w_1 + k_2 w_2|_1) + 4√(|k_2 w_2|_1 δ/∆)
≤(a) |k_1| + |w_1|_1 + |k_2| + |w_2|_1 + (4√(|k_2 w_2|_1 δ/∆) − (1/2)(|k_2| + |w_2|_1)),

where inequality (a) is a result of Lemma E.1, and is strict when |w_1|_1 ≠ 0 and |w_2|_1 ≠ 0. When δ/∆ < 1/16, we have 4√(|k_2 w_2|_1 δ/∆) − (1/2)(|k_2| + |w_2|_1) < 0, which implies that

|k′| + |w′|_1 + 2|k̃| + 2|w̃|_1 < |k_1| + |w_1|_1 + |k_2| + |w_2|_1.

Similarly, two neurons with k_1 < 0 and k_2 < 0 can be merged together.

Now we are ready to prove Lemma B.8. As hinted by the previous lemmas, we show that between two data points, there are at most 34 non-zero neurons in the minimal-norm solution.

Proof of Lemma B.8. Consider the solution to Eq. (24). Without loss of generality, assume that s_i ≤ s_{i+1}. In the minimal-norm solution, it is clear that |w_i|_1 = 0 if and only if k_i = 0. Therefore we only consider the neurons with k_i ≠ 0, indexed by 1 ≤ i ≤ d′. Let B_t = {−b_i/k_i : 1 ≤ i ≤ d′, s_t < −b_i/k_i < s_{t+1}, k_i > 0}. Next we prove that in the minimal-norm solution, |B_t| ≤ 15. For the sake of contradiction, suppose |B_t| > 15. Then there exist i, j such that s_t < −b_i/k_i < s_{t+1}, s_t < −b_j/k_j < s_{t+1}, |b_i/k_i − b_j/k_j| < (s_{t+1} − s_t)/16, and k_i > 0, k_j > 0. By Lemma B.10, we can obtain a neural net with smaller norm by merging neurons i and j together without violating Eq. (24), which leads to a contradiction.

By Lemma B.9, |B_t| ≤ 15 implies that there are at most 15 non-zero neurons with s_t < −b_i/k_i < s_{t+1} and k_i > 0.
For the same reason, there are at most 15 non-zero neurons with s_t < −b_i/k_i < s_{t+1} and k_i < 0.

On the other hand, there are at most 2 non-zero neurons with s_t = −b_i/k_i for each t ≤ n, and there is at most 1 non-zero neuron with −b_i/k_i < s_1. Therefore, we have d′ ≤ 32n + 1." }, { "heading": "B.6 PROOF OF THEOREM 5.1", "text": "In this section we present the full proof of Theorem 5.1.

Proof. First we define the true trajectory estimator

η(s_0, a_0, a_1, ..., a_k) = Σ_{j=0}^{k−1} γ^j r(s_j, a_j) + γ^k Q*(s_k, a_k),

the true optimal action sequence

a*_0, a*_1, ..., a*_k = arg max_{a_0, a_1, ..., a_k} η(s_0, a_0, a_1, ..., a_k),

and the true optimal trajectory

s*_0 = s_0, and s*_j = f(s*_{j−1}, a*_{j−1}) for all j ≥ 1.

It follows from the definition of the optimal policy that a*_j = π*(s*_j). Consequently we have

s*_k^{(H−k+1)} = s*_k^{(H−k+2)} = ··· = s*_k^{(H)} = 1.

Define the set G = {s : s^{(H−k+1)} = s^{(H−k+2)} = ··· = s^{(H)} = 1}. We claim that the following function satisfies the statement of Theorem 5.1:

Q(s, a) = I[s ∈ G] · 2/(1 − γ).

Since s*_k ∈ G, and s_k ∉ G for any s_k generated by a non-optimal action sequence, we have

Q(s*_k, a) > Q*(s*_k, a) ≥ Q*(s_k, a) > Q(s_k, a),

where the second inequality comes from the optimality of the action sequence a*_h. As a consequence, for any (a_0, a_1, ..., a_k) ≠ (a*_0, a*_1, ..., a*_k),

η̂(s_0, a*_0, a*_1, ..., a*_k) > η(s_0, a*_0, a*_1, ..., a*_k) ≥ η(s_0, a_0, a_1, ..., a_k) > η̂(s_0, a_0, a_1, ..., a_k).

Therefore, (â*_0, â*_1, ..., â*_k) = (a*_0, a*_1, ..., a*_k)." }, { "heading": "C EXTENSION OF THE CONSTRUCTED FAMILY", "text": "In this section, we present an extension of our construction such that the dynamics is Lipschitz. The action space is A = {0, 1, 2, 3, 4}. We define CLIP(x) = max{min{x, 1}, 0}.

Definition C.1. Given the effective horizon H = (1 − γ)^{−1}, we define an MDP M′_H as follows. Let κ = 2^{−H}. The dynamics is defined as

f(s, 0) = CLIP(2s), f(s, 1) = CLIP(2s − 1),
f(s, 2) = CLIP(2s + κ), f(s, 3) = CLIP(2s + κ − 1), f(s, 4) = CLIP(2s + κ − 2)." }, { "heading": "Reward function is given by", "text": "r(s, 0) = r(s, 1) = I[1/2 ≤ s < 1],
r(s, 2) = r(s, 3) = r(s, 4) = I[1/2 ≤ s < 1] − 2(γ^{H−1} − γ^H).

The intuition behind the extension is that we perform the mod operation manually. The following theorem is the analog of Theorem 4.2.

Theorem C.2. The optimal policy π* for M′_H is defined by

π*(s) = 0 if s^{(H+1)} = 0 and 2s < 1; 1 if s^{(H+1)} = 0 and 1 ≤ 2s < 2; 2 if s^{(H+1)} = 1 and 2s + κ < 1; 3 if s^{(H+1)} = 1 and 1 ≤ 2s + κ < 2; 4 if s^{(H+1)} = 1 and 2 ≤ 2s + κ.   (25)

And the corresponding optimal value function is

V*(s) = Σ_{h=1}^{H} γ^{h−1} s^{(h)} + Σ_{h=H+1}^{∞} γ^{h−1} (1 + 2(s^{(h+1)} − s^{(h)})) + γ^{H−1} (2s^{(H+1)} − 2).   (26)

We can obtain a similar upper bound on the performance of policies with polynomially many pieces.

Theorem C.3. Let M′_H be the MDP constructed in Definition C.1. Suppose a piecewise linear policy π has near-optimal reward in the sense that η(π) ≥ 0.99 · η(π*). Then it has to have at least Ω(exp(cH)/H) pieces for some universal constant c > 0.

The proof is very similar to that of Theorem 4.3. One difference here is the need to treat the cases where f(s, a) = 0 or f(s, a) = 1 separately. Attentive readers may notice that the dynamics where f(s, a) = 0 or f(s, a) = 1 may destroy the “near uniform” behavior of the state distribution µ^π_h (see Lemma B.4). Here we show that such destruction comes at a high cost. Formally speaking, if the clipping is triggered on an interval, then the averaged single-step suboptimality gap is 0.1/(1 − γ). (A small sketch of the clipped dynamics is given below.)
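For concreteness, here is a minimal implementation of the clipped dynamics of Definition C.1 (our own illustration; the function names are ours):

def clip01(x):
    return max(min(x, 1.0), 0.0)

def f_clipped(s, a, kappa=2.0 ** -10):
    # Dynamics of Definition C.1: the five actions realize 2s (+ kappa)
    # with the "mod 1" wrap-around performed manually via CLIP.
    shift = {0: 0.0, 1: -1.0, 2: kappa, 3: kappa - 1.0, 4: kappa - 2.0}[a]
    return clip01(2.0 * s + shift)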
Lemma C.4. Let ℓ_k = [k/2^{H/2}, (k + 1)/2^{H/2}). For k ∈ [2^{H/2}], if policy π does not change its action on the interval ℓ_k (that is, |{π(s) : s ∈ ℓ_k}| = 1), and f(s, π(s)) = 0 for all s ∈ ℓ_k or f(s, π(s)) = 1 for all s ∈ ℓ_k, then we have

(1/|ℓ_k|) ∫_{s∈ℓ_k} (V*(s) − Q*(s, π(s))) ds ≥ 0.1/(1 − γ)   (27)

for large enough H.

Proof. Without loss of generality, we consider the case where f(s, π(s)) = 0; the proof for f(s, π(s)) = 1 is essentially the same.

By elementary manipulation, we have

V*(s) − V*(0) ≥ Σ_{i=1}^{H} γ^{i−1} s^{(i)}.

Let ŝ = f(s, π*(s)). It follows from the Bellman equation (10) that

V*(s) = r(s, π*(s)) + γV*(ŝ),
Q*(s, π(s)) = r(s, π(s)) + γV*(0).

Recall that we define ε = 2(γ^{H−1} − γ^H). As a consequence,

V*(s) − Q*(s, π(s)) > r(s, π*(s)) − r(s, π(s)) + γ(V*(ŝ) − V*(0)) ≥ −ε + γ Σ_{i=1}^{H} γ^{i−1} ŝ^{(i)}.

Plugging into Eq. (27), we have

(1/|ℓ_k|) ∫_{s∈ℓ_k} (V*(s) − Q*(s, π(s))) ds ≥ −ε + Σ_{i=1}^{H} γ^i ((1/|ℓ_k|) ∫_{s∈ℓ_k} ŝ^{(i)} ds) ≥ −ε + (γ^{H/2} − γ^H)/(1 − γ).

Lemma C.4 is proved by noticing that, for large enough H,

−ε + (γ^{H/2} − γ^H)/(1 − γ) > 0.1/(1 − γ).

Let D = {0, 1} for simplicity. For any policy π, we define a transition operator T̂^π such that

(T̂^π µ)(Z) = µ({s : f(s, π(s)) ∈ Z, f(s, π(s)) ∉ D}),

and the state distribution induced by it, defined recursively by

µ̂^π_1(s) = 1, µ̂^π_h = T̂^π µ̂^π_{h−1}.

We also define the density function for the states that are truncated as follows:

ρ̂^π_h(s) = I[f(s, π(s)) ∈ D] µ̂^π_h(s).

Following the advantage decomposition lemma (Corollary B.2), the key step in proving Theorem C.3 is

η(π*) − η(π) ≥ Σ_{h=1}^{∞} γ^{h−1} E_{s∼µ̂^π_h}[V*(s) − Q*(s, π(s))] + Σ_{h=1}^{∞} γ^h E_{s∼ρ̂^π_h}[V*(s) − Q*(s, π(s))].   (28)

Similar to Lemma B.4, the following lemma shows that the density on most of the small intervals is either uniformly clipped or uniformly spread over the interval.

Lemma C.5. Let z(π) be the number of pieces of policy π. For k ∈ [2^{H/2}], define the interval ℓ_k = [k/2^{H/2}, (k + 1)/2^{H/2}). Let ν_h(k) = inf_{s∈ℓ_k} µ̂^π_h(s) and ω_h(k) = inf_{s∈ℓ_k} ρ̂^π_h(s). If the initial state distribution µ is the uniform distribution, then for any h ≥ 1,

Σ_{k=0}^{2^{H/2}} 2^{−H/2} · ν_h(k) + Σ_{h′=1}^{h−1} Σ_{k=0}^{2^{H/2}} 2^{−H/2} · ω_{h′}(k) ≥ 1 − 2h (z(π) + 10)/2^{H/2}.   (29)

Proof. Omitted. The proof is similar to that of Lemma B.4.

Now we present the proof of Theorem C.3.

Proof of Theorem C.3. For any k ∈ [2^{H/2}], consider the interval ℓ_k = [k/2^{H/2}, (k + 1)/2^{H/2}). If π does not change on the interval ℓ_k (that is, |{π(s) : s ∈ ℓ_k}| = 1), by Lemma B.3 we have

∫_{s∈ℓ_k} (V*(s) − Q*(s, π(s))) ds ≥ 0.075 · 2^{−H/2}.   (30)

By Eq. (28), Eq. (30) and Lemma C.4, we have

η(π*) − η(π) ≥ Σ_{h=1}^{H} γ^{h−1} Σ_{k=0}^{2^{H/2}} 0.075 · 2^{−H/2} · ν_h(k) + Σ_{h=1}^{H} Σ_{k=0}^{2^{H/2}} γ^h · 2^{−H/2} · ω_h(k) · 0.1/(1 − γ).   (31)

By Lemma C.5, we get

Σ_{k=0}^{2^{H/2}} 2^{−H/2} · ν_h(k) + Σ_{h′=1}^{h−1} Σ_{k=0}^{2^{H/2}} 2^{−H/2} · ω_{h′}(k) ≥ 1 − 2h (z(π) + 10)/2^{H/2}.   (32)

For the sake of contradiction, we assume z(π) = o(exp(cH)/H). Then for large enough H we have

1 − 2H (z(π) + 10)/2^{H/2} > 0.8.

Consequently,

Σ_{k=0}^{2^{H/2}} 2^{−H/2} · ν_h(k) > 0.8 − Σ_{h′=1}^{h−1} Σ_{k=0}^{2^{H/2}} 2^{−H/2} · ω_{h′}(k).   (33)

Plugging into Eq. (31), we get

η(π*) − η(π)
≥ Σ_{h=1}^{H} 0.075 γ^{h−1} Σ_{k=0}^{2^{H/2}} 2^{−H/2} ν_h(k) + Σ_{h=1}^{H} Σ_{k=0}^{2^{H/2}} γ^h · 2^{−H/2} · ω_h(k) · 0.1/(1 − γ)
≥ Σ_{h=1}^{H} 0.075 γ^{h−1} (0.8 − Σ_{h′=1}^{h−1} Σ_{k=0}^{2^{H/2}} 2^{−H/2} · ω_{h′}(k)) + Σ_{h=1}^{H} Σ_{k=0}^{2^{H/2}} γ^h · 2^{−H/2} · ω_h(k) · 0.1/(1 − γ)
≥ 0.06 (1 − γ^H)/(1 − γ) + Σ_{h=1}^{H} Σ_{k=0}^{2^{H/2}} 2^{−H/2} · ω_h(k) (0.1 γ^h/(1 − γ) − 0.075 Σ_{h′=h}^{H} γ^{h′−1})
≥ 0.06 (1 − γ^H)/(1 − γ) + Σ_{h=1}^{H} Σ_{k=0}^{2^{H/2}} 2^{−H/2} · ω_h(k) (γ^{h−1}/(1 − γ)) (0.1γ − 0.075(1 − γ^{H−h})).

When γ ≥ 3/4, we have 0.1γ − 0.075(1 − γ^{H−h}) > 0. As a consequence,

η(π*) − η(π) > 0.06 (1 − γ^H)/(1 − γ) ≥ 0.01/(1 − γ).

Now, since η(π*) ≤ 1/(1 − γ), we have η(π) < 0.99 η(π*).
Therefore, any near-optimal policy π must have z(π) = Ω(exp(cH)/H)." }, { "heading": "D OMITTED DETAILS OF EMPIRICAL RESULTS IN THE TOY EXAMPLE", "text": "" }, { "heading": "D.1 TWO METHODS TO GENERATE MDPS", "text": "In this section we present the two methods for generating MDPs. In both methods, the dynamics f(s, a) has three pieces and is Lipschitz. The dynamics is generated by connecting the kinks with linear segments.

RAND method. As stated in Section 4.3, the RAND method generates the kinks {x_i} and the corresponding values {x′_i} randomly. With this method, the generated MDPs have less structure. The details are as follows.

• State space S = [0, 1).
• Action space A = {0, 1}.
• The number of pieces is fixed to 3. The positions of the kinks are generated by x_i ∼ U(0, 1) for i = 1, 2, with x_0 = 0 and x_3 = 1. The values at the kinks are generated by x′_i ∼ U(0, 1).
• The reward function is given by r(s, a) = s, ∀s ∈ S, a ∈ A.
• The horizon is fixed at H = 10.
• The initial state distribution is U(0, 1).

Figure 1 visualizes one of the RAND-generated MDPs with a complex Q-function.

SEMI-RAND method. In this method, we add some structure to the dynamics, resulting in a higher probability that the optimal policy is complex. We generate dynamics with fixed and shared kinks, and generate the outputs at the kinks so as to make the functions fluctuate. The details are as follows.

• State space S = [0, 1).
• Action space A = {0, 1}.
• The number of pieces is fixed to 3. The positions of the kinks are x_i = i/3 for all 0 ≤ i ≤ 3, and the values are generated by x′_i ∼ 0.65 × I[i mod 2 = 0] + 0.35 × U(0, 1).
• The reward function is r(s, a) = s for all a ∈ A.
• The horizon is fixed at H = 10.
• The initial state distribution is U(0, 1).

Figure 1 visualizes one of the MDPs generated by the SEMI-RAND method." }, { "heading": "D.2 THE COMPLEXITY OF OPTIMAL POLICIES IN RANDOMLY GENERATED MDPS", "text": "We randomly generate 10^3 one-dimensional MDPs whose dynamics has a constant number of pieces, and plot the histogram of the number of pieces in the optimal policy π*. As shown in Figure 9, even for horizon H = 10, the optimal policy tends to have far more pieces than the dynamics.

D.3 IMPLEMENTATION DETAILS OF ALGORITHMS IN RANDOMLY GENERATED MDP

SEMI-RAND MDP. The MDP on which we run the experiment is generated by the SEMI-RAND method described in Section D.1. We list the dynamics of this MDP below:

r(s, a) = s, ∀s ∈ S, a ∈ A,

f(s, 0) = (0.131 − 0.690) · s/0.333 + 0.690 for 0 ≤ s < 0.333; (0.907 − 0.131) · (s − 0.333)/0.334 + 0.131 for 0.333 ≤ s < 0.667; (0.079 − 0.907) · (s − 0.667)/0.333 + 0.907 for 0.667 ≤ s;

f(s, 1) = (0.134 − 0.865) · s/0.333 + 0.865 for 0 ≤ s < 0.333; (0.750 − 0.134) · (s − 0.333)/0.334 + 0.134 for 0.333 ≤ s < 0.667; (0.053 − 0.750) · (s − 0.667)/0.333 + 0.750 for 0.667 ≤ s.

Implementation details of the DQN algorithm. We present the hyper-parameters of the DQN algorithm. Our implementation is based on the PyTorch tutorials.7

• The Q-network is a fully connected neural net with one hidden layer, whose width varies.
• The optimizer is SGD with learning rate 0.001 and momentum 0.9.
• The size of the replay buffer is 10^4.
• The target-net update frequency is 50.
• The batch size in policy optimization is 128.
• The behavior policy is the ε-greedy policy according to the current Q-network, where ε decays exponentially from 0.9 to 0.01. Specifically, ε = 0.01 + 0.89 exp(−t/200) at the t-th episode.

7https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html
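For reference, this exploration schedule corresponds to the following one-line function (our own restatement of the formula given in the bullet list above):

import math

def epsilon(t):
    # Exponentially decaying exploration rate: 0.9 at t = 0, approaching 0.01.
    return 0.01 + 0.89 * math.exp(-t / 200.0)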
Implementation details of the MBPO algorithm. For the model-learning step, we use the ℓ2 loss to train the model, and we use Soft Actor-Critic (SAC) (Haarnoja et al., 2018) in the policy optimization step. The parameters are set as follows:

• number of hidden neurons in the model net: 32,
• number of hidden neurons in the value net: 512,
• optimizer for model learning: Adam with learning rate 0.001,
• temperature: τ = 0.01,
• number of model rollouts: M = 5,
• length of each rollout: k = 5,
• number of policy optimization steps: G = 5.

Other hyper-parameters are kept the same as in the DQN algorithm.

Implementation details of the TRPO algorithm. For the model-learning step, we use the ℓ2 loss to train the model. Instead of TRPO (Schulman et al., 2015), we use PPO (Schulman et al., 2017) as the policy optimizer. The parameters are set as follows:

• number of hidden neurons in the model net: 32,
• number of hidden neurons in the policy net: 512,
• number of hidden neurons in the value net: 512,
• optimizer: Adam with learning rate 0.001,
• number of policy optimization steps: 5,
• the behavior policy is the ε-greedy policy according to the current policy network, where ε decays exponentially from 0.9 to 0.01; specifically, ε = 0.01 + 0.89 exp(−t/20000) at the t-th episode.

Implementation details of the model-based planning algorithm. The perfect model-based planning algorithm iterates between learning the dynamics from sampled trajectories and planning with the learned dynamics (with an exponential-time algorithm which enumerates all possible future sequences of actions). The parameters are set as follows:

• number of hidden neurons in the model net: 32,
• optimizer for model learning: Adam with learning rate 0.001.

Implementation details of bootstrapping. The training-time behavior of the algorithm is exactly that of the DQN algorithm, except that the number of hidden neurons in the Q-net is set to 64. The other parameters are set as follows:

• number of hidden neurons in the model net: 32,
• optimizer for model learning: Adam with learning rate 0.001,
• the planning horizon varies." }, { "heading": "E TECHNICAL LEMMAS", "text": "In this section, we present the technical lemmas used in this paper.

Lemma E.1. For A, B, C, D ≥ 0 with AC ≥ BD, we have

A + C + (1/2)(B + D) ≥ 2√(AC + BD).

Furthermore, when BD > 0, the inequality is strict.

Proof. Note that A + C + (1/2)(B + D) ≥ 2√(AC) + √(BD). And we have

(2√(AC) + √(BD))^2 − (2√(AC + BD))^2 = 4√(AC · BD) − 3BD ≥ BD ≥ 0.

And when BD > 0, the inequality is strict." } ]
2019
null
SP:a3f8fc0a93ecd80f88cf808e8cd228588010b1b0
[ "This paper proposes a very interesting idea of loss function optimization. At first sight, loss function is the goal of optimization and can not be optimized directly. However, the true goal of optimization is the final accuracy (for classification). So lots of loss functions can be designed and combined to form a large search space. In this paper, the authors adopt genetic programming to design loss functions hierarchically. And experiments show that GLO (Genetic Loss-function Optimization) based loss function can achieve better results than cross entropy. ", "The authors propose using evolutionary computation (EC) to perform meta learning over the set of symbolic expressions for loss functions. It's a compelling idea that is well-motivated. They find that applying their EC method to mnist yields an interesting loss function that they name the 'Baikal loss.' Much of the paper is devoted to analyzing the properties and performance of the Baikal loss. " ]
As the complexity of neural network models has grown, it has become increasingly important to optimize their design automatically through metalearning. Methods for discovering hyperparameters, topologies, and learning rate schedules have led to significant increases in performance. This paper shows that loss functions can be optimized with metalearning as well, and result in similar improvements. The method, Genetic Loss-function Optimization (GLO), discovers loss functions de novo, and optimizes them for a target task. Leveraging techniques from genetic programming, GLO builds loss functions hierarchically from a set of operators and leaf nodes. These functions are repeatedly recombined and mutated to find an optimal structure, and then a covariance-matrix adaptation evolutionary strategy (CMA-ES) is used to find optimal coefficients. Networks trained with GLO loss functions are found to outperform the standard cross-entropy loss on standard image classification tasks. Training with these new loss functions requires fewer steps, results in lower test error, and allows for smaller datasets to be used. Loss function optimization thus provides a new dimension of metalearning, and constitutes an important step towards AutoML.
[]
[ { "authors": [ "Jonathan T Barron" ], "title": "A general and adaptive robust loss function", "venue": "arXiv preprint arXiv:1701.03077,", "year": 2017 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Dandelion Mane", "Vijay Vasudevan", "Quoc V Le" ], "title": "Autoaugment: Learning augmentation policies from data", "venue": "arXiv preprint arXiv:1805.09501,", "year": 2018 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": "arXiv preprint arXiv:1808.05377,", "year": 2018 }, { "authors": [ "Santiago Gonzalez", "Joshua Landgraf", "Risto Miikkulainen" ], "title": "Faster training by selecting samples using embeddings", "venue": "In 2019 International Joint Conference on Neural Networks (IJCNN),", "year": 2019 }, { "authors": [ "Nikolaus Hansen", "Stefan Kern" ], "title": "Evaluating the CMA evolution strategy on multimodal test functions", "venue": "In International Conference on Parallel Problem Solving from Nature,", "year": 2004 }, { "authors": [ "Nikolaus Hansen", "Andreas Ostermeier" ], "title": "Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation", "venue": "In Proceedings of IEEE international conference on evolutionary computation,", "year": 1996 }, { "authors": [ "Nikolaus Hansen", "Andreas Ostermeier" ], "title": "Completely derandomized self-adaptation in evolution strategies", "venue": "Evolutionary computation,", "year": 2001 }, { "authors": [ "Geoffrey E Hinton", "Nitish Srivastava", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan R Salakhutdinov" ], "title": "Improving neural networks by preventing co-adaptation of feature detectors", "venue": "arXiv preprint arXiv:1207.0580,", "year": 2012 }, { "authors": [ "Rein Houthooft", "Yuhua Chen", "Phillip Isola", "Bradly Stadie", "Filip Wolski", "OpenAI Jonathan Ho", "Pieter Abbeel" ], "title": "Evolved policy gradients", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Katarzyna Janocha", "Wojciech Marian Czarnecki" ], "title": "On loss functions for deep neural networks in classification", "venue": "arXiv preprint arXiv:1702.05659,", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "ImageNet classification with deep convolutional neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "Joel Lehman" ], "title": "The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities", "venue": "arXiv preprint arXiv:1803.03453,", "year": 2018 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "CMA-ES for hyperparameter optimization of deep neural networks", "venue": "arXiv preprint arXiv:1604.07269,", "year": 2016 }, { "authors": [ "Risto Miikkulainen", "Jason Liang", "Elliot Meyerson", "Aditya Rawal", "Daniel Fink", "Olivier Francon", 
"Bala Raju", "Hormoz Shahrzad", "Arshak Navruzyan", "Nigel Duffy" ], "title": "Evolving deep neural networks", "venue": "In Artificial Intelligence in the Age of Neural Networks and Brain Computing,", "year": 2019 }, { "authors": [], "title": "Cyclical learning rates for training neural networks", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Much of the power of modern neural networks originates from their complexity, i.e., number of parameters, hyperparameters, and topology. This complexity is often beyond human ability to optimize, and automated methods are needed. An entire field of metalearning has emerged recently to address this issue, based on various methods such as gradient descent, simulated annealing, reinforcement learning, Bayesian optimization, and evolutionary computation (EC) (Elsken et al., 2018).\nWhile a wide repertoire of work now exists for optimizing many aspects of neural networks, the dynamics of training are still usually set manually without concrete, scientific methods. Training schedules, loss functions, and learning rates all affect the training and final functionality of a neural network. Perhaps they could also be optimized through metalearning?\nThe goal of this paper is to verify this hypothesis, focusing on optimization of loss functions. A general framework for loss function metalearning, covering both novel loss function discovery and optimization, is developed and evaluated experimentally. This framework, Genetic Loss-function Optimization (GLO), leverages Genetic Programming to build loss functions represented as trees, and subsequently a Covariance-Matrix Adaptation Evolution Strategy (CMA-ES) to optimize their coefficients.\nEC methods were chosen because EC is arguably the most versatile of the metalearning approaches. EC, being a type of population-based search method, allows for extensive exploration, which often results in creative, novel solutions (Lehman et al., 2018). EC has been successful in hyperparameter optimization and architecture design in particular (Miikkulainen et al., 2019; Stanley et al., 2019; Real et al., 2019; Loshchilov & Hutter, 2016). It has also been used to discover mathematical formulas to explain experimental data (Schmidt & Lipson, 2009). It is, therefore, likely to find creative solutions in the loss-function optimization domain as well.\nIndeed, on the MNIST image classification benchmark, GLO discovered a surprising new loss function, named Baikal for its shape. This function performs very well, presumably by establishing an implicit regularization effect. Baikal outperforms the standard cross-entropy loss in terms of training\nspeed, final accuracy, and data requirements. Furthermore, Baikal was found to transfer to a more complicated classification task, CIFAR-10, while carrying over its benefits.\nAt first glance, Baikal behaves rather unintuitively; loss does not decrease monotonically as a network’s predictions become more correct. Upon further analysis, Baikal was found to perform implicit regularization, which caused this effect. Specifically, by preventing the network from being too confident in its predictions, training was able to produce a more robust model. This finding was surprising and encouraging, since it means that GLO is able to discover loss functions that train networks that are more generalizable and overfit less.\nThe next section reviews related work in metalearning and EC, to help motivate the need for GLO. Following this review, GLO is described in detail, along with the domains upon which it has been evaluated. The subsequent sections present the experimental results, including an analysis of the loss functions that GLO discovers." 
}, { "heading": "2 RELATED WORK", "text": "In addition to hyperparameter optimization and neural architecture search, new opportunities for metalearning have recently emerged. In particular, learning rate scheduling and adaptation can have a significant impact on a model’s performance. Learning rate schedules determine how the learning rate changes as training progresses. This functionality tends to be encapsulated away in practice by different gradient-descent optimizers, such as AdaGrad (Duchi et al., 2011) and Adam (Kingma & Ba, 2014). While the general consensus has been that monotonically decreasing learning rates yield good results, new ideas, such as cyclical learning rates (Smith, 2017), have shown promise in learning better models in fewer epochs.\nMetalearning methods have also been recently developed for data augmentation, such as AutoAugment (Cubuk et al., 2018), a reinforcement learning based approach to find new data augmentation policies. In reinforcement learning tasks, EC has proven a successful approach. For instance, in evolving policy gradients (Houthooft et al., 2018), the policy loss is not represented symbolically, but rather as a neural network that convolves over a temporal sequence of context vectors. In reward function search (Niekum et al., 2010), the task is framed as a genetic programming problem, leveraging PushGP (Spector et al., 2001).\nIn terms of loss functions, a generalization of the L2 loss was proposed with an adaptive loss parameter (Barron, 2017). This loss function is shown to be effective in domains with multivariate output spaces, where robustness might vary across between dimensions. Specifically, the authors found improvements in Variational Autoencoder (VAE) models, unsupervised monocular depth estimation, geometric registration, and clustering.\nAdditionally, work has found promise in moving beyond the standard cross-entropy loss for classification (Janocha & Czarnecki, 2017). L1 and L2 losses were found to have useful probabilistic properties. The authors found certain loss functions to be more resilient to noise than the crossentropy loss.\nNotably, no existing work in the metalearning literature automatically optimizes loss functions for neural networks. As shown in this paper, evolutionary computation can be used in this role to improve neural network performance, gain a better understanding of the processes behind learning, and help reach the ultimate goal of fully automated learning." }, { "heading": "3 THE GLO APPROACH", "text": "The task of finding and optimizing loss functions can be framed as a functional regression problem. GLO accomplishes this through the following high-level steps (shown in Figure 1): (1) loss function discovery: using approaches from genetic programming, a genetic algorithm builds new candidate loss functions, and (2) coefficient optimization: to further optimize a specific loss function, a covariance-matrix adaptation evolutionary strategy (CMA-ES) is leveraged to optimize coefficients." }, { "heading": "3.1 LOSS FUNCTION DISCOVERY", "text": "GLO uses a population-based search approach, inspired by genetic programming, to discover new optimized loss function candidates. Under this framework, loss functions are represented as trees within a genetic algorithm. Trees are a logical choice to represent functions due to their hierarchical nature. 
The loss function search space is defined by the following tree nodes: Unary Operators: log(◦), (◦)², √◦; Binary Operators: +, ∗, −, ÷; Leaf Nodes: x, y, 1, −1, where x represents a true label, and y represents a predicted label. The search space is further refined by automatically assigning a fitness of 0 to trees that do not contain both at least one x and one y. Generally, a loss function’s fitness within the genetic algorithm is the validation performance of a network trained with that loss function. To expedite the discovery process, and encourage the invention of loss functions that make learning faster, training does not proceed to convergence. Unstable training sessions that result in NaN values are assigned a fitness of 0. Fitness values are cached to avoid needing to retrain the same network twice. These cached values are each associated with a canonicalized version of their corresponding tree, resulting in fewer required evaluations.\nThe initial population is composed of randomly generated trees with a maximum depth of 2. Recursively starting from the root, nodes are randomly chosen from the allowable operator and leaf nodes using a weighting (where log(◦), x, y are three times as likely and √◦ is two times as likely as +, ∗, −, ÷, 1, −1). This weighting can impart a bias and prevent, for example, the integer 1 from occurring too frequently. The genetic algorithm has a population size of 80, incorporates elitism with six elites per generation, and uses roulette sampling.\nRecombination is accomplished by randomly splicing two trees together. For a given pair of parent trees, a random element is chosen in each as a crossover point. The two subtrees, whose roots are the two crossover points, are then swapped with each other. Figure 1 presents an example of this method of recombination. Both resultant trees become part of the next generation. Recombination occurs with a probability of 80%.\nTo introduce variation into the population, the genetic algorithm has the following mutations, applied in a bottom-up fashion:\n• Integer scalar nodes are incremented or decremented with a 5% probability.\n• Nodes are replaced with a weighted-random node with the same number of children with a 5% probability.\n• Nodes (and their children) are deleted and replaced with a weighted-random leaf node with a 5% ∗ 50% = 2.5% probability.\n• Leaf nodes are deleted and replaced with a weighted-random element (and weighted-random leaf children if necessary) with a 5% ∗ 50% = 2.5% probability.\nCombined, the iterative sampling, recombination, and mutation of trees within the population leads to the discovery of new loss functions which maximize fitness." }, { "heading": "3.2 COEFFICIENT OPTIMIZATION", "text": "Loss functions found by the above genetic algorithm can all be thought of as having unit coefficients for each node in the tree. This set of coefficients can be represented as a vector with dimensionality equal to the number of nodes in a loss function’s tree. The number of coefficients can be reduced by pruning away coefficients that can be absorbed by others (e.g., 3(5x + 2y) = 15x + 6y). The coefficient vector is optimized independently and iteratively using a covariance-matrix adaptation evolutionary strategy (CMA-ES) (Hansen & Ostermeier, 1996). The specific variant of CMA-ES that GLO uses is (µ/µ, λ)-CMA-ES (Hansen & Ostermeier, 2001), which incorporates weighted rank-µ updates (Hansen & Kern, 2004) to reduce the number of objective function evaluations that are needed.
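The ask/tell loop below sketches this coefficient-optimization phase under our assumptions: it uses the third-party pycma package as a stand-in for the paper's own CMA-ES implementation (which is in Swift), and the helpers `apply_coefficients`, `short_train_validation_accuracy`, `loss_tree`, and `num_coefficients` are hypothetical names introduced for illustration.

```python
import cma  # pycma, a Python CMA-ES library; an assumption, not the authors' tooling

def objective(coeffs):
    # Attach the candidate coefficients to the loss tree, train briefly,
    # and return a value to minimize (negative validation accuracy).
    loss_fn = apply_coefficients(loss_tree, coeffs)        # hypothetical helper
    return -short_train_validation_accuracy(loss_fn)       # hypothetical helper

# Start from unit coefficients; sigma0 = 1.5 matches the step size reported below.
es = cma.CMAEvolutionStrategy([1.0] * num_coefficients, 1.5)
while not es.stop():
    candidates = es.ask()                                  # sample a generation
    es.tell(candidates, [objective(c) for c in candidates])
best_coefficients = es.result.xbest
```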
The implementation of GLO presented in this paper uses an initial step size σ = 1.5. As in the discovery phase, the objective function is the network’s performance on a validation dataset after a shortened training period." }, { "heading": "4 EXPERIMENTAL EVALUATION", "text": "This section provides an experimental evaluation of GLO on the MNIST and CIFAR-10 image classification tasks. Baikal, a GLO loss function found on MNIST, is presented and evaluated in terms of its resulting testing accuracy, training speed, training data requirements, and transferability to CIFAR-10. Implementation details are presented in the appendix in Section A.1." }, { "heading": "4.1 TARGET TASKS", "text": "Experiments on GLO are performed using two popular image classification datasets, MNIST Handwritten Digits (LeCun et al., 1998) and CIFAR-10 (Krizhevsky & Hinton, 2009). Both datasets, with MNIST in particular, are well understood, and relatively quick to train. The choice of these datasets allowed rapid iteration in the development of GLO and allowed time for more thorough experimentation. The selected model architectures are simple, since achieving state-of-the-art accuracy on MNIST and CIFAR-10 is not the focus of this paper; rather, the improvements brought about by using a GLO loss function are. More information on the datasets, along with their corresponding architectures and experimental setup, is provided in the appendix, under Section A.2.\nBoth of these tasks, being classification problems, are traditionally framed with the standard cross-entropy loss (sometimes referred to as the log loss): $\mathcal{L}_{\text{Log}} = -\frac{1}{n}\sum_{i=0}^{n-1} x_i \log(y_i)$, where x is sampled from the true distribution, y is from the predicted distribution, and n is the number of classes. The cross-entropy loss is used as a baseline in this paper’s experiments." }, { "heading": "4.2 THE BAIKAL LOSS FUNCTION", "text": "The most notable loss function that GLO discovered on the MNIST dataset (with 2,000-step training for candidate evaluation) is the Baikal loss, named due to its similarity to the bathymetry of Lake Baikal when its binary variant is plotted in 3D (Section 5.1):\n$$\mathcal{L}_{\text{Baikal}} = -\frac{1}{n}\sum_{i=0}^{n-1}\left(\log(y_i) - \frac{x_i}{y_i}\right), \qquad (1)$$\nwhere x is from the true distribution, y is from the predicted distribution, and n is the number of classes. Additionally, after coefficient optimization, GLO arrived at the following version of the Baikal loss:\n$$\mathcal{L}_{\text{BaikalCMA}} = -\frac{1}{n}\sum_{i=0}^{n-1} 2.7279\left(0.9863 \cdot \log(1.5352 \cdot y_i) - 1.8158\,\frac{x_i}{y_i}\right). \qquad (2)$$\nThis loss function, BaikalCMA, was selected for having the highest validation accuracy out of the population. The Baikal and BaikalCMA loss functions had validation accuracies at 2,000 steps equal to 0.9838 and 0.9902, respectively. For comparison, the cross-entropy loss had a validation accuracy at 2,000 steps of 0.9700.
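Equations (1) and (2) translate directly into code. The NumPy sketch below is ours (the paper's training code uses TensorFlow), and the `eps` clamp is our assumption for avoiding division by zero; the paper does not specify how numerical instabilities are handled beyond assigning unstable candidates a fitness of 0 during search.

```python
import numpy as np

def baikal_loss(x, y, eps=1e-7):
    """Eq. (1): x are one-hot true labels, y are softmax outputs,
    both of shape (batch, n_classes)."""
    y = np.clip(y, eps, 1.0)
    return -np.mean(np.log(y) - x / y)

def baikal_cma_loss(x, y, eps=1e-7):
    """Eq. (2): the coefficient-optimized variant of the Baikal loss."""
    y = np.clip(y, eps, 1.0)
    return -np.mean(2.7279 * (0.9863 * np.log(1.5352 * y) - 1.8158 * x / y))
```

With these definitions, `baikal_loss` could in principle be dropped in wherever the cross-entropy baseline is used.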
Models trained with the Baikal loss on MNIST and CIFAR-10 (to test transfer) are the primary vehicle for validating GLO’s efficacy, as detailed in subsequent sections." }, { "heading": "4.3 TESTING ACCURACY", "text": "[Figure 2: Mean testing accuracy on MNIST, n = 10. Both Baikal and BaikalCMA provide statistically significant improvements to testing accuracy over the cross-entropy loss.]\nFigure 2 shows the increase in testing accuracy that Baikal and BaikalCMA provide on MNIST over models trained with the cross-entropy loss. Over 10 trained models each, the mean testing accuracies for cross-entropy loss, Baikal, and BaikalCMA were 0.9899, 0.9933, and 0.9947, respectively.\nThis increase in accuracy from Baikal over cross-entropy loss is found to be statistically significant, with a p-value of $2.4 \times 10^{-11}$, in a heteroscedastic, two-tailed T-test, with 10 samples from each distribution. With the same significance test, the increase in accuracy from BaikalCMA over Baikal was found to be statistically significant, with a p-value of $8.5045 \times 10^{-6}$." }, { "heading": "4.4 TRAINING SPEED", "text": "Training curves for networks trained with the cross-entropy loss, Baikal, and BaikalCMA are shown in Figure 3. Each curve represents 80 testing dataset evaluations spread evenly (i.e., every 250 steps) throughout 20,000 steps of training on MNIST. Networks trained with Baikal and BaikalCMA both learn significantly faster than the cross-entropy loss. These phenomena make Baikal a compelling loss function for fixed time-budget training, where the improvement in resultant accuracy over the cross-entropy loss becomes most evident." }, { "heading": "4.5 TRAINING DATA REQUIREMENTS", "text": "Figure 4 provides an overview of the effects of dataset size on networks trained with cross-entropy loss, Baikal, and BaikalCMA. For each training dataset portion size, five individual networks were trained for each loss function.\nThe degree by which Baikal and BaikalCMA outperform cross-entropy loss increases as the training dataset becomes smaller. This provides evidence of less overfitting when training a network with Baikal or BaikalCMA. As expected, BaikalCMA outperforms Baikal at all tested dataset sizes. The size of this improvement in accuracy does not grow as significantly as the improvement over cross-entropy loss, leading to the belief that the overfitting characteristics of Baikal and BaikalCMA are very similar. Ostensibly, one could run the optimization phase of GLO on a reduced dataset specifically to yield a loss function with better performance than BaikalCMA on small datasets." }, { "heading": "4.6 LOSS FUNCTION TRANSFER TO CIFAR-10", "text": "Figure 5 presents a collection of 18 separate tests of the cross-entropy loss and Baikal applied to CIFAR-10. Baikal is found to outperform cross-entropy across all training durations, with the difference becoming more prominent for shorter training periods.
These results present an interesting use case for GLO, where a loss function that is found on a simpler dataset can be transferred to a more complex dataset while still maintaining performance improvements. This faster training provides a particularly persuasive argument for using GLO loss functions in fixed time-budget scenarios." }, { "heading": "5 WHAT MAKES BAIKAL WORK?", "text": "This section presents a symbolic analysis of the Baikal loss function, followed by experiments that attempt to elucidate why Baikal works better than the cross-entropy loss. A likely explanation is that Baikal results in implicit regularization, reducing overfitting." }, { "heading": "5.1 BINARY CLASSIFICATION", "text": "Loss functions used on the MNIST dataset, a 10-dimensional classification problem, are difficult to plot and visualize graphically. To simplify, loss functions are analyzed in the context of binary classification; with n = 2, the Baikal loss expands to\n$$\mathcal{L}_{\text{Baikal2D}} = -\frac{1}{2}\left(\log(y_0) - \frac{x_0}{y_0} + \log(y_1) - \frac{x_1}{y_1}\right). \qquad (3)$$\nSince vectors x and y sum to 1, by consequence of being passed through a softmax function, for binary classification $x = \langle x_0, 1 - x_0 \rangle$ and $y = \langle y_0, 1 - y_0 \rangle$. This constraint simplifies the binary Baikal loss to the following function of two variables ($x_0$ and $y_0$):\n$$\mathcal{L}_{\text{Baikal2D}} \propto -\log(y_0) + \frac{x_0}{y_0} - \log(1 - y_0) + \frac{1 - x_0}{1 - y_0}. \qquad (4)$$\nThis same methodology can be applied to the cross-entropy loss and BaikalCMA.\nIn practice, true labels are assumed to be correct with certainty; thus, $x_0$ is equal to either 0 or 1. The specific case where $x_0 = 1$ is plotted in Figure 6 for the cross-entropy loss, Baikal, and BaikalCMA. The cross-entropy loss is shown to be monotonically decreasing, while Baikal and BaikalCMA counterintuitively show an increase in the loss value as the predicted label, $y_0$, approaches the true label $x_0$. This unexpected increase allows the loss functions to prevent the model from becoming too confident in its output predictions, thus providing a form of regularization. Section 5.2 provides reasoning for this counterintuitive result.\nAs also seen in Figure 6, the minimum for the Baikal loss where $x_0 = 1$ lies near 0.71, while the minimum for the BaikalCMA loss where $x_0 = 1$ lies near 0.77. This minimum, along with the more pronounced slope around $x_0 = 0.5$, is likely a reason why BaikalCMA performs better than Baikal." }, { "heading": "5.2 IMPLICIT REGULARIZATION", "text": "The Baikal and BaikalCMA loss functions are surprising in that they incur a high loss when the output is very close to the correct value (as illustrated in Figure 6). Although at first glance this behavior is counterintuitive, it may provide an important advantage. The outputs of a trained network will not be exactly correct, although they are close, and therefore the network is less likely to overfit. Thus, these loss functions provide an implicit form of regularization, enabling better generalization.\nThis effect is similar to that of the confidence regularizer (Pereyra et al., 2017), which penalizes low-entropy prediction distributions. The bimodal distribution of output probabilities that results from confidence regularization is nearly identical to that of a network trained with BaikalCMA. Histograms of these distributions on the test dataset for cross-entropy and BaikalCMA networks, after 15,000 steps of training on MNIST, are shown in Figure 7. The abscissae in Figures 6 and 7 match, making it clear how the distribution for BaikalCMA has shifted away from the extreme values.
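As a quick numerical check of the minima reported in Section 5.1, the closed form in Eq. (4) can be minimized directly; this sketch is ours and uses SciPy.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def binary_baikal(y0, x0=1.0):
    """Binary Baikal loss of Eq. (4)."""
    return -np.log(y0) + x0 / y0 - np.log(1 - y0) + (1 - x0) / (1 - y0)

res = minimize_scalar(binary_baikal, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(res.x)  # ~0.707 (= 1/sqrt(2)), consistent with the minimum "near 0.71"
```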
The improved behavior under small-dataset conditions described in Section 4.5 further supports implicit regularization; less overfitting was observed when using Baikal and BaikalCMA compared to the cross-entropy loss.\nNotably, the implicit regularization provided by Baikal and BaikalCMA complements the different types of regularization already present in the trained networks. As detailed in Section A.2, MNIST networks are trained with dropout (Hinton et al., 2012), and CIFAR-10 networks are trained with L2 weight decay and local response normalization (Krizhevsky et al., 2012), yet Baikal is able to improve performance further." }, { "heading": "6 DISCUSSION AND FUTURE WORK", "text": "This paper proposes loss function discovery and optimization as a new form of metalearning, and introduces an evolutionary computation approach to it. GLO was evaluated experimentally in the image classification domain, and discovered a surprising new loss function, Baikal. Experiments showed substantial improvements in accuracy, convergence speed, and data requirements. Further analysis suggested that these improvements result from implicit regularization that reduces overfitting to the data. This regularization complements the existing regularization in trained networks.\nIn the future, GLO can be applied to other machine learning datasets and tasks. The approach is general, and could result in discovery of customized loss functions for different domains, or even specific datasets. One particularly interesting domain is generative adversarial networks (GANs). Significant manual tuning is necessary in GANs to ensure that the generator and discriminator networks learn harmoniously. GLO could find co-optimal loss functions for the generator and discriminator networks in tandem, thus making GANs more powerful, robust, and easier to implement.\nGAN optimization is an example of co-evolution, where multiple interacting solutions are developed simultaneously. GLO could leverage co-evolution more generally: for instance, it could be combined with techniques like CoDeepNEAT (Miikkulainen et al., 2019) to learn jointly-optimal network structures, hyperparameters, learning rate schedules, data augmentation, and loss functions simultaneously. Such an approach requires significant computing power, but may also discover and utilize interactions between the design elements that result in higher complexity and better performance than is currently possible." }, { "heading": "7 CONCLUSION", "text": "This paper proposes Genetic Loss-function Optimization (GLO) as a general framework for discovering and optimizing loss functions for a given task. A surprising new loss function, Baikal, was discovered in the experiments, and shown to outperform the cross-entropy loss on MNIST and CIFAR-10 in terms of accuracy, training speed, and data requirements. Further analysis suggested that Baikal’s improvements result from implicit regularization that reduces overfitting to the data. GLO can be combined with other aspects of metalearning in the future, paving the way to robust and powerful AutoML." }, { "heading": "A APPENDIX", "text": "A.1 IMPLEMENTATION DETAILS\nDue to the large number of partial training sessions that are needed for both the discovery and optimization phases, training is distributed across the network to a cluster of dedicated machines that use Condor (Thain et al., 2005) for scheduling. 
Each machine in this cluster has one NVIDIA GeForce GTX Titan Black GPU and two Intel Xeon E5-2603 (4 core) CPUs running at 1.80GHz with 8GB of memory. Training itself is implemented with TensorFlow (Abadi et al., 2016) in Python. The primary components of GLO (i.e., the genetic algorithm and CMA-ES) are implemented in Swift. These components run centrally on one machine and asynchronously dispatch work to the Condor cluster over SSH.\nA.2 EXPERIMENTAL SETUP\nThe following two sections detail the experimental setup that was used for the evaluation presented in this paper.\nA.2.1 MNIST\nThe first target task used for evaluation was the MNIST Handwritten Digits dataset (LeCun et al., 1998), a widely used dataset where the goal is to classify 28×28 pixel images as one of ten digits. The MNIST dataset has 55,000 training samples, 5,000 validation samples, and 10,000 testing samples.\nA simple CNN architecture with the following layers is used: (1) 5×5 convolution with 32 filters, (2) 2×2 stride-2 max-pooling, (3) 5×5 convolution with 64 filters, (4) 2×2 stride-2 max-pooling, (5) 1024-unit fully-connected layer, (6) a dropout layer (Hinton et al., 2012) with 40% dropout probability, and (7) a softmax layer. ReLU (Nair & Hinton, 2010) activations are used. Training uses stochastic gradient descent (SGD) with a batch size of 100, a learning rate of 0.01, and, unless otherwise specified, occurred over 20,000 steps.\nSeveral experiments tested various learning rate values across a handful of orders of magnitude to arrive at the step size used in the paper. For the baseline, this step size provided the highest accuracy.\nA.2.2 CIFAR-10\nTo further validate GLO, the more challenging CIFAR-10 dataset (Krizhevsky & Hinton, 2009), a popular dataset of small, color photographs in ten classes, was used as a medium to test the transferability of loss functions found on a different domain. CIFAR-10 consists of 50,000 training samples, and 10,000 testing samples.\nA simple CNN architecture, taken from Gonzalez et al. (2019) (and itself inspired by AlexNet (Krizhevsky et al., 2012)), with the following layers is used: (1) 5×5 convolution with 64 filters and ReLU activations, (2) 3×3 max-pooling with a stride of 2, (3) local response normalization (Krizhevsky et al., 2012) with k = 1, α = 0.001/9, β = 0.75, (4) 5×5 convolution with 64 filters and ReLU activations, (5) local response normalization with k = 1, α = 0.001/9, β = 0.75, (6) 3×3 max-pooling with a stride of 2, (7) 384-unit fully-connected layer with ReLU activations, (8) 192-unit fully-connected, linear layer, and (9) a softmax layer.\nInputs to the network are sized 24×24×3, rather than 32×32×3 as provided in the dataset; these smaller-sized inputs enable more sophisticated data augmentation. To force the network to learn better spatial invariance, random 24×24 crops are selected from each full-size image, randomly flipped longitudinally, randomly lightened or darkened, and randomly perturbed in contrast. Furthermore, to attain quicker convergence, each image is standardized during training and evaluation by subtracting its mean pixel value and dividing by its variance.
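This input pipeline can be sketched with standard TensorFlow image ops; the perturbation magnitudes below are our assumptions, since the paper does not state them.

```python
import tensorflow as tf

def augment(image):
    """Training-time augmentation for a 32x32x3 CIFAR-10 image (a sketch)."""
    image = tf.image.random_crop(image, [24, 24, 3])           # random 24x24 crop
    image = tf.image.random_flip_left_right(image)             # longitudinal flip
    image = tf.image.random_brightness(image, max_delta=63.0)  # lighten/darken
    image = tf.image.random_contrast(image, lower=0.2, upper=1.8)
    # Subtract the per-image mean and scale by the per-image dispersion.
    return tf.image.per_image_standardization(image)
```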
CIFAR-10 networks were trained with SGD, L2 regularization with a weight decay of 0.004, a batch size of 1024, and an initial learning rate of 0.05 that decays by a factor of 0.1 every 350 epochs.\nSeveral experiments tested various initial learning rate values across a handful of orders of magnitude to arrive at the step size used in the paper. For the baseline, this initial learning rate provided the highest accuracy.\nA.3 BINARY CLASSIFICATION SURFACE PLOTS\nWhen plotted in three dimensions, as in Figure 8, the binary cross-entropy and Baikal loss functions can be observed to have characteristic surfaces. The shape of Baikal’s surface, and its similarity to the bathymetry of Lake Baikal, is where it gets its name. Note that the case plotted in Figure 6 is equivalent to the front “slice” of the surface plots in Figure 8." } ]
2019
null
SP:ff7ab4e497018b2fa801bd05e7e14d59265babed
[ "This paper investigates the problem of predicting the truth of quantified boolean formulae using deep reinforcement learning. In this setting, the problem is formulated as a reinforcement learning task, in which the learner is interacting with a solver (CADET), and its goal is to find a sequence of actions (each associated with a choice of a variable and a value) in order to reach a terminal state as fast as possible. The neural architecture includes a GNN encoder for the input formula, a policy neural net for iteratively assessing the quality of literals, and a final softmax layer for choosing the final literal. Experiments, performed on various 2QBF instances, address several questions such as the ability to compete with existing heuristics (VSIDS) in CADET and to generalize predictions on long episodes or different formulae.", "This will be an uncharacteristically short review. The work poses an interesting idea: why not mix heuristics and learning. It reads as if the paper was written a while ago and the intro was not updated, since there is a lot of related work using the same concept. Please cite existing work in the introduction, it reflects negatively on the paper." ]
We demonstrate how to learn efficient heuristics for automated reasoning algorithms for quantified Boolean formulas through deep reinforcement learning. We focus on a backtracking search algorithm, which can already solve formulas of impressive size up to hundreds of thousands of variables. The main challenge is to find a representation of these formulas that lends itself to making predictions in a scalable way. For a family of challenging problems in 2QBF we learn a heuristic that solves significantly more formulas compared to the existing handwritten heuristics.
[ { "affiliations": [], "name": "REINFORCEMENT LEARNING" }, { "affiliations": [], "name": "Gil Lederman" }, { "affiliations": [], "name": "Markus N. Rabe" }, { "affiliations": [], "name": "Sanjit A. Seshia" } ]
[ { "authors": [ "Miltiadis Allamanis", "Pankajan Chanthirasegaran", "Pushmeet Kohli", "Charles Sutton" ], "title": "Learning continuous semantic representations of symbolic expressions", "venue": "arXiv preprint arXiv:1611.01423,", "year": 2016 }, { "authors": [ "Rajeev Alur", "Rastislav Bodik", "Garvit Juniwal", "Milo M.K. Martin", "Mukund Raghothaman", "Sanjit A. Seshia", "Rishabh Singh", "Armando Solar-Lezama", "Emina Torlak", "Abhishek Udupa" ], "title": "Syntaxguided synthesis", "venue": "In Proceedings of the IEEE International Conference on Formal Methods in Computer-Aided Design (FMCAD),", "year": 2013 }, { "authors": [ "Saeed Amizadeh", "Sergiy Matusevych", "Markus Weimer" ], "title": "Learning to solve circuit-SAT: An unsupervised differentiable approach", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Gilles Audemard", "Laurent Simon" ], "title": "Glucose in the SAT 2014 competition", "venue": "SAT COMPETITION", "year": 2014 }, { "authors": [ "Mislav Balunovic", "Pavol Bielik", "Martin Vechev" ], "title": "Learning to solve SMT formulas", "venue": "In NeurIPS. Curran Associates, Inc.,", "year": 2018 }, { "authors": [ "Kshitij Bansal", "Sarah M. Loos", "Markus N. Rabe", "Christian Szegedy", "Stewart Wilcox" ], "title": "HOList: An environment for machine learning of higher-order theorem proving", "venue": "URL http://arxiv.org/abs/1904.03241", "year": 1904 }, { "authors": [ "Armin Biere" ], "title": "Resolve and expand", "venue": "In International Conference on Theory and Applications of Satisfiability Testing (SAT),", "year": 2004 }, { "authors": [ "Armin Biere" ], "title": "Lingeling, plingeling, picosat and precosat at sat race", "venue": "FMV Report Series Technical Report,", "year": 2010 }, { "authors": [ "Armin Biere", "Alessandro Cimatti", "Edmund M Clarke", "Ofer Strichman", "Yunshan Zhu" ], "title": "Bounded model checking", "venue": "Advances in computers,", "year": 2003 }, { "authors": [ "Samuel R Bowman", "Christopher Potts", "Christopher D Manning" ], "title": "Recursive neural networks can learn logical semantics", "venue": "arXiv preprint arXiv:1406.1827,", "year": 2014 }, { "authors": [ "Xinyun Chen", "Yuandong Tian" ], "title": "Learning to progressively plan", "venue": "CoRR, abs/1810.00337,", "year": 2018 }, { "authors": [ "Ziliang Chen", "Zhanfu Yang" ], "title": "Graph neural reasoning may fail in certifying boolean unsatisfiability, 2019", "venue": null, "year": 2019 }, { "authors": [ "Junyoung Chung", "Çaglar Gülçehre", "KyungHyun Cho", "Yoshua Bengio" ], "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "venue": "CoRR, abs/1412.3555,", "year": 2014 }, { "authors": [ "Karel Chvalovsky" ], "title": "Top-down neural model for formulae", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Byron Cook", "Daniel Kroening", "Philipp Rümmer", "Christoph M Wintersteiger" ], "title": "Ranking function synthesis for bit-vector relations", "venue": "Formal methods in system design,", "year": 2013 }, { "authors": [ "Martin Davis", "Hilary Putnam" ], "title": "A computing procedure for quantification theory", "venue": "Journal of the ACM (JACM),", "year": 1960 }, { "authors": [ "Martin Davis", "George Logemann", "Donald Loveland" ], "title": "A machine program for theorem-proving", "venue": "Communications of the ACM,", "year": 1962 }, { "authors": [ "Niklas Eén", "Niklas Sörensson" ], "title": "An extensible SAT-solver", "venue": "In International Conference on Theory and Applications of Satisfiability Testing (SAT),", 
"year": 2003 }, { "authors": [ "Richard Evans", "David Saxton", "David Amos", "Pushmeet Kohli", "Edward Grefenstette" ], "title": "Can neural networks understand logical entailment", "venue": "arXiv preprint arXiv:1802.08535,", "year": 2018 }, { "authors": [ "Peter Faymonville", "Bernd Finkbeiner", "Markus N Rabe", "Leander Tentrup" ], "title": "Encodings of bounded synthesis", "venue": "In International Conference on Tools and Algorithms for the Construction and Analysis of Systems,", "year": 2017 }, { "authors": [ "Maxime Gasse", "Didier Chételat", "Nicola Ferroni", "Laurent Charlin", "Andrea Lodi" ], "title": "Exact combinatorial optimization with graph convolutional neural networks. 2019", "venue": null, "year": 2019 }, { "authors": [ "Thibault Gauthier", "Cezary Kaliszyk", "Josef Urban" ], "title": "TacticToe: Learning to reason with HOL4 tactics", "venue": "EPiC Series in Computing,", "year": 2017 }, { "authors": [ "Enrico Giunchiglia", "Massimo Narizzano", "Armando Tacchella" ], "title": "QUBE: A system for deciding quantified boolean formulas satisfiability", "venue": "In International Joint Conference on Automated Reasoning,", "year": 2001 }, { "authors": [ "Eugene Goldberg", "Yakov Novikov" ], "title": "Berkmin: A fast and robust sat-solver", "venue": "Discrete Applied Mathematics,", "year": 2007 }, { "authors": [ "Daniel Huang", "Prafulla Dhariwal", "Dawn Song", "Ilya Sutskever" ], "title": "Gamepad: A learning environment for theorem proving", "venue": "arXiv preprint arXiv:1806.00608,", "year": 2018 }, { "authors": [ "Jiayi Huang", "Mostofa Patwary", "Gregory Diamos" ], "title": "Coloring big graphs with AlphaGoZero", "venue": "arXiv preprint arXiv:1902.10162,", "year": 2019 }, { "authors": [ "Geoffrey Irving", "Christian Szegedy", "Alexander A Alemi", "Niklas Een", "Francois Chollet", "Josef Urban" ], "title": "Deepmath-deep sequence models for premise selection", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Mikoláš Janota" ], "title": "Towards generalization in QBF solving via machine learning", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Mikoláš Janota" ], "title": "Circuit-based search space pruning in QBF", "venue": "In International Conference on Theory and Applications of Satisfiability Testing (SAT). 
Springer,", "year": 2018 }, { "authors": [ "Mikoláš Janota", "Joao Marques-Silva" ], "title": "Abstraction-based algorithm for 2QBF", "venue": "In International Conference on Theory and Applications of Satisfiability Testing (SAT),", "year": 2011 }, { "authors": [ "Mikolás Janota", "Joao Marques-Silva" ], "title": "Solving QBF by clause selection", "venue": "In IJCAI, pp", "year": 2015 }, { "authors": [ "Mikoláš Janota", "William Klieber", "Joao Marques-Silva", "Edmund Clarke" ], "title": "Solving QBF with counterexample guided refinement", "venue": "In International Conference on Theory and Applications of Satisfiability Testing (SAT),", "year": 2012 }, { "authors": [ "Robert G Jeroslow", "Jinchang Wang" ], "title": "Solving propositional satisfiability problems", "venue": "Annals of mathematics and Artificial Intelligence,", "year": 1990 }, { "authors": [ "Charles Jordan", "Łukasz Kaiser" ], "title": "Experiments with reduction finding", "venue": "In International Conference on Theory and Applications of Satisfiability Testing (SAT),", "year": 2013 }, { "authors": [ "Cezary Kaliszyk", "Josef Urban", "Henryk Michalewski", "Mirek Olšák" ], "title": "Reinforcement learning of theorem proving", "venue": "arXiv preprint arXiv:1805.07563,", "year": 2018 }, { "authors": [ "Vitaly Kurin", "Saad Godil", "Shimon Whiteson", "Bryan Catanzaro" ], "title": "Improving SAT solver heuristics with graph networks and reinforcement", "venue": "learning. ArXiv,", "year": 2019 }, { "authors": [ "Mitsuru Kusumoto", "Keisuke Yahata", "Masahiro Sakai" ], "title": "Automated theorem proving in intuitionistic propositional logic by deep reinforcement learning", "venue": "arXiv preprint arXiv:1811.00796,", "year": 2018 }, { "authors": [ "Jia Hui Liang", "Vijay Ganesh", "Pascal Poupart", "Krzysztof Czarnecki" ], "title": "Learning rate based branching heuristic for sat solvers", "venue": "In International Conference on Theory and Applications of Satisfiability Testing (SAT),", "year": 2016 }, { "authors": [ "Florian Lonsing", "Armin Biere" ], "title": "DepQBF: A dependency-aware", "venue": "QBF solver. JSAT,", "year": 2010 }, { "authors": [ "Sarah Loos", "Geoffrey Irving", "Christian Szegedy", "Cezary Kaliszyk" ], "title": "Deep network guided proof search", "venue": "arXiv preprint arXiv:1701.06972,", "year": 2017 }, { "authors": [ "João P Marques-Silva", "Karem A Sakallah" ], "title": "GRASP - A new search algorithm for satisfiability", "venue": "In Computer Aided Design,", "year": 1997 }, { "authors": [ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Efficient estimation of word representations in vector space", "venue": "arXiv preprint arXiv:1301.3781,", "year": 2013 }, { "authors": [ "Matthew W Moskewicz", "Conor F Madigan", "Ying Zhao", "Lintao Zhang", "Sharad Malik" ], "title": "Chaff: Engineering an efficient SAT solver", "venue": "In Proceedings of the 38th annual Design Automation Conference,", "year": 2001 }, { "authors": [ "Matthew W. Moskewicz", "Conor F. Madigan", "Ying Zhao", "Lintao Zhang", "Sharad Malik" ], "title": "Chaff: Engineering an efficient SAT solver", "venue": "In Proceedings DAC, pp. 530–535", "year": 2001 }, { "authors": [ "Aditya Paliwal", "Sarah M. Loos", "Markus N. 
Rabe", "Kshitij Bansal", "Christian Szegedy" ], "title": "Graph representations for higher-order logic and theorem proving", "venue": "CoRR, abs/1905.10006,", "year": 2019 }, { "authors": [ "Florian Pigorsch", "Christoph Scholl" ], "title": "An AIG-based QBF-solver using SAT for preprocessing", "venue": "In Proceedings of the 47th Design Automation Conference,", "year": 2010 }, { "authors": [ "Luca Pulina" ], "title": "The ninth QBF solvers evaluation-preliminary report", "venue": "In QBF@ SAT,", "year": 2016 }, { "authors": [ "Markus N Rabe", "Sanjit A Seshia" ], "title": "Incremental determinization", "venue": "In International Conference on Theory and Applications of Satisfiability Testing (SAT),", "year": 2016 }, { "authors": [ "Markus N Rabe", "Leander Tentrup" ], "title": "CAQE: A certifying QBF solver", "venue": "In Formal Methods in Computer-Aided Design (FMCAD),", "year": 2015 }, { "authors": [ "Markus N Rabe", "Leander Tentrup", "Cameron Rasmussen", "Sanjit A Seshia" ], "title": "Understanding and extending incremental determinization for 2QBF", "venue": "In International Conference on Computer Aided Verification (accepted),", "year": 2018 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "Trans. Neur. Netw.,", "year": 2009 }, { "authors": [ "Daniel Selsam", "Nikolaj Bjørner" ], "title": "Neurocore: Guiding high-performance SAT solvers with unsat-core predictions", "venue": "CoRR, abs/1903.04671,", "year": 2019 }, { "authors": [ "Daniel Selsam", "Matthew Lamm", "Benedikt Bunz", "Percy Liang", "Leonardo de Moura", "David L Dill" ], "title": "Learning a SAT solver from single-bit supervision", "venue": "arXiv preprint arXiv:1802.03685,", "year": 2018 }, { "authors": [ "Sanjit A. Seshia" ], "title": "Combining induction, deduction, and structure for verification and synthesis", "venue": "Proceedings of the IEEE,", "year": 2015 }, { "authors": [ "Armando Solar-Lezama", "Liviu Tancau", "Rastislav Bodik", "Sanjit Seshia", "Vijay Saraswat" ], "title": "Combinatorial sketching for finite programs", "venue": "ACM Sigplan Notices,", "year": 2006 }, { "authors": [ "Mate Soos", "Raghav Kulkarni", "Kuldeep S. Meel" ], "title": "Crystalball: Gazing in the black box of sat solving", "venue": "In Proceedings of the International Conference on Theory and Applications of Satisfiability Testing (SAT),", "year": 2019 }, { "authors": [ "Leander Tentrup" ], "title": "Non-prenex QBF solving using abstraction", "venue": "In International Conference on Theory and Applications of Satisfiability Testing (SAT),", "year": 2016 }, { "authors": [ "Leander Tentrup" ], "title": "On expansion and resolution in CEGAR based QBF solving", "venue": "In International Conference on Computer Aided Verification,", "year": 2017 }, { "authors": [ "Grigori S Tseitin" ], "title": "On the complexity of derivation in propositional calculus", "venue": "Studies in constructive mathematics and mathematical logic,", "year": 1968 }, { "authors": [ "Mingzhe Wang", "Yihe Tang", "J.J. 
Wang", "Jia Deng" ], "title": "Premise selection for theorem proving by deep graph embedding", "venue": null, "year": 2017 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Lin Xu", "Frank Hutter", "Holger H Hoos", "Kevin Leyton-Brown" ], "title": "Satzilla: portfolio-based algorithm selection for sat", "venue": "Journal of artificial intelligence research,", "year": 2008 }, { "authors": [ "Kaiyu Yang", "Jia Deng" ], "title": "Learning to prove theorems via interacting with proof assistants", "venue": "arXiv preprint arXiv:1905.09381,", "year": 2019 }, { "authors": [ "Zhanfu Yang", "Fei Wang", "Ziliang Chen", "Guannan Wei", "Tiark Rompf" ], "title": "Graph neural reasoning for 2-quantified boolean formula solvers", "venue": null, "year": 1904 }, { "authors": [ "Emre Yolcu", "Barnabás Póczos" ], "title": "Learning local search heuristics for boolean satisfiability", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "One of the most intriguing questions for artificial intelligence is: can (deep) learning be effectively used for symbolic reasoning? The benefits of combining deductive reasoning with inductive learning for automated reasoning and in formal methods for system design have been noted (e.g., see Seshia (2015)). There is a whole spectrum of approaches to combine them: One extreme is to use learning for predicting which of a small pool of algorithms (or heuristics) performs best, and run only that one to solve the given problem (e.g. SATzilla (Xu et al., 2008)). This approach is clearly limited by the availability of handwritten algorithms and heuristics (i.e. it can only solve problems for which we have written at least one algorithm that can solve it). The other extreme is to analyze formulas solely with deep learning approaches (Allamanis et al., 2016; Evans et al., 2018; Selsam et al., 2018; Amizadeh et al., 2019). However, this approach shows poor scalability compared to the state-of-the-art in the respective domains depsite the recent breakthroughs in deep learning. Instead of relying entirely on deep learning or on the availability of good handwritten algorithms, we explore the middle ground. We ask the question how to tightly combine deep learning with formal reasoning algorithms with the goal to improve the state-of-the-art, i.e. to solve formulas that could not be solved previously.\nExisting formal reasoning tools work in a mechanical way: they only apply a small number of carefully crafted operations and use heuristics to resolve the degrees of freedom in how to apply them. We address the problem of automatically learning better heuristics for a given set of formulas. We focus on the branching heuristic in modern backtracking search algorithms, as they are known to have a high impact on the performance of the algorithm. We cast the problem to learn better branching heuristics for backtracking search algorithms as a reinforcement learning problem: Initially, the reinforcement learning environment randomly picks a formula from a given set of formulas, and then runs the backtracking search algorithm on that formula. The actions that are controlled by the learning agent are the branching decisions, i.e. pick a variable and assign it a value - everything else is handled by the solver.\nChallenges This reinforcement learning problem comes with several unique challenges:\nRepresentation: While learning algorithms for images and board-games usually rely on the grid-like structure of the input and employ neural networks that match that structure (e.g. convolutional neural networks). For formulas, however, there is no standard representation for learning algorithms.\nIt may seem reasonable to treat Boolean formulas as text and learn embeddings for formulas through techniques such as word2vec (Mikolov et al., 2013), LSTMs, or tree RNNs. However, formulas in formal reasoning tools typically consist of thousands of variables, which is much larger than the text-fragments typically analyzed with neural networks. Further, unlike words in natural language, individual variables in Boolean formulas are completely devoid of meaning. The meaning of variable x in one formula is basically independent from variable x in a second formula. Hence, learning embeddings for variables and sharing them between formulas would be futile.\nUnbounded action space: An action consists of a choice of variable and value. 
While values will be Boolean, the number of variables depends on the size of the input formula. Therefore, we have an unbounded number of actions, which furthermore differ for every formula.\nLength of episodes: As we are dealing with a highly complex search problem, solver runs (= learning episodes) can be very long—in fact, for many of the formulas we have never observed a terminating run—and we observed a huge variance in the length of runs.\nPerformance: Our aim is to solve more formulas in less time. The use of neural networks incurs a huge runtime overhead for each decision (the solver takes ≥10x fewer decisions per second). So the decisions taken by the neural networks need to be dramatically better than the handcoded heuristic decisions to outweigh their runtime cost.\nCorrectness: Reinforcement learning algorithms have been shown to often find and exploit subtle implementation errors in the environment, instead of solving the intended problem. While testing and manual inspection of the results is a feasible approach for board games and Atari games, it is neither possible nor sufficient in large-scale formal reasoning: a solver run is simply too large to inspect manually, and even a tiny mistake can invalidate the result. In order to ensure correctness, we need an environment with the ability to produce formal proofs, which can then be checked by an independent tool.\nQuantified Boolean Formulas In this paper we focus on 2QBF, that is quantified Boolean formulas of the form ∀X.∃Y.ϕ, where X and Y are sets of Boolean variables and ϕ is in conjunctive normal form. 2QBFs are complex enough to serve as an interesting proxy for complex mathematical reasoning tasks. Challenging applications such as program synthesis and the synthesis of controllers and ranking functions have been encoded into 2QBFs (Solar-Lezama et al., 2006; Faymonville et al., 2017; Cook et al., 2013). However, the problem definition and also the syntactical structure of 2QBF are simple compared to more general settings, such as first-order or even higher-order logics. This makes algorithms for 2QBF a good target for the study of neural architectures.\nWhile our approach in principle works with most algorithms for QBF, we decided to demonstrate its use in Incremental Determinization (Rabe & Seshia, 2016). We modified CADET, an open-source implementation of Incremental Determinization that performed competitively in recent QBF competitions (Pulina, 2016), and turned it into a reinforcement learning environment. The advantage of CADET in the context of reinforcement learning is its ability to produce proofs (which most other solvers do not), which ensures that the reinforcement learning cannot simply learn to exploit bugs in the environment.\nGraph Neural Networks We consider each constraint and each variable of a given formula as a node in a graph. Whenever a variable occurs in a constraint, we draw an edge between their nodes. We then use a Graph Neural Network (GNN) (Scarselli et al., 2009) to predict the quality of each variable as a decision variable, and pick our next action accordingly. GNNs allow us to compute an embedding for every variable, based on the occurrences of that variable in the given formula, instead of learning an embedding that is shared across all formulas. Based on this embedding, we then use a policy network to predict the quality of each variable (or literal), and choose the next action accordingly. A minimal sketch of this graph construction is shown below.
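The sketch below (ours; the DIMACS-style clause encoding and the row layout for literals are assumptions) builds the literal-clause graph as a biadjacency matrix, matching the graph view described above and made precise in Section 4.1.

```python
import numpy as np

def formula_to_graph(clauses, num_vars):
    """Literals and clauses are nodes; a literal is connected to every clause
    it occurs in. Clauses are lists of nonzero integers: v for a variable's
    positive literal, -v for its negation. Returns a (2n, m) matrix over {0, 1}."""
    def row(lit):
        # literals of variable v occupy rows 2*(v-1) and 2*(v-1)+1
        return 2 * (abs(lit) - 1) + (1 if lit < 0 else 0)

    A = np.zeros((2 * num_vars, len(clauses)), dtype=np.float32)
    for j, clause in enumerate(clauses):
        for lit in clause:
            A[row(lit), j] = 1.0
    return A

# (x ∨ y) ∧ (¬x ∨ y), with x = 1 and y = 2:
A = formula_to_graph([[1, 2], [-1, 2]], num_vars=2)
```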
GNNs also allow us to scale to arbitrarily large formulas with a small and constant number of parameters.\nContributions This paper presents the successful integration of GNNs in a modern automated reasoning algorithm in a reinforcement learning setup.1 Our approach balances the performance penalty incurred by the use of neural networks with the impact that improved heuristic decisions have on the overall reasoning capabilities. The branching heuristic that we learn significantly improves CADET’s reasoning capabilities on the test set of the benchmark, i.e. it solves more formulas within the same resource constraints. This is a huge step towards replacing VSIDS, the dominant branching heuristic in CDCL-based solvers for the last 20 years (Moskewicz et al., 2001b; Eén & Sörensson, 2003; Biere et al., 2009; Lonsing & Biere, 2010; Rabe & Seshia, 2016).\nWe also study the generalization properties of our approach: We show that training a heuristic on small and easy formulas helps us to solve much larger and harder formulas; generalization to formulas from different benchmarks is still limited, though. Further, we provide an open-source learning environment for reasoning in quantified Boolean formulas. The environment includes the ability to verify its own runs, and thereby ensures that the reinforcement learning agent does not merely learn to exploit implementation errors of the environment.\nStructure: After a primer on Boolean logics in Section 2, we define the problem in Section 3, and describe the network architecture in Section 4. We describe our experiments in Section 5, discuss related work in Section 6 and present our conclusions in Section 7." }, { "heading": "2 BOOLEAN LOGICS AND SEARCH ALGORITHMS", "text": "We start by describing propositional (i.e. quantifier-free) Boolean logic. Propositional Boolean logic allows us to use the constants 0 (false) and 1 (true), variables, and the standard Boolean operators like ∧ (“and”), ∨ (“or”), and ¬ (“not”). A literal of variable v is either the variable itself or its negation ¬v. By l̄ we denote the logical negation of literal l. We call a disjunction of literals a clause and say that a formula is in conjunctive normal form (CNF), if it is a conjunction of clauses. For example, (x ∨ y) ∧ (¬x ∨ y) is in CNF. It is well known that any Boolean formula can be transformed into CNF. It is less well known that this increases the size only linearly, if we allow the transformation to introduce additional variables (Tseitin, 1968). We thus assume that all formulas in this work are given in CNF.\nDPLL and CDCL. The satisfiability problem of propositional Boolean logics (SAT) is to find a satisfying assignment for a given Boolean formula or to determine that there is no such assignment. SAT is the prototypical NP-complete problem and many other problems in NP can be easily reduced to it. The first backtracking search algorithms for SAT are attributed to Davis, Putnam, Logemann, and Loveland (DPLL) (Davis & Putnam, 1960; Davis et al., 1962). Backtracking search algorithms gradually extend a partial assignment until it becomes a satisfying assignment, or until a conflict is reached. A conflict is reached when the current partial assignment violates one of the clauses and hence cannot be completed to a satisfying assignment. A minimal sketch of this search scheme follows.
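The following DPLL-style sketch is ours and is heavily simplified (unit propagation, clause learning, and the clever data structures discussed below are omitted); the line marked as the branching point is exactly the decision that this paper learns.

```python
def dpll(clauses, assignment=None):
    """clauses: lists of integer literals (v or -v); returns a satisfying
    assignment (dict: variable -> bool) or None if none exists."""
    assignment = assignment or {}
    remaining = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause if abs(l) in assignment):
            continue                              # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None                           # conflict: clause violated
        remaining.append(rest)
    if not remaining:
        return assignment                         # all clauses satisfied
    v = abs(remaining[0][0])                      # the branching heuristic
    for value in (True, False):                   # try both polarities
        result = dpll(remaining, {**assignment, v: value})
        if result is not None:
            return result
    return None                                   # backtrack
```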
In case of a conflict, the search has to backtrack and continue in a different part of the search tree.\nConflict-driven clause learning (CDCL) is a significant improvement over DPLL due to Marques-Silva and Sakallah (Marques-Silva & Sakallah, 1997). CDCL combines backtracking search with clause learning. While DPLL simply backtracks out of conflicts, CDCL “analyzes” the conflict by performing a couple of resolution steps. Resolution is an operation that takes two existing clauses $(l_1 \vee \dots \vee l_n)$ and $(l'_1 \vee \dots \vee l'_m)$ that contain a pair of complementary literals $l_1 = \neg l'_1$, and derives the clause $(l_2 \vee \dots \vee l_n \vee l'_2 \vee \dots \vee l'_m)$. Conflict analysis adds new clauses over time, which cuts off large parts of the search space and thereby speeds up the search process.\nSince the introduction of CDCL in 1997, countless refinements of CDCL have been explored, and clever data structures have improved its efficiency significantly (Moskewicz et al., 2001a; Eén & Sörensson, 2003; Goldberg & Novikov, 2007). Today, the top-performing SAT solvers, such as Lingeling (Biere, 2010), CryptoMiniSat (Soos, 2014), Glucose (Audemard & Simon, 2014), and MapleSAT (Liang et al., 2016), all rely on CDCL, and they solve formulas with millions of variables for industrial applications such as bounded model checking (Biere et al., 2003).\n1An earlier version of this work was published on https://arxiv.org/abs/1807.08058\nQuantified Boolean Formulas. QBF extends propositional Boolean logic by quantifiers, which are statements of the form “for all x” (∀x) and “there is an x” (∃x). The formula ∀x. ϕ is true if, and only if, ϕ is true if x is replaced by 0 (false) and also if x is replaced by 1 (true). The semantics of ∃ arises from ∃x. ϕ = ¬∀x. ¬ϕ. We say that a QBF is in prenex normal form if all quantifiers are in the beginning of the formula. W.l.o.g., we will only consider QBF that are in prenex normal form and whose propositional part is in CNF. Further, we assume that for every variable in the formula there is exactly one quantifier in the prefix. An example QBF in prenex CNF is ∀x. ∃y. (x ∨ y) ∧ (¬x ∨ y). We focus on 2QBF, a subset of QBF that admits only one quantifier alternation. W.l.o.g. we can assume that the quantifier prefix of formulas in 2QBF consists of a sequence of universal quantifiers $\forall x_1 \dots \forall x_n$, followed by a sequence of existential quantifiers $\exists y_1 \dots \exists y_m$. While 2QBF is less powerful than QBF, we can encode many interesting applications from verification and synthesis, e.g. program synthesis (Solar-Lezama et al., 2006; Alur et al., 2013). Solvers for (2)QBF typically address the decision problem to determine the truth of a given formula (TQBF). After the success of CDCL for SAT, CDCL-like algorithms have been explored for QBF as well (Giunchiglia et al., 2001; Lonsing & Biere, 2010; Rabe & Seshia, 2016; Rabe et al., 2018). We focus on CADET, a solver that implements Incremental Determinization, a generalized CDCL backtracking search algorithm (Rabe & Seshia, 2016; Rabe et al., 2018). Instead of considering only Booleans as values, the Incremental Determinization algorithm assigns and propagates on the level of Skolem functions. For the purpose of this work, however, we do not have to dive into the details of Incremental Determinization and can consider it simply as some backtracking algorithm.\nCorrectness. Writing performant code is an error-prone task, and correctness is critical for many applications of formal reasoning.
Some automated reasoning tools hence have the ability to produce proofs, which can be checked independently. CADET is one of the few QBF solvers that can produce proofs without runtime overhead. We believe that the ability to verify results of solvers is particularly crucial for learning applications, as it allows us to ensure that the reinforcement learning algorithm does not simply exploit implementation errors (bugs) in the environment." }, { "heading": "3 PROBLEM DEFINITION", "text": "In this section, we first revisit reinforcement learning and explain how it maps to the setting of logic solvers. In reinforcement learning, we consider an agent that interacts with an environment E which is modeled as a Markov Decision Process (MDP) over discrete time steps and accumulates reward. Formally, an MDP is a 4-tuple of a set of states S, an action space A, transition probabilities P, and a reward function R. A policy is a mapping from states to probability distributions over the actions, $\pi : S \to \mathrm{dist}(A)$. The goal of the agent is to maximize the expected (possibly discounted) reward accumulated over the episode; formally, $J(\pi) = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t r_t \mid \pi\right]$. In our setting, the environment E is the solver CADET (Rabe & Seshia, 2016). The environment is deterministic except for the initial state, where a formula is chosen randomly from a distribution. At each time step, the agent gets an observation, which consists of the formula and the solver state. Only those variables that do not have a value yet are valid actions, and we assume that the observation includes the set of available actions. The agent then selects one action from the subset of the available variables. Formally, the space of actions is the set of all variables in all possible formulas in all solver states, where at every state only a small finite number of them is available. Practically, the agent will only ever see a tiny fraction of these actions, and so it must generalize to unseen actions. An episode is the result of the interaction of the agent with the environment. We consider an episode to be complete, if the solver reaches a terminating state in the last step. As there are arbitrarily long episodes, we want to abort them after some step limit (the decision limit) and consider these episodes as incomplete." }, { "heading": "3.1 BASELINES", "text": "While there are no competing learning approaches yet, human researchers and engineers have tried many heuristics for selecting the next variable. VSIDS is the best known heuristic for the solver we consider. It has been a dominant heuristic for SAT and several CDCL-based QBF algorithms for over 20 years now (Moskewicz et al., 2001b; Eén & Sörensson, 2003; Biere et al., 2009; Lonsing & Biere, 2010; Rabe & Seshia, 2016). We therefore consider VSIDS as the main baseline. VSIDS
}, { "heading": "4 THE NEURAL NETWORK ARCHITECTURE", "text": "Our model gets an observation, consisting of a formula and the state of the solver, and selects one of the formula’s literals (= a variable and a Boolean value) as its action. The model has two components: An encoder that produces an embedding for every literal, and a policy network that that rates the quality of each literal based on its embedding. We give an overview of the architecture in Fig. 1, describe the GNN in Subsection 4.1 and the policy network in Subsection 4.2." }, { "heading": "4.1 A GNN ENCODER FOR BOOLEAN FORMULAS", "text": "In order to employ GNNs, we view the formula as a graph, where each clause and each literal is a node (see Fig. 2. For each literal in each clause, we draw an edge between their nodes.\nThe resulting graph is bipartite and hence, we represent its edges as an 2n ×m adjacency matrix A with values in {0, 1}, where 2n is the number of literals and m is the number of clauses. This graph structure determines the semantics of the formula except for the quantification of variables (i.e. whether a variable is universally or existentially quantified), which are provided as labels to the variables. For each variable v, the variable label v ∈ RλV , with λV = 7, indicates whether the variable is universally or existentially quantified, whether it currently has a value assigned, and whether it was selected as a decision variable already on the current search branch. We use the variable label for both of its literals and by vl we denote\nthe label of the variable of l. For each clause c, the clause label c ∈ R is a single scalar (in {0, 1}), indicating whether the clause was original or derived during conflict analysis.\nWhile we are ultimately only interested in embeddings for literals, our GNN also computes embeddings for clauses as intermediate values. Literal embeddings have dimension δL = 16 and clause embeddings have dimension δC = 64. The GNN computes the embeddings over τ rounds. We define\nthe initial literal embedding as l0 = 0, and for each round 1 ≤ t ≤ τ , we define the literal embedding lt ∈ RδL for every literal l and the clause embedding ct ∈ RδC for every clause c ∈ C as follows:\nct = ReLU (∑ l∈cWL[v > l , l > t−1, l̄ > t−1] + BL ) , and lt = ReLU (∑ c,l∈cWC [c >, c>t ] + BC ) .\nThe trainable parameters of our model are indicated as bold capital letters. They consist of the matrix WL of shape (2δL + λV , δC), the vector BL of dimension δC , the matrix WC of shape (δC + λC , δL), and the vector BC of dimension δL.\nInvariance properties. The meaning of a formula in CNF is invariant under permutations of its clauses and of literals within each clause due to the commutativity of conjunction and disjunction. Our GNN architecture is invariant under these reorderings, as both conjunctions and disjunctions are computed through commutative operations (a sum), and, therefore, it cannot accidentally overspecialize to the ordering of clauses or literals. Swapping the literals of a variable does not change the truth of the formula either, and our GNN architecture respects that as well. The only place in our architecture where we use the information of which literals belong to the same variable is in the input to ct. Depending on which literal of a variable occurs in the clause we order its literal embeddings differently. Lastly, note that variables are completely nameless in our representation." 
}, { "heading": "4.2 POLICY NETWORK", "text": "The policy network predicts the quality of each literal based on the literal embedding and the global solver state. The global solver state is a collection of λG = 5 values that include only the essential parts of solver state that are not associated with any particular variable or clause. We provide additional details in Appendix A. The policy network thus maps the final literal embedding [v>l , l > τ , l̄ > τ ] concatenated with the global solver state to a single numerical value indicating the quality of the literal. The policy network thus has λV + 2δL + λG inputs, which are followed by two fully-connected layers. The two hidden layers use the ReLU nonlinearity. We turn the predictions of the policy network into action probabilities by a masked softmax. We mask all “illegal” actions, effectively ignoring the embeddings of variables which are universal, or are assigned already.\nNote that the policy network predicts a score for each literal independently. All information about the graph that is relevant to the policy network must hence flow through the literal embedding. Since we experimented with graph neural networks with few iterations this means that the quality of each literal is decided locally. The rationale behind this design is that it is simple and efficient." }, { "heading": "5 EXPERIMENTS", "text": "We conducted several experiments to examine whether we can improve the heuristics of the logic solver CADET through our deep reinforcement learning approach. 2\nQ1 Can we learn to predict good actions for a family of formulas? Q2 How does the policy trained on short episodes generalize to long episodes? Q3 How well does the learned policy generalize to formulas from a different family of formulas? Q4 Does the improvement in the policy outweigh the additional computational effort? That is,\ncan we solve more formulas in less time with the learned policy?" }, { "heading": "5.1 DATA", "text": "In contrast to most other works in the area, we evaluate our approach over a benchmark that (1) has been generated by a third party before the conception of this paper, and (2) is challenging to stateof-the-art solvers in the area. We consider a set of formulas representing the search for reductions between collections of first-order formulas generated by Jordan & Kaiser (2013), which we call Reductions in the following. Reductions is interesting from the perspective of QBF solvers, as its formulas are often part of the QBF competition. It consists of 4608 formulas of varying sizes and with varying degrees of hardness. On average the formulas have 316 variables; the largest formulas in the set have over 1600 variables and 12000 clauses. We filtered out 2573 formulas that are solved\n2We provide the code and data of our experiments at https://github.com/lederg/learningqbf.\nwithout any heuristic decisions. In order to enable us to answer question 2 (see above), we further set aside a test set of 200 formulas, leaving us with a training set of 1835 formulas.\nWe additionally consider the 2QBF evaluation set of the annual competition of QBF solvers, QBFEVAL (Pulina, 2016). This will help us to study cross-benchmark generalization." }, { "heading": "5.2 REWARDS AND TRAINING", "text": "We jointly train the encoder network and the policy network using REINFORCE (Williams, 1992). For each batch we sample a formula from the training set, and generate b episodes by solving it multiple times. 
In each episode, we run CADET for up to 400 steps using the latest policy. Then we assign rewards to the episodes and estimate the gradient. We apply standard techniques to improve the training, including gradient clipping, normalization of rewards, and whitening of input data.

We assign a small negative reward of −10⁻⁴ for each decision to encourage the heuristic to solve each formula in fewer steps. When a formula is solved successfully, we assign reward 1 to the last decision. In this way, we effectively treat unfinished episodes (> 400 steps) as if they take 10000 steps, punishing them strongly." }, { "heading": "5.3 RESULTS", "text": "We trained the model described in Section 4 on the Reductions training set. We denote the resulting policy Learned and present the aggregate results in Figure 3 as a cactus plot, as usual for logic solvers. The cactus plot in Figure 3 indicates how the number of solved formulas grows for increasing decision limits on the test set of the Reductions formulas. In a cactus plot, we record one episode for each formula and each heuristic. We then sort the runs of each heuristic by the number of decisions taken in the episode and plot the series. When comparing heuristics, lower lines (or lines reaching further to the right) are thus better, as they indicate that more formulas were solved within a given decision limit.

We see that for a decision limit of 400 (dashed line in Fig. 3, left), i.e. the decision limit during training, Learned solved significantly more formulas than either of the baselines. The advantage of Learned over VSIDS is about as large as that of VSIDS over purely random choices. This is remarkable for the field, and we can answer Q1 positively.

Figure 3 (left) also shows that Learned performs well far beyond the decision limit of 400 steps that was used during its training. Observing the vertical distance between the lines of Learned and VSIDS, we can see that the advantage of Learned over VSIDS even grows exponentially with an increasing decision limit. (Note that the axis indicating the number of decisions is log-scaled.) We can thus answer Q2 positively.

A surprising fact is that small and shallow neural networks already achieved the best results. Our best model uses τ = 1, which means that for judging the quality of each variable, it only looks at the variable itself and its immediate neighbors (i.e. those variables it occurs together with in a constraint). The hyperparameters that resulted in the best model are δ_L = 16, δ_C = 64, and τ = 1, leading to a model with merely 8353 parameters. The small size of our model was also helpful for achieving quick inference times.

To answer Q3, we evaluated the learned heuristic also on our second data set of formulas, from the QBF solver competition QBFEVAL. Random solved 67 formulas, VSIDS solved 125 formulas, and Learned solved 111 formulas. The policy trained on Reductions significantly improved over random choices, but does not beat VSIDS. This is hardly surprising, as our learning approach specialized the solver to a specific (different) distribution of formulas. It must also be taken into account that the solver CADET has been tuned to QBFEVAL over the years, and hence may perform much more strongly on QBFEVAL than on the Reductions benchmark. We include further cross-benchmark generalization studies in the Appendix.

To answer our last question, Q4, we compare the runtime of CADET with our learned heuristic to that of CADET with the standard VSIDS heuristic.
In Fig. 3 (right) we see that for small time limits (up to 10 seconds), VSIDS still solves more formulas than the learned heuristic. But for higher time limits, the learned heuristic starts to outperform VSIDS. For a time limit of 1 hour, we solved 120 formulas with the learned heuristic, while only 110 formulas were solved with VSIDS (see top right corner). Conversely, for solving 110 formulas the learned heuristic required a timeout of less than 12 minutes, while VSIDS took an hour. Furthermore, our learning and inference implementation is written in Python and is not particularly optimized. The NN agent runs in a different process from CADET and incurs an overhead per step for inter-process communication and context switches, which is enormous compared to the pure C implementation of CADET using VSIDS. This overhead could easily be reduced, and so we expect the advantage of our approach to grow." }, { "heading": "6 RELATED WORK", "text": "Independently from our work, GNNs for Boolean logic have been explored in NeuroSAT (Selsam et al., 2018), where the authors use them to solve the SAT problem directly. While using a similar neural architecture, their network is not integrated in a state-of-the-art logic solver and does not improve the state of the art in performance. Selsam & Bjørner (2019) recently extended NeuroSAT to use its predictions in a state-of-the-art SAT solver. In contrast to their work, we integrate GNNs much more tightly into the solver and train the heuristics directly through reinforcement learning, thus allowing deep learning to take direct control of the solving process. Also, we focus on QBF instead of SAT, which strongly affects the runtime tradeoffs between spending time on “thinking” about a better decision versus executing many “stupid” decisions.

Amizadeh et al. (2019) suggest an architecture that solves circuit-SAT problems. Unlike NeuroSAT, and similar to our approach, they train their model directly to find a satisfying assignment by using a differentiable “soft” satisfiability score as their loss. However, like NeuroSAT, their approach aims to solve the problem from scratch, without leveraging an existing solver, and so it is difficult to scale to state-of-the-art performance. They hence focus on small random problems. In contrast, our approach improves the performance of a state-of-the-art algorithm. Furthermore, our learned heuristic applies to SAT and UNSAT problems alike.

Yang et al. (2019) extended the NeuroSAT architecture to 2QBF problems. In contrast to our work, they do not embed their GNN model in a modern DPLL solver, and instead try to predict good counter-examples for a CEGAR solving approach. They focus on formulas with 18 variables, which are trivial for state-of-the-art solvers. Chen & Yang (2019) showed that a pure GNN approach is unable to solve Boolean formulas when they are unsatisfiable, which in our work is addressed by combining GNNs with a logical reasoning engine.

Reinforcement learning has been applied to other logical reasoning tasks. Kaliszyk et al. (2018) recently explored learning linear policies for tableaux-style theorem proving. Kurin et al. (2019) follow an approach similar to ours for SAT solvers, but only evaluate on small synthetic formulas and do not improve the overall performance of the underlying SAT solver. Kusumoto et al. (2018) applied reinforcement learning to propositional logic in a setting similar to ours, except that we employ learning within existing strong solving algorithms, leading to much better scalability.
Balunovic et al. (2018) use deep reinforcement learning to improve the application of high-level strategies in SMT solvers, but do not investigate a tighter integration of deep learning with logic solvers. Other works on combinatorial search have also explored the use of GNNs (some trained with reinforcement learning) for problems such as random SAT (Yolcu & Póczos, 2019), graph coloring (Huang et al., 2019), and MILP (Gasse et al., 2019).

Most previous approaches that applied neural networks to logical formulas used LSTMs or tree-shaped models over the syntax trees of formulas (Bowman et al., 2014; Irving et al., 2016; Allamanis et al., 2016; Loos et al., 2017; Evans et al., 2018; Chvalovsky, 2019; Chen & Tian, 2018) or classical ML models (Gauthier et al., 2017; Kaliszyk et al., 2018; Soos et al., 2019). Instead, we suggest a GNN approach, based on a graph view of formulas in CNF. Recent work suggests that GNNs are a promising architecture for logic (Paliwal et al., 2019; Wang et al., 2017). Bansal et al. (2019), Huang et al. (2018), and Yang & Deng (2019) provide learning environments around interactive theorem provers.

Other competitive QBF algorithms include expansion-based algorithms (Biere, 2004; Pigorsch & Scholl, 2010), CEGAR-based algorithms (Janota & Marques-Silva, 2011; 2015; Rabe & Tentrup, 2015), circuit-based algorithms (Klieber, 2012; Tentrup, 2016; Janota, 2018a;b), and hybrids (Janota et al., 2012; Tentrup, 2017). Recently, Janota (2018a) successfully explored the use of (classical) machine learning techniques to address the generalization problem in QBF solvers." }, { "heading": "7 CONCLUSIONS", "text": "We presented an approach to improve the heuristics of a backtracking search algorithm for Boolean logic through deep reinforcement learning. Our approach brings together the best of two worlds: the superior flexibility and performance of the intuitive reasoning of neural networks, and the ability to explain (prove) results in formal reasoning. The setting is new and challenging for reinforcement learning: QBF is a very general, combinatorial problem class, featuring an unbounded input size and action space. We demonstrate that these challenges can be overcome, and we reduce the overall execution time of a competitive QBF solver by a factor of 10 after training on similar formulas.

This work demonstrates the huge potential that lies in the tight integration of deep learning and logical reasoning algorithms, and hence motivates more aggressive research efforts in the area. Our experiments suggest two challenges that we want to highlight: (1) We used very small neural networks, and, counterintuitively, larger neural networks were not able to improve over the small ones in our experiments. (2) The performance overhead due to the use of neural networks is large; however, we think that with more engineering effort this overhead could be significantly reduced.

Acknowledgements. This work was supported in part by National Science Foundation (NSF) grants CNS-1836601, CNS-1446619, CNS-1739816, and CCF-1837132, by the iCyPhy center, and by Berkeley Deep Drive. The second author was affiliated with UC Berkeley during the initial part of this work." }, { "heading": "A GLOBAL SOLVER STATE", "text": "1. Current decision level
2. Number of restarts
3. Restarts since the last major restart
4. Conflicts until the next restart
5. Ratio of variables that already have a Skolem function to the total number of variables. The formula is solved when this ratio reaches 1."
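Sketched as a container in Python (the field names are ours, not CADET's):

```python
from dataclasses import dataclass

@dataclass
class GlobalSolverState:
    """The lambda_G = 5 scalar features that are concatenated to every
    literal embedding before the policy network (names are illustrative)."""
    decision_level: int
    num_restarts: int
    restarts_since_major: int
    conflicts_until_restart: int
    skolem_ratio: float   # variables with a Skolem function / total; 1.0 = solved
```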
}, { "heading": "B LITERAL LABELS", "text": "Here we describe the details of the variable labels presented to the neural network described in Section 4. The vector v consists of the following 7 values:\ny0 ∈ {0, 1} indicates whether the variable is universally quantified, y1 ∈ {0, 1} indicates whether the variable is existentially quantified, y2 ∈ {0, 1} indicates whether the variable has a Skolem function already, y3 ∈ {0, 1} indicates whether the variable was assigned constant True, y4 ∈ {0, 1} indicates whether the variable was assigned constant False, y5 ∈ {0, 1} indicates whether the variable was decided positive, y6 ∈ {0, 1} indicates whether the variable was decided negative, and" }, { "heading": "C THE QDIMACS FILE FORMAT", "text": "QDIMACS is the standard representation of quantified Boolean formulas in prenex CNF. It consists of a header “p cnf <num_variables> <num_clauses>” describing the number of variables and the number of clauses in the formula. The lines following the header indicate the quantifiers. Lines starting with ‘a’ introduce universally quantified variables and lines starting with ‘e’ introduce existentially quantified variables. All lines except the header are terminated with 0; hence there cannot be a variable named 0. Every line after the quantifiers describes a single clause (i.e. a disjunction over variables and negated variables). Variables are indicated simply by an index; negated variables are indicated by a negative index. Below give the QDIMACS representation of the formula ∀x. ∃y. (x ∨ y) ∧ (¬x ∨ y): p c n f 2 2 a 1 0 e 2 0 1 2 0 −1 2 0\nThere is no way to assign variables strings as names. The reasoning behind this decision is that this format is only meant to be used for the computational backend." }, { "heading": "D HYPERPARAMETERS AND TRAINING DETAILS", "text": "We trained a model on the reduction problems training set for 10M steps on an AWS server of type C5. We trained with the following hyperparameters, yet we note that training does not seem overly sensitive:\n• Literal embedding dimension: δL = 16\n• Clause embedding dimension: δC = 64 • Learning rate: 0.0006 for the first 2m steps, then 0.0001 • Discount factor: γ = 0.99 • Gradient clipping: 2 • Number of iterations (size of graph convolution): 1 • Minimal number of timesteps per batch: 1200" }, { "heading": "E ADDITIONAL DATASETS AND EXPERIMENTS", "text": "While the set of Reductions-formulas we considered in the main part of the paper was created independently from this paper and is therefore unlikely to be biased towards our approach, one may ask if it is just a coincidence that our approach was able to learn a good heuristic for that particular set of formulas. In this appendix we consider two additional sets of formulas that we call Boolean and Words, and replicated the results from the main part. We show that we can learn a heuristic for a given set/distribution of formulas that outperforms VSIDS by a significant margin.\nBoolean is a set of formulas of random circuits. Starting from a fixed number (8) of Boolean inputs to the circuit, individual AND-gates are added (with randomly chosen inputs with random polarity) up to a certain randomized limit. This circuit is turned into a propositional Boolean formula using the Tseitin transformation, and then a small fraction of random clauses is added to add some irregularities to the circuit. (Up to this point, the process is performed by the fuzz-tester for SAT solvers, FuzzSAT, available here http://fmv.jku.at/fuzzsat/.) 
To turn these propositional formulas into QBFs, we randomly selected 4 variables to be universally quantified. This resulted in a more or less even split of true and false formulas. The formulas have 50.7 variables on average. In Figure 5 we see that training a model on these formulas (we call this model Boolean, like the data set) results in significantly better performance than VSIDS. The advantage of the learned heuristic over VSIDS and Random is smaller compared to the experiments on Reductions in the main part of the paper. We conjecture that this is because these formulas are much easier to begin with, which means that there is not as much potential for improvement.

Words is a data set of random expressions over (signed) bitvectors. The top-level operator is a comparison (=, ≤, ≥, <, >), and the two subexpressions of the comparison are arithmetic expressions. The number of operators and leaves in each expression is 9, and all bitvectors have word size 8. The expressions contain up to four bitvector variables, alternately assigned to be existentially and universally quantified. The formulas are simplified using the circuit synthesis tool ABC and then turned into CNF using the standard Tseitin transformation. The resulting formulas have 71.4 variables on average and are significantly harder for both Random and VSIDS. For example, the first formula from the data set looks as follows: ∀z. ∃x. ((x − z) xor z) ≠ z + 1, which results in a QBF with 115 variables and 298 clauses. This statement happens to be true and is solved with just 9 decisions using the VSIDS heuristic. In Figure 4 we see that training a new model on the Words dataset again results in significantly improved performance. (We named the model Words, after the data set.)

We did not include the formula sets Boolean and Words in the main part, as they are generated by a random process, in contrast to Reductions, which was generated with a concrete application in mind. In the formal methods community, artificially generated sets of formulas are known to differ from application formulas in non-obvious ways." }, { "heading": "F ADDITIONAL EXPERIMENTS ON GENERALIZATION TO LARGER FORMULAS", "text": "An interesting observation that we made is that models trained on sets of small formulas generalize well to larger formulas from similar distributions. To demonstrate this, we generated a set of larger formulas, similar to the Words dataset. We call the new dataset Words30; the only difference to Words is that the expressions have size 30. The resulting formulas have 186.6 variables on average. This time, instead of training a new model, we test the model trained on Words (from Figure 4) on this new dataset.

In Figure 6, we see that the overall hardness (measured in the number of decisions needed to solve the formulas) has increased substantially, but the relative performance of the heuristics is still very similar. This shows that the heuristic learned on small formulas generalizes relatively well to much larger and harder formulas.

In Fig. 3, we already observed that the heuristic also generalizes well to much longer episodes than those it was trained on. We believe that this is due to the “locality” of the decisions we force the network to take: the graph neural network uses just one iteration, so the heuristic is forced to take very local decisions. Not being able to optimize globally, the heuristic has to learn local features that are helpful for solving a problem sooner rather than later.
It seems plausible that this behavior generalizes well to larger formulas (Fig. 6) and much longer episodes (Fig. 3)." }, { "heading": "G ENCODER VARIANTS AND HYPERPARAMETERS", "text": "The encoder described in Subsection 4.1 is by no means the only reasonable choice. In fact, the graph representation described in Fig. 2 is not unique. One could just as well represent the formula as a bipartite graph on variables and clauses, with two types of edges, one for each polarity. The encoder then produces encodings of variables rather than literals, and the propagation along edges is performed with two different learned parameter matrices, one for each edge type. The equations for such an encoder are:

c_t = ReLU( ∑_{v ∈ c} W_V [v⊤, v_{t−1}⊤] + B_V ),  and  v_t = ReLU( ∑_{c : v ∈ c} W_C [c⊤, c_t⊤] + B_C ),

where W_V is one of W_V⁺, W_V⁻ (and similarly, W_C ∈ {W_C⁺, W_C⁻}), depending on the polarity of the occurrence of v in c, and v is the variable's label. Accordingly, we change the policy network to produce two scores per variable embedding v_τ, interpreted as the qualities of assigning this variable positive or negative polarity. In our experiments, this variant of the encoder achieved results comparable to those of the literal-based encoder.

The hyperparameter τ controls the number of iterations within the GNN. Here too, there are several variants of the encoder one could consider. The architecture described in Subsection 4.1, which achieved the reported results, applies the same transformation in every iteration (the matrices W_C, W_L). We also experimented with a variant that uses τ different learned transformations, one per iteration, denoted W_C^t, W_L^t for 1 ≤ t ≤ τ (intuitively, this allows the network to perform a different computation in every iteration). It achieved comparable results, yet with roughly τ times the number of parameters. A version with even more parameters gave the t-th transformation access not only to the (t−1)-th embedding, but to all previous embeddings 1, . . . , t−1, through residual connections. This version also did not achieve significantly better results. To get results with more than one iteration, we had to add a normalization layer between every two iterations. We experimented with both Layer Normalization (Ba et al., 2016) and a GRU cell (Chung et al., 2014), which gave similar results. Adding a 2nd and 3rd iteration achieved only slightly better results when measuring the number of decisions needed to solve a formula, at the cost of more parameters, slower training, and, more importantly, slower inference at runtime. When measuring the number of formulas solved in real time, a single iteration achieved the best results overall. However, given the large overhead of our agent implementation, it is possible that an optimized in-process implementation could still benefit from multiple iterations in the GNN.

It is interesting to point out that when we tested a model with zero iterations, that is, no GNN at all, where the policy network gets to see only the variable labels from the solver, it achieved results that were better than Random and clearly demonstrated learning, but worse than VSIDS, and considerably worse than the results for one iteration. This shows that the 1-hop neighborhood of a variable contains crucial information; we cannot achieve comparable results without considering this local topology of the graph.

Another interesting observation is that the model which achieved the best results did not have access to the variable VSIDS activity scores!
Adding activity scores to the variable feature vectors in fact slightly degraded performance: the model learns faster, but converges to a lower average reward and performs slightly worse on the validation and test sets, especially on the harder problems. We hypothesize that this is because the model learns to rely on the activity scores, which are quite different in harder (longer) episodes and lie outside the range seen during training. Furthermore, it shows that it is possible to obtain a heuristic which performs better than VSIDS without even computing activities!

The results for the different variants can be seen in Figure 7." } ]
2020
null